Security Testing - Design Guidelines for Secure Web Applications
I would suggest reading this article before conducting a security test. It will help you understand how and where to check for application vulnerabilities. You can create a checklist and record whether developers are following the practices mentioned below.
Web applications present designers and developers with many challenges. The stateless nature of HTTP means that tracking per-user session state becomes the responsibility of the application. As a precursor to this, the application must be able to identify the user by using some form of authentication. Given that all subsequent authorization decisions are based on the user's identity, it is essential that the authentication process is secure and that the session-handling mechanism used to track authenticated users is equally well protected. Designing secure authentication and session management mechanisms is just one of the issues facing Web application designers and developers. Other challenges occur because input and output data pass over public networks. Preventing parameter manipulation and the disclosure of sensitive data are other top issues.
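The session-handling concern above can be made concrete with a small sketch. This is a minimal illustration only, not any framework's actual API; `issue_session_token` and `session_cookie_header` are hypothetical names invented here:

```python
import secrets

def issue_session_token() -> str:
    # 32 random bytes (256 bits of entropy) in a URL-safe encoding,
    # so the session identifier cannot be guessed or enumerated.
    return secrets.token_urlsafe(32)

def session_cookie_header(token: str) -> str:
    # Secure: only sent over HTTPS; HttpOnly: hidden from page scripts;
    # SameSite=Strict: not attached to cross-site requests.
    return f"session={token}; Secure; HttpOnly; SameSite=Strict; Path=/"

header = session_cookie_header(issue_session_token())
```

The point of the cookie attributes is that a strong token is worthless if it leaks: `Secure` and `HttpOnly` close the two most common leakage paths (plain-HTTP transit and cross-site scripting).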
Attaching a screenshot that shows the vulnerable places a hacker would try to exploit.
Link to Microsoft site
---
Tuesday, August 24, 2010
Monday, August 23, 2010
Hacking - Change Your MAC & IP Address
Your MAC address and IP address are crucial in identifying your machine on the internet.
If you are able to mask this data, it will not be possible to trace you back.
Hackers disguise this data when performing an attack.
All IP traffic is sent in the form of packets. Attaching a packet screenshot that shows the MAC and IP addresses of the source and destination machines; these are the details you would need to change to hide yourself.
Following are some ways of changing them; you can search Google for more.
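As one hedged illustration of the MAC side, the helper below generates a random locally administered unicast MAC address, which is the kind of address MAC-spoofing tools typically assign. Actually applying it would still require an OS-level command (for example `ip link set dev eth0 address <mac>` on Linux, run with root privileges); the function here only builds a valid address string:

```python
import random

def random_local_mac() -> str:
    # First octet: set the "locally administered" bit (0x02) and clear
    # the multicast bit (0x01), so the result is a valid unicast MAC
    # that cannot collide with a manufacturer-assigned address.
    first = (random.randint(0x00, 0xFF) | 0x02) & 0xFE
    rest = [random.randint(0x00, 0xFF) for _ in range(5)]
    return ":".join(f"{octet:02x}" for octet in [first] + rest)

mac = random_local_mac()
```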
---
Saturday, August 21, 2010
Security Testing - Enabling HTTPS doesn’t mean your site is secure
Many people have the wrong assumption that if a site uses HTTPS, it is very secure.
HTTPS protects data in transit over the network (the Internet), but it does not protect the data before it is sent or after it arrives at the destination.
Exploiting this gap, hackers can still attack the server's behavior using HPP attacks, SQL injection, cross-site scripting, and so on.
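A small sketch of why transport encryption alone is not enough: the sqlite3 example below sends the same attacker-controlled value through a concatenated query and a parameterized one. Whether the value arrived over HTTPS makes no difference to the server-side handling. The table and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

payload = "' OR '1'='1"  # attacker-supplied value; HTTPS delivers it intact

# Vulnerable: string concatenation lets the payload rewrite the query
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + payload + "'").fetchall()

# Safe: the driver treats the payload strictly as data, never as SQL
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (payload,)).fetchall()
```

The concatenated query matches every row, while the parameterized query matches none; the fix is server-side input handling, not the transport.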
Screenshot source: SSL and TLS Essentials: Securing the Web, by Stephen A. Thomas (Wiley)
---
Wednesday, August 18, 2010
VB Script - List out all the files in a folder and its sub-folders (Recursive function)
The following VBScript program lists all the files (including those in sub-folders) in a selected folder. It creates a text file, "AllFilesList.txt", in the selected folder, containing each folder path followed by its list of files.
Attaching the VBScript code and a screenshot of the generated text file.
(To execute the program, copy the code and save it as a .vbs file.)
---
Set objFSO = CreateObject("Scripting.FileSystemObject")
objStartFolder = "C:\Automation FrameWork\" 'Change the folder path as per your requirement.
Set oNotepad = objFSO.CreateTextFile(objStartFolder & "AllFilesList.txt") 'Output file, created in the folder above.
Set objFolder = objFSO.GetFolder(objStartFolder)

'List the files in the root folder first.
For Each objFile In objFolder.Files
    oNotepad.WriteLine objFile.Name
    oNotepad.WriteLine 'Blank line between entries.
Next

'Then walk every sub-folder recursively.
ShowSubFolders objFolder
oNotepad.Close
WScript.Echo "Saved Successfully"

Sub ShowSubFolders(Folder)
    For Each Subfolder In Folder.SubFolders
        'Write the folder path as a header, then its files.
        oNotepad.WriteLine Subfolder.Path
        For Each objFile In objFSO.GetFolder(Subfolder.Path).Files
            oNotepad.WriteLine objFile.Name
        Next
        oNotepad.WriteLine "*********************"
        ShowSubFolders Subfolder
    Next
End Sub
---
Friday, August 13, 2010
Security Testing - HPP Attack (HTTP Parameter Pollution)
An HPP attack can be defined as the process of modifying or exploiting a request's POST and URL parameters to change the application's behavior. It is a serious attack that is often underestimated.
It is classified into client-side and server-side attacks.
There are many tools available to perform this attack, but it can be performed more conveniently using NeoLoad, as it exposes parameters, requests, and responses in great detail. It is actually a load-testing tool; you can download the trial version and play with it. It automatically handles sessions and cookies, so you only need to concentrate on tweaking the parameters. Attaching a tool screenshot displaying the parameters, request, and response for a request.
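As a minimal illustration of the underlying idea (using Python's standard query-string parser as a stand-in for a server-side framework), the same polluted query string can yield different values depending on how each layer reads it:

```python
from urllib.parse import parse_qs, parse_qsl

# The same polluted query string, as seen by different layers
qs = "id=5&id=9"

# parse_qs keeps every occurrence of a duplicated parameter
all_values = parse_qs(qs)["id"]         # both '5' and '9' are retained

# Code that naively builds a dict keeps only the LAST occurrence;
# two layers disagreeing on "the" value is exactly what HPP exploits.
last_value = dict(parse_qsl(qs))["id"]
```

If, say, a front-end filter validates the first `id` while the back end acts on the last one, the attacker's duplicated parameter slips through validation.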
The following articles will help you understand HPP better.
Minded Security Blog
Minded Security Blog - Client side attack
HPP attack on Yahoo Mail
---
Tuesday, August 10, 2010
Performance Testing - Setting Think time ZERO, doesn't mean executing the test with more users.
Many people have the wrong assumption that by decreasing the think time it is possible to create more load on the server. When the think time is set to zero, the virtual users are simply running at an unrealistic speed.
The number of Virtual Users must be close to the number of real users once the application is in production, with a realistic think time applied between pages. Avoid testing with less Virtual Users with a minimized think time. It could be assumed that the result would be the same, as the number of requests played per second is identical. However, this is not the case, for the following reasons:
1. The memory burden on the server will be different: Each user session uses a certain amount of memory. If the number of user sessions is underestimated, the server will be running under more favorable conditions than in real-life and the results will be distorted.
2. The number of sockets open simultaneously on the server will be different. An underestimation of user numbers means the maximum threshold for open server sockets cannot be tested.
3. The resource pools (DB Connections) will not be operating under realistic conditions. An inappropriate pool size setting might not be detected during the test.
4. Removing think time can create artificial bottlenecks in your application.
When striving for accuracy, you want to always try to do things MORE like actual users rather than LESS. The only way to do this properly is to try to set all facets of a test to mimic real world traffic.
User Think Time is based upon the distribution with an Average of 7 Seconds and Maximum of 70 Seconds.
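The session-count effect described above can be sketched with Little's Law: concurrent users roughly equal the request rate multiplied by the time each request "occupies" a user (response time plus think time). The numbers below are illustrative only:

```python
def users_needed(req_per_sec: float, resp_time_s: float, think_time_s: float) -> float:
    # Little's Law: concurrency = throughput x time each user occupies a session
    return req_per_sec * (resp_time_s + think_time_s)

# The server sees 100 requests/second either way, but the number of live
# sessions (memory, sockets, pooled connections) differs by a factor of 8:
realistic = users_needed(100, 1.0, 7.0)   # 800 concurrent sessions
zero_think = users_needed(100, 1.0, 0.0)  # only 100 sessions for the same rate
```

This is why matching the request rate with fewer zero-think-time users under-exercises session memory, open sockets, and connection pools.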
Article by Wayne D. Smith, Intel Corporation
----
Sunday, August 8, 2010
NeoLoad Vs Loadrunner 9.50 (Ajax click & Script)
These days I have been working with NeoLoad, so I would like to present a comparison between NeoLoad and LoadRunner.
Link to NeoLoad
Link to Loadrunner
1. Size of the software.
Neoload - 140MB
Loadrunner - 2.2GB
2. Software installation.
NeoLoad - Quick; no dependency on other software, and it does not require restarting the system.
Loadrunner - You need to install .NET, C++ runtimes, etc. before installing the actual software, and you need to restart the system multiple times.
3. Product developed in
NeoLoad - Java
Loadrunner - Microsoft Technologies and C
4. Supported OS
NeoLoad - Windows, Linux, Solaris
Loadrunner - Windows, Unix, Linux
5. Supported Platforms and technologies
Neoload:
Platform- .NET, J2EE
RIA - AJAX, FLEX, SilverLight, GWT, RTMP, Java Serialization, Push Technologies
Web Services: SOAP
ERP: SAP and Oracle Forms
Integration Products: DynaTrace
Link to NeoLoad Technologies
Loadrunner:
It supports even more technologies, except SilverLight, GWT, Java Serialization, Push Technologies, and dynaTrace. The following link contains complete details; HP sells the product in protocol bundles.
6. Scripting Language
Neoload - Scriptless: all GUI driven, so fewer mistakes, and it provides a facility to execute JavaScript for additional functionality.
Attaching Screen shot.
Loadrunner - C Language
7. Script recording method
Neoload - It records each request and response using a proxy server, similar to Fiddler.
Loadrunner - It records user actions at the GUI level. Mercury could implement QTP technology in this protocol using limited object properties.
8. Virtual user execution process
Neoload - It processes each request (in parallel or sequentially); once a request is completed after receiving the response, it does not render the page and proceeds with the next request. You can't access the page DOM.
Loadrunner - It is a hidden browser; it works similarly to a real browser by executing the web page code, so correlation is not required. Naturally it consumes more CPU and memory. For more details follow this link.
9. Virtual user CPU and Memory consumption
NeoLoad - Very low, as there is no rendering or client-script execution.
Loadrunner - High, as there is rendering and client-script execution.
10. Handling third party and custom components.
Neoload - It can handle all the requests, but if a component sends binary data, NeoLoad will only be able to play back what has been recorded, unmodified. If this data contains a session ID or other parameters requiring dynamic replacement, the test will not work.
Loadrunner - Not sure; I had a lot of issues with the FCK editor, UltraWebGrid, modal pop-ups, etc.
10. Price
Neoload - A flat price is available on the web site (select the link); depending on the modules selected, it is less than a third of the LoadRunner price. Flex and dynaTrace are costly; the remaining modules are cheaper.
Loadrunner - It depends on the vendor; they sell it in protocol bundles.
11. AJAX
Neoload - There is a FORK action which can emulate asynchronous calls using multi-threading. I LOVE this functionality.
Loadrunner - There is no specific function to handle AJAX calls; they say it handles them automatically, but the programmer has no control over it.
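As a rough sketch of what such a fork does, the snippet below fires several simulated requests in parallel and waits for all of them, the way an AJAX page issues background calls; `fetch` here is a stand-in for a real HTTP call, not any tool's API:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(path: str) -> str:
    # Stand-in for an HTTP request; the sleep simulates network latency.
    time.sleep(0.05)
    return f"response for {path}"

paths = ["/news", "/stocks", "/weather"]

# Fire all three "requests" at once and collect every response.
with ThreadPoolExecutor(max_workers=len(paths)) as pool:
    results = list(pool.map(fetch, paths))
# Total wall time is roughly one latency, not three: the calls overlap.
```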
12. Handling dynamic content (Parametrization)
NeoLoad - It automatically handles common dynamic content using Framework dynamic parameters; you can add to and update the existing content based on the project requirements.
You can also use a variable extractor for extracting dynamic content by specifying:
a. Left boundary (start string) and right boundary (end string)
b. XPath expression
c. Regular expression
There is a facility to test your expression against the page response to validate it, and you can copy and paste expressions; it is very mature, with little scope for errors. Attaching the screenshot.
Loadrunner - There is automatic correlation (recording settings) and manual correlation using web_reg_save_param; there is no way to test your expression.
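The two extraction styles can be sketched in a few lines of Python; the hidden-field HTML below is invented for illustration:

```python
import re

html = '<input type="hidden" name="csrf" value="a1b2c3d4">'

# Left/right-boundary extraction, as in a variable extractor:
left, right = 'value="', '"'
start = html.index(left) + len(left)
token_boundary = html[start:html.index(right, start)]

# The equivalent regular-expression form (first capture group):
token_regex = re.search(r'value="([^"]*)"', html).group(1)
```

Both approaches recover the same dynamic value; boundaries are simpler to write, while a regex copes better when the surrounding markup varies.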
13. Access page DOM.
NeoLoad - No
Loadrunner - Yes
14. Creating scripts and scenario and analyzing reports.
Neoload - Creating scripts, executing them, and analyzing the reports are all done in a single GUI, and all the files are stored as one project. Awesome design; it has removed the headache of maintaining separate script, scenario, and report files.
Loadrunner - You need to open separate applications for script creation, scenario creation, and analysis. As time passes, maintaining these files is a real challenge.
15. Comparison Report
NeoLoad - You can run two tests and easily find the differences by using comparison report.
Loadrunner - Need to perform comparison manually.
16. Server counters monitoring
NeoLoad - It is easy; you are not required to type any credentials for access, just add your Windows user ID to the "Performance Monitor Users" group on the server machine.
You can also set threshold limits; the system will automatically generate alerts based on those settings.
Loadrunner - You need credentials to access the server counters. For sophisticated monitoring you may need to purchase HP SiteScope.
17. System defined variables
Neoload - The variable change policy is good here; attaching the screenshot.
Loadrunner - The variable change policy needs improvement; attaching the screenshot.
18. GUI design in representing Request/Response
NeoLoad - For each request, users can easily see what the POST and URL parameters are; the request and response are presented in tab format. Attaching the screenshot.
Loadrunner - All the requests and responses are bundled together under the "Generation Log".
19. Number of connections open simultaneously with the remote server per virtual user.
Neoload: Most browsers maintain two connections; we can configure it to any number.
Loadrunner: Not sure.
20. Search and replace the content.
NeoLoad - A very sophisticated "Search and Replace" option; we can target specific content and perform the operation. Attaching the screenshots.
Loadrunner - Normal Search.
21. Validating server response.
How do you validate a server response? (Select the link)
NeoLoad - All the validations can be performed on one screen. Attaching the screenshot.
Loadrunner - You need to use both code and GUI methods.
22. Flag or Mark Content
Neoload - If you are searching for a value and want to know in how many places it is present, you can search for the specific content; these techniques are useful when dealing with dynamic content. You can perform the same operation while validating the script. Attaching a screenshot.
Loadrunner - No such concept.
23. File upload Process
Neoload - Just mention the file path in the POST parameters; the file is automatically copied to all the load generators during test execution.
Loadrunner - Mention the file path and manually copy the file to the specified location on all the load generators.
24. Load generators monitoring
Neoload - It automatically monitors CPU and memory. It is always advisable to monitor the load generators.
Loadrunner - You need to add the counters manually.
25. 32/64 Bit Load Generators
Neoload - It has both 32-bit and 64-bit load generators, allowing better utilization of the hardware.
Loadrunner - Only 32-bit; it works on 64-bit machines, but in 32-bit mode.
26. Graph Template
NeoLoad - It is possible to create a graph template so that graphs are arranged in the same way for all results.
Loadrunner - You add graphs manually for each test.
27. Compare
NeoLoad - You can compare the request and response with the actual recording, and open the response in the browser if required. Attaching the screenshot.
Loadrunner - You can compare only the script; there is no option to compare the request and response. You need to save the response as an .html file and open it in the browser if required.
28. Throughput
NeoLoad - Each request's throughput is automatically captured; see the screenshot above.
Loadrunner - You need to use the function web_get_int_property(HTTP_INFO_DOWNLOAD_SIZE);
29. Security Testing
Neoload - It can be used to perform an HPP attack.
Loadrunner - It can't be used, because we can't manipulate the request parameters.
30. File download process
NeoLoad - The file is downloaded, but not stored on the load generators. It just records the total number of bytes downloaded; there is an option to insert a check point (assertion).
Loadrunner - The actual file is downloaded onto the load generators, which helps us check the integrity of the downloaded file. But this is a complex process requiring extra coding.
31. Calculate page response time.
NeoLoad - It just captures the request and response times. It can't calculate the page rendering time; you need to run a QTP or Selenium test in parallel to capture the TRUE page response time.
How to capture page response time using QTP?
How to capture page response time using Selenium?
Loadrunner - It captures the TRUE page response time, which includes the request, response, and page rendering times.
32. Creating transactions
NeoLoad - There is no facility to create start and end transactions to capture the response time. Every response time is automatically recorded using a continuous timer, and you need to group the requests into "containers" to capture the aggregate time. I feel NeoLoad needs to improve here: I can't present the results directly, and need to copy them into an Excel file in order to present them. Also, there is no raw data provided if we want to know the response time of each transaction; you just have to rely on NeoLoad's computation of the average and 90th percentile.
Loadrunner - The summary report contains all the transactions defined in the code, which are easy to present in a report. There is a facility to download all the transaction response times in Excel format.
33. Parameterization
NeoLoad - All the server-generated dynamic values need to be parameterized. There is automatic parameterization at the technology level (.NET, Java, ...), but this does not solve the problem completely; you need to handle the rest manually.
Loadrunner - No parameterization is required; LoadRunner works like a real browser. Based on the requirements, you may need to capture the response content and parameterize an LR function. This is where LoadRunner dominates other load-testing tools: the same script can be successfully executed in any environment (development, QC, production, each with different URLs and test data) just by changing the URL in one place.
NeoLoad has answered this by implementing the concept of "Servers", where the host name is captured in a variable that can easily be changed when required. To implement this successfully, you need to parameterize the content at a very detailed level, because each environment has different test data and therefore different dynamic values.
35. IP Spoofing
NeoLoad - Yes
Loadrunner - Yes
36. Specific Virtual user Stop/Restart
NeoLoad - No
Loadrunner - Yes
37. WAN Emulators
What are WAN emulators?
NeoLoad - There is no facility to add network effects; it can only limit bandwidth.
Loadrunner - It integrates with the SHUNRA Virtual Enterprise suite to generate network effects.
38. Cloud Support
What is cloud load testing?
NeoLoad - Yes
Loadrunner - Yes.
39. JavaScript editor
NeoLoad - It has an editor where we can check the script for errors.
Loadrunner - No editor; you need to write error-free code. How to write JavaScript?
40. Share data between virtual users
NeoLoad - Shared queues
Loadrunner - Virtual Table Server
41. Customer Support
NeoLoad - Excellent; very cooperative in answering all my questions. We could get a 1-day, 500-user trial version to actually check the tool's performance.
Loadrunner - I can't rate it.
NeoLoad is one of the BEST tools available in the market for load testing in terms of price, support, and compatibility with Web 2.0.
I would suggest the following improvements for NeoLoad.
1. An option to quickly enable/disable a specific request; I think this would help during debugging.
2. The ability to monitor PASSED/FAILED transaction counts separately (in NeoLoad containers and assertions) during load test execution. These counters quickly assess the health of the test and show how many requests got processed, which is especially useful when requests execute in a loop. We can get this count in the test results after completing the test, but I am more concerned about having it during execution.
3. It is possible to generate load from the cloud through a partnership; it would be good to be able to deploy load generators in the cloud using Amazon EC2 or GoGrid, with NeoLoad running on our premises and only the load generators deployed in the cloud.
4. Currently the tool is able to capture the response size; it would be better if it could also display the request size.
5. It is possible to capture the page response time by grouping all the requests into a container, but it would be better to use start and end transactions to capture the response time. If there are multiple scripts with the same login procedure, I could define the same transaction name in both scripts and get aggregated results instead of two separate login response times. It would also be great if the transaction values were dynamically updated during the test.
6. While processing the dynamic parameters there is a progress bar which shows that the system is busy identifying the content, but it does not show what percentage is completed. It would be better to show the percentage completed (e.g., 100 of 500 requests processed); this would be useful when the script is big.
7. In some cases, the NeoLoad algorithm gets confused while recording an HTTP 302 response, and the code needs to be altered manually.
8. In some cases, a While or Loop statement followed by an If condition does not work. It works fine when we check the code through virtual-user validation, but the same code does not work when run in a scenario.
9. The ability to automatically add the variable "context.variableManager.setValue("computedVar",computedValue);" to the variable picker, so that we get it in auto-suggest.
10. A provision for sending messages to the NeoLoad console during test execution. If you are running the test with 100 users and want to know what the Vusers are doing, you could send small messages to the console such as "In login page", "Updating", "Deleting", or "Logged in with 1test.com". This is "lr_vuser_status_message" in LoadRunner. Attaching a screenshot for more clarity.

11. The ability to provide raw values, i.e. a list of the transaction response times and statuses that can be downloaded in Excel format, so that we can compute the averages manually if required. Currently NeoLoad computes the average values and displays them on screen; what is the guarantee that it is calculating them correctly, and where is the detailed investigation of each transaction? I think you are building a lightweight controller, and all these things would make it heavy.
12. A facility to display values in an output console (a small window to display values, just like a log file but on the screen) during script validation, especially to check the contents of JavaScript variables or other values for debugging purposes; similar to the "Print" utility in HP QuickTest Professional. To overcome this limitation I am using a dirty workaround.
13. I have not seen any option in NeoLoad where I can define the maximum time a request should wait for a response.
14. The ability to import/export variable regular expressions from the "Advanced Parameters" window. You have provided the provision to add them to the framework, where they apply globally, but there are instances where I need them on a few pages only. We can use the copy method, but then I need to create all those regular expressions before copying them to the other required pages. I have seen instances where the same dynamic value appears between different left and right boundaries on different pages; in such cases I feel this would be useful.
15. There are no minimum and maximum value columns in the runtime graphs.
16. A facility to restart/stop a specific virtual user during test execution.
---
These days I have been working with NeoLoad, so I would like to show the comparison between NeoLoad and Loadrunner.
Link to NeoLoad
Link to Loadrunner
1. Size of the software.
Neoload - 140MB
Loadrunner - 2.2GB
2. Software installation.
NeoLoad - Quick, no dependency on other software and not require to restart the system.
Loadrunner - Need to install .Net, c++... before installing the actual software and need to restart the system multiple times.
3. Product developed in
NeoLoad - Java
Loadrunner - Microsoft Technologies and C
4. Supported OS
NeoLoad - Windows, Linux, Solaris
Loadrunner - Windows, Unix, lunix
5. Supported Platforms and technologies
Neoload:
Platform- .NET, J2EE
RIA - AJAX, FLEX, SilverLight, GWT, RTMP, Java Serialization, Push Technologies
Web Services: SOAP
ERP: SAP and Oracle Forms
Integration Products: DynaTrace
Link to NeoLoad Technologies
Loadrunner:
It support even more technologies, except SilverLight, GWT, Java Serialization, Push Technologies, dynaTrace. Following link contain complete details, HP sell the product in protocol bundle.
6. Scripting Language
Neoload - Scriptless - All GUI driven, so less mistakes and provide facility to execute java script for additional functionality.
Attaching Screen shot.
Loadrunner - C Language
7. Script recording method
Neoload - It record each request and response by using a proxy server, similar to Fiddler.
Loadrunner - It records user actions at GUI level. Mercury could implement QTP technology in this protocol by using limited object properties.
8. Virtual user execution process
Neoload - Process each request (Parallel or sequentially), once request is completed after receiving the response, it doesn't render and proceed with the next request. You can't access page DOM.
Loadrunner - It is a hidden browser, it works similar to the real browser by executing the web page code, so correlation is not required . Naturally consume more CPU and memory. For more details follow this link
9. Virtual user CPU and Memory consumption
NeoLoad - Very less as there is no rendering and client script execution.
Loadrunner - High as there is rendering and client script execution.
10. Handling third party and custom components.
Neoload - It can handle all the requests, but If the component send binary data, NeoLoad will only be able to play back what has been recorded unmodified. If this data contains session ID or other parameters requiring dynamic replacement, the test will not work.
Loadrunner - Not sure, I had lot of issues with FCK editor, UltraWebGrid, model pop-up...
10. Price
Neoload - Flat Price available on the web site (Select the Link), less than 1/3 of the loadrunner depending on the modules selected. Flex and dynaTrace are costly, remaining modules are less.
Loadrunner - It depends on the vendor, they sell in protocol bundle.
11. AJAX
Neoload - There is FORK action which can emulate asynchronous calls by using multi threading. I LOVE this functionality.
Loadrunner - There is no specific function to handle AJAX calls, they say it automatically handle, programmer has no control over it.
12. Handling dynamic content (Parametrization)
NeoLoad -It automatically handle common dynamic content using Framework dynamic parameters, you can add and update existing content based on the project requirement.
You can also use variable extractor for extracting dynamic content by specifying
a. Left Boundary(Start String) and Right Boundary( End String)
b. Xpath expression.
b. Regular expression.
There is a facility to test the dynamic content on the page response to validate your expression, also we can copy and paste the expressions, very matured no scope for errors. Attaching the screen shot.
Loadrunner - There is automatic correlation(Recording settings) and manual correlation using
web_reg_save_param, there is no way to test your expression.
13. Access page DOM.
NeoLoad - No
Loadrunner - Yes
14. Creating scripts and scenario and analyzing reports.
Neolaod - Creating, executing and analyzing the reports are done using single GUI, all the files are stored as one project. Awesome design, it has removed headache of maintaining separate script, scenario and report files.
Loadrunner - You need to open separate applications for script creation, scenario creation and analysis. As time pass maintaining these files is a real challenge.
15. Comparison Report
NeoLoad - You can run two tests and easily find the differences by using comparison report.
Loadrunner - Need to perform comparison manually.
16. Server counters monitoring
NeoLoad - It is easy, not required to type any credentials for accessing, just add your windows user id under "Performance Monitor users" group on the server machine.
You can also set threshold limits, system will automatically generate alerts based on the settings.
Loadrunner - Need credentials for accessing the server counters. For sophisticated monitoring you may need to purchase HP-SiteScope.
17. System defined variables
Neolaod - Variable change policy is good here, attaching the screen shot.
Loadrunner - Need to improve the variable change policy, attaching the screen shot.
18. GUI design in representing Request/Response
NeoLoad - For each request users can easily understand what are the POST and URL parameters and request, response is provided in TAB format. Attaching the screen shot.
Loadrunner - All the requests/response are bundled and put under "Generation Log"
19. Number of connections open simultaneously with the remote server per virtual user.
Neoload: Most browsers maintain two connections, we can configure to any number.
Loadrunner: Not sure.
20. Search and replace the content.
NeoLoad - Very sophisticated "Search and Replace" option, we can target specific content and perform the operation. Attaching the screen shots.
Loadrunner - Normal Search.
21. Validating server response.
How to validate server response? (Select the link)
NeoLoad - All the validations can be performed on one screen. Attaching the screen shot.
Loadrunner - Need to use both code and GUI methods.
22. Flag or Mark Content
Neoload - If you are searching for a value and want to know in how many places it is present, you can search for that specific content. These techniques are useful when dealing with dynamic content. You can perform the same operation while validating the script. Attaching screen shot.
Loadrunner - No such concept.
23. File upload Process
Neoload - Just mention the file path in the POST parameters; the file is automatically copied to all the load generators during test execution.
Loadrunner - Mention the file path and manually copy the file to the specified location on all the load generators.
24. Load generators monitoring
Neoload - Automatically monitors CPU and memory. It is always advisable to monitor load generators.
Loadrunner - Counters need to be added manually.
25. 32/64 Bit Load Generators
Neoload - It has both 32- and 64-bit load generators, allowing better utilization of the hardware.
Loadrunner - Only 32-bit; it works on 64-bit machines, but in 32-bit mode.
26. Graph Template
NeoLoad - It is possible to create a graph template, so that graphs are arranged the same way for all results.
Loadrunner - Graphs must be added manually for each test.
27. Compare
NeoLoad - You can compare the request and response with the original recording and open the response in the browser if required. Attaching the screen shot.
Loadrunner - You can compare only the script; there is no option to compare request and response. You need to save the response as an .html file and open it in the browser if required.
28. Throughput
NeoLoad - Each request's throughput is automatically captured; see the screen shot above.
Loadrunner - Need to use the function web_get_int_property(HTTP_INFO_DOWNLOAD_SIZE);
29. Security Testing
Neoload - It can be used to perform an HPP (HTTP Parameter Pollution) attack.
Loadrunner - It can't be used, because we can't manipulate request parameters.
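For context, an HPP (HTTP Parameter Pollution) attack supplies the same parameter more than once and probes which value the server honors. A minimal Python illustration (the `action` parameter and its values are hypothetical examples, not from any tool above):

```python
from urllib.parse import urlencode, parse_qs

# HTTP Parameter Pollution: the same query parameter supplied twice.
# Servers and frameworks disagree on which value "wins" (first, last,
# or both), which is exactly what an HPP probe exploits.
polluted = urlencode([("action", "view"), ("action", "delete")])
print(polluted)  # action=view&action=delete

# Python's parse_qs keeps every occurrence:
print(parse_qs(polluted)["action"])  # ['view', 'delete']
```

A load tool can only send such a request if it lets you edit the raw parameter list, which is the point of the comparison above.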
30. File download process
NeoLoad - The file is downloaded but not stored on the load generators. It just records the total number of bytes downloaded; there is an option to insert a check point (assertion).
Loadrunner - The actual file is downloaded to the load generators, which helps us check the integrity of the downloaded file. But this is a complex process that requires extra coding.
31. Calculate page response time.
NeoLoad - It just captures the request and response time. It can't calculate page rendering time; you need to run a QTP or Selenium test in parallel to capture the TRUE page response time.
How to capture page response time using QTP?
How to capture page response time using Selenium?
Loadrunner - It captures the TRUE page response time, which includes request, response, and page rendering time.
32. Creating transactions
NeoLoad - There is no facility to create start and end transactions to capture response time. Every response time is automatically recorded using a continuous timer, and you need to group requests into "containers" to capture the aggregate time. I felt NeoLoad needs to improve here. I can't present the results directly; I need to copy them into an Excel file in order to present them. Also, no raw data is provided if we want to know the response time of each transaction; we just have to rely on NeoLoad's computation of the average and 90th percentile.
Loadrunner - The summary report contains all the transactions defined in the code, which are easy to present in a report. There is a facility to download all the transaction response times in Excel format.
33. Parameterization
NeoLoad - All server-generated dynamic values need to be parameterized. There is automatic parameterization at the technology level (.NET, Java...), but this will not solve the problem completely; some values need to be handled manually.
Loadrunner - No parameterization is required; LoadRunner works like a real browser. Based on the requirements you may need to capture the response content and parameterize it using LR functions. This is where LoadRunner dominates other load testing tools. The same script can be successfully executed in any environment (development, QC, production, each with different URLs and test data) just by changing the URL in one place.
NeoLoad has answered this by implementing the concept of "Servers", where the host name is captured in a variable that can easily be changed when required. To implement this successfully, the content needs to be parameterized at a very detailed level, because each environment has different test data and therefore different dynamic values.
35. IP Spoofing
NeoLoad - Yes
Loadrunner - Yes
36. Specific Virtual user Stop/Restart
NeoLoad - No
Loadrunner - Yes
37. WAN Emulators
What are WAN emulators?
NeoLoad - There is no facility to add network effects; it can only limit bandwidth.
Loadrunner - It integrates with the Shunra Virtual Enterprise suite to generate network effects.
38. Cloud Support
What is cloud load testing?
NeoLoad - Yes
Loadrunner - Yes.
39. JavaScript editor
NeoLoad - It has an editor where we can check the script for errors.
Loadrunner - No editor; you need to write error-free code. How to write JavaScript?
40. Share data between virtual users
NeoLoad - Shared queues
Loadrunner - Virtual Table Server
41. Customer Support
NeoLoad - Excellent; very cooperative in answering all our questions. We were able to get a 1-day, 500-user trial version to actually check the tool's performance.
Loadrunner - I can't rate.
NeoLoad is one of the BEST tools available in the market for load testing in terms of price, support, and compatibility with Web 2.0.
I would suggest following improvements for NeoLoad.
1. Option to enable/disable a specific request quickly; I think this will help during debugging.
2. Ability to monitor PASSED/FAILED transaction counts separately (in NeoLoad, containers and assertions) during load test execution. These counters quickly assess the health of the test and show how many requests got processed, which is especially useful when requests execute in a loop. We can get this count in the test results after the test completes, but I am more concerned about seeing it during execution.
3. It is possible to generate load from the cloud through a partnership; I would like the ability to deploy load generators on the cloud using Amazon EC2 or GoGrid, with NeoLoad running on our premises and only the load generators deployed on the cloud.
4. Currently the tool captures the response size; it would be better if it could also display the request size.
5. It is possible to capture page response time by grouping all the requests into a container, but it would be better to use start and end transactions to capture the response time. If there are multiple scripts with the same login procedure, I could define the same transaction name in both scripts to get aggregated results instead of two separate login response times. It would also be great if the transaction values were dynamically updated during the test.
6. While processing dynamic parameters there is a progress bar showing that the system is busy identifying content, but it does not show what percentage is completed. It would be better to show the percentage completed (for example, 100 of 500 requests processed); this is useful when the script is large.
7. In some cases the NeoLoad algorithm gets confused while recording an HTTP 302 response, and the code needs to be altered manually.
8. In some cases, a While or Loop statement followed by an If condition does not work. It works fine when we check the code through virtual user validation, but fails when the same code runs in a scenario.
9. Ability to automatically add the variable "context.variableManager.setValue("computedVar",computedValue);" to the variable picker, so that we can get it in auto-suggest.
10. Provision for sending messages to the NeoLoad console during test execution. If you are running a test with 100 users and want to know what the vusers are doing, you could send small messages to the console such as "In Login Page", "Updating", "Deleting", "Logged in as 1test.com"...
This is similar to "lr_vuser_status_message" in LoadRunner. Attaching a screenshot for more clarity.

11. Ability to provide raw values: a list of transaction response times and statuses that can be downloaded in Excel format, so that we can compute averages manually if required. Currently NeoLoad computes the average values and displays them on screen; what is the guarantee that the calculation is correct, and how can we do a detailed investigation of each transaction? I understand you are building a lightweight controller, and all these things would make it heavy.
12. Facility to display values in an output console (a small window displaying values, just like a log file but on screen) during script validation, especially to check the contents of JavaScript variables or other values for debugging. Similar to the "Print" utility in HP QuickTest Professional. To overcome this limitation I am using a dirty workaround.
13. I have not seen any option in NeoLoad where I can define the maximum time a request will wait for a response.
14. Ability to import/export variable regular expressions from the "Advanced Parameters" window. There is a provision to add them to the framework, which applies globally, but there are instances where I need them on a few pages only. We can use the copy method, but then I need to create all those regular expressions before copying them to the other required pages. I have seen instances where the same dynamic value appears between different right and left boundaries on different pages; in such cases this would be useful.
15. There are no minimum and maximum value columns in the runtime graphs.
16. Facility to restart/stop a specific virtual user during test execution.
---
Wednesday, July 7, 2010
Selenium - Functional, Performance testing tool.
Selenium is a powerful open source tool for functional testing of web applications; it is similar to HP QTP (QuickTest Professional).
It provides full control over web pages, allowing access to the page source, DOM elements, and complete navigation (clicks, selecting links, etc.) through different APIs.
Selenium IDE is a Firefox add-on that records user actions as a script, which can then be replayed in the browser. It can also convert the recorded script into different programming languages.
We can integrate Selenium Remote Control (RC) with different programming languages (.NET, Python, Perl, Ruby, Java) and execute the Selenium scripts.
Some companies like BrowserMob and PushToTest use Selenium Grid to conduct performance testing, but it consumes a lot of resources and is not a good choice for performance testing.
Link to All Selenium Projects
It is always worth learning these open source tools; no license fee is required.
----
Tuesday, July 6, 2010
Performance Testing - TCP Connection Failures
I came across this article on WebPerformanceInc, which explains how TCP connections are established and the different reasons for connection failures... I found it interesting.
Load Tester is a web site load testing tool, and as such we deal primarily with the most popular Internet communications protocol: the Hypertext Transfer Protocol, or HTTP, which controls the request and transmission of web pages between browser clients and web servers. HTTP is based on a lower-level protocol known as the Transmission Control Protocol, or TCP. For the most part, TCP works in the background, but its proper function is critical to your website, and problems at the TCP level can show up in many different ways during a load test. These errors can sometimes be difficult to troubleshoot, requiring a packet sniffer such as Wireshark or tcpdump to analyze, while others are simpler.
TCP uses the concept of “ports” to identify and organize connections. For every TCP connection, there are two ports – the source port, and the destination port. For our purposes, the most important ports are port 80 and port 443, which are the two most common ports utilized by web servers – 80 for normal HTTP traffic, and 443 for SSL-encrypted traffic. A typical TCP connection from a client to a webserver will involve a random source port such as 44567, and a destination port on the server of port 80. Each web server can accept many hundreds of connections on port 80, but each connection must come from a different source port on each client.
To create these connections between ports, TCP relies on a three-way handshake. The requesting client first sends a packet with the TCP SYN flag set, indicating that it wants to open a connection. If the server has a process listening on the destination port, it will respond with a packet that has both the SYN flag set and the ACK flag set, which acknowledges the client’s SYN and indicates that a connection can be created on that port. The client then sends a packet with the ACK flag set back to the server, and the connection is established. The current connections can be viewed using the netstat tool on both Windows and Linux.
What does it look like when a TCP connection attempt fails? The TCP packet with the SYN flag is sent from the client, which in our case is a load engine. If the server sees such a packet, but does not have a process listening on the target port, it will typically respond with a TCP packet that has the ACK and RST flags set – a TCP reset. This tells the client that connections are not available on this port.
Load Tester showing a connection refused (ACK RST)
This screenshot shows the result of a load engine failing to connect to the server. In this case, you can see that I attempted to connect to TCP port 442, which doesn’t have a web server running on it (or any other service, for that matter). Note that the response was received quickly, in about 1 second, indicating that the remote server saw the ill-fated packet and responded. The most important thing to know about this error is that it is one of the most reliable errors that you’ll see – either the Load Tester controller or the load engine really is having trouble connecting to the site. The most common reason for this is that either the site is down, or there is a firewall that is blocking the load engine but not the controller.
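The failure modes discussed in this post (an immediate reset versus a silently dropped SYN) can be reproduced with a few lines of Python; the host and port below are just examples, not values from the article:

```python
import socket

def try_connect(host, port, timeout=2.0):
    """Attempt a TCP connection and classify the outcome."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "connected"
    except ConnectionRefusedError:
        # Server answered with ACK+RST: the host is up,
        # but nothing is listening on that port.
        return "refused"
    except socket.timeout:
        # No answer at all: the SYN was dropped by a firewall,
        # or the host is unreachable.
        return "timeout"
    finally:
        s.close()

# Port 1 is almost never in use, so this normally returns "refused"
# almost immediately, mirroring the quick ~1-second error above.
print(try_connect("127.0.0.1", 1))
```

The quick "refused" answer corresponds to the reliable error described here, while "timeout" corresponds to the much slower failure covered next.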
So … what happens when the remote server does not respond?
Load Tester showing a connection timeout (dropped packet)
This screenshot shows the same attempted connection, only this time, no response was received from the target server – not even the TCP reset that indicates connections are not available on the target port. Note how long it takes for Load Tester to report an error – 21 seconds, in this case. I induced this error by configuring the Linux iptables firewall to drop all incoming packets on TCP port 442, so the server’s TCP stack never saw the incoming SYN packet and thus did not respond to it – from the server’s perspective, the packet never arrived. A similar error will occur if the server cannot be reached for some reason; for example if you attempt to connect to the wrong hostname, the server is offline, or your traffic is being misrouted between the client or load engine and the server. If you see these kinds of errors, then the first thing you should do is make sure that the server is up, and that any HTTP proxy servers necessary to reach the server are configured correctly.
Of course, TCP connections can also fail after a connection has been established. Here’s an example:
Load Tester showing a server connection termination
This error message is much less clear. Did the server close the connection on purpose? If so, why? If not, what happened? Did the process handling the server connection crash or return bad data? In this case, it’s useful to know what Load Tester considers to be a successful connection. Load Tester expects there to be HTTP headers, followed by data. In this case, we did not finish receiving the HTTP headers, and so Load Tester considers the connection incomplete. Load Tester failed to receive the headers in this case because I induced this error by attempting to elicit an HTTP response from the Secure Shell (ssh) service listening on TCP port 22, which terminated the connection after receiving what it saw as invalid data – Load Tester’s HTTP request.
In a real test, there is a pretty large number of things that can cause this error, from server process crashes or errors, to overly aggressive firewalls, to reverse proxy failures, to misdirected traffic on a load balancer. In such a case, a traffic analyzer such as Wireshark or tcpdump can be very helpful in determining what is happening. Note that you may need to observe traffic in more locations than just in front of the load engine or the controller, though, as traffic can be altered by firewalls and load balancers.
----
Velocity - Web Performance Conference 2010
O'Reilly's Velocity Conference, exclusively for website performance and testing.
Metrics 101
View more presentations from Alistair Croll.
There are around 20 videos relating to this conference on YouTube.
----
Sunday, July 4, 2010
Evolution of computing
A 30-minute video that explains the evolution of computing from its primitive stage, when there were no CRT monitors and users had to use paper as the input and output medium... interesting to watch.
Performance Testing - Why site could be slow, even with low CPU/RAM/disk utilization.
Sometimes sites appear to slow down significantly, despite the fact that CPU, RAM, and disk utilization did not rise significantly. While those three metrics are often good indicators of why systems can "slow down", there are many other causes of performance problems. Today, we're going to discuss one common root cause for slow websites that often gets overlooked: connection management.
Until very recently, most web browsers would only issue a maximum of two connections per host, as per the recommendation by the original HTTP/1.1 specification. This meant that if 1000 users all hit your home page at the same time, you could expect ~2000 open connections to your server. Let’s suppose that each connection consumes, on average, 0.01% of the server’s CPU and no significant RAM or disk activity.
That would mean that 2000 connections should be consuming 20% of the CPU, leaving a full 80% ready to handle additional load – or that the server should be able to handle another 4X load (4000 more users). However, this type of analysis fails to account for many other variables, most importantly the web server’s connection management settings.
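The arithmetic in the two paragraphs above can be written out directly (the 0.01%-of-CPU-per-connection figure is the article's own assumption):

```python
users = 1000
conns_per_user = 2        # classic HTTP/1.1 browsers open 2 per host
cpu_per_conn = 0.0001     # 0.01% of the CPU per connection (assumed above)

connections = users * conns_per_user       # ~2000 open connections
cpu_used = connections * cpu_per_conn      # 0.20 -> 20% of the CPU
headroom = (1 - cpu_used) / cpu_used       # 4.0 -> "another 4X load"
print(connections, cpu_used, headroom)
```

As the article goes on to explain, this back-of-envelope math breaks down precisely because it ignores the server's connection-management limits.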
Just about every web server available today (Apache, IIS, nginx, lighttpd, etc.) has one or more settings that control how connections are handled. This includes connection pooling, maximum allowed connections, Keep-Alive timeout values, etc. They all work basically the same way:
- When a request (connection) comes in to the server, the server will look at the maximum active connections setting (ie: MaxClients in Apache) and decide if it can handle the request.
- If it can, the request is processed and the number of active connections is incremented by one.
- If it can't, the request is placed into a queue, where it will wait in line until it finally can be processed.
- If that queue is too long (also a configuration setting in the server), the request will be rejected outright, usually with a 503 response code.
The simple solution is to raise the concurrent request limit. However, be careful here: if you raise it too high, it's possible your server won't have enough CPU or RAM to handle all the requests, resulting in all users being affected (rather than just some of them, as in the last example).
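A toy model of that admit/queue/reject decision; the limits here are made-up defaults, standing in for real settings like Apache's MaxClients:

```python
def admit(active, queued, max_active=150, max_queue=100):
    """Decide the fate of an incoming request, as web servers do."""
    if active < max_active:
        return "process"   # a worker slot is free: handle it now
    if queued < max_queue:
        return "queue"     # all workers busy: wait in line
    return "reject"        # queue full too: 503 Service Unavailable

print(admit(10, 0))      # process
print(admit(150, 5))     # queue
print(admit(150, 100))   # reject
```

Requests sitting in the "queue" state are exactly the ones that make a site feel slow while CPU, RAM, and disk all look idle.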
Also remember that not all requests are equal: a request to a dynamic search result will be much more expensive than one to a static CSS file. This is why larger sites optimize their hosting to place static files on special web servers with different configurations, usually with host names like images.example.com, while leaving their more complex content to be handled by a larger quantity of servers with a fewer number of concurrent requests on each server.
So next time you’re wondering why your site is slow, take a look at more than just CPU and RAM. Find out how the server is processing the content and see if perhaps your web server is the bottleneck.
Source: browsermob
---
Performance Testing - TTFB, TTLB (Time to First Byte, Time to Last Byte)
When you open any web page, there is a series of requests from the client (usually a web browser) and responses from the server.
TTFB (Time to First Byte) - Amount of time it took for the client (usually a web browser) to receive the first byte of server response.
TTLB (Time to Last Byte) - Amount of time it took for the client (usually a web browser) to receive the last byte of the server response, i.e., the total time taken to download an object.
Both metrics are used in performance testing for analyzing the bottleneck.
I will explain the metrics by taking an example.
The two requests appear to be hosted on different servers.
Both requests took almost 1 second to complete, but "pic.png" has a file size four times greater than "login.jsp". What is going on? In order to understand the complete story, we need to know the TTFB.
With this additional information, we can understand what is happening.
In the case of "login.jsp", after receiving the first byte it took only 9 ms to download the remaining content.
Whereas "pic.png" received its first byte very quickly but took another 900 ms to download the remaining content.
The "login.jsp" bottleneck is likely server-side processing, due to heavy CPU usage. This is common for dynamic pages, which must be processed before the response is sent to the client. If that processing involves a database or other expensive operations, it could be the cause of the slow performance.
The situation is different for "pic.png": the delay is likely due to a slow network or a poor configuration of the server hosting the image. Since the image is not dynamic content, it will not consume much CPU.
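The split between TTFB and the remaining download time falls out of three timestamps. A small sketch; the numbers below only mirror the illustrative login.jsp/pic.png figures, they are not measured values:

```python
def timing_breakdown(t_sent, t_first_byte, t_last_byte):
    """Return (ttfb, download) in seconds, given when the request was
    sent and when the first/last response bytes arrived."""
    ttfb = t_first_byte - t_sent           # server think time + 1 trip
    download = t_last_byte - t_first_byte  # time to stream the body
    return ttfb, download

# login.jsp-like: ~1 s total, almost all of it spent before the first byte
print(timing_breakdown(0.0, 0.991, 1.0))  # long TTFB -> suspect the app/DB

# pic.png-like: first byte fast, then ~900 ms of transfer
print(timing_breakdown(0.0, 0.1, 1.0))    # long download -> suspect network
```

Which of the two components dominates is what points you toward a software fix versus a network/hardware fix, as the next paragraphs discuss.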
So how do you resolve these different situations?
In the case of objects with long TTFB times, like login.jsp, the solution often requires a software-level optimization. It could involve adding a database index, introducing some object-level caching, or a configuration change (such as database connection pooling). Be careful not to fall into the trap of throwing more hardware at the problem to solve these types of issues. While it might work in the short term, these issues are almost always due to sub-optimal software, and throwing extra hardware at the problem is like putting a band-aid on a bullet hole.
In the case of objects with relatively short TTFB times but overall long TTLB times, the solution is usually very different. While there may be a software solution, such as configuring Apache’s connections to be better optimized for the server it runs on, most of the time the root cause is due to network/hardware-related issues. Check with the ISP that hosts the server to confirm the max bandwidth throughput allowed. If the object response is slow during peak times but fast during off-peak times, it may need extra web servers (ie: hardware).
Alternatively, you might want to look at a Content Delivery Network (CDN) like CDNetworks to help host the objects in a physically closer location. For a low-cost CDN, check out Amazon's CloudFront service, which can let you host images and other static objects in nine separate locations around the world. This is a great, low-cost solution for people who want to serve static content to many different geographies but don't have the budget or desire to open multiple data centers.
Source: browsermob
----
Tuesday, June 22, 2010
Web Application Security Test
Definition: Application security is the use of software, hardware, and procedural methods to protect applications from external threats. Security measures built into applications and a sound application security routine minimize the likelihood that hackers will be able to manipulate applications and access, steal, modify, or delete sensitive data. Once an afterthought in software design, security is becoming an integral part of the design process.
The following tests check different aspects of application security.
Data injection and manipulation attacks
1. Reflected cross-site scripting (XSS).
2. Persistent XSS.
3. Cross-site request forgery (CSRF).
4. SQL injection.
5. Blind SQL injection.
6. Buffer overflows.
7. Integer overflows.
8. Log injection.
9. Remote file include (RFI) injection.
10. Server-side include (SSI) injection.
11. Operating system command injection.
12. Local file include (LFI).
13. Parameter redirection.
14. Auditing of redirect chains.
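To illustrate the SQL injection item above, a minimal sketch with Python's sqlite3 module shows why parameterized queries matter. The users table, credentials, and payload are made up for the demonstration.

```python
import sqlite3

# Hypothetical users table, populated just for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# VULNERABLE: string concatenation lets the input rewrite the WHERE clause.
unsafe_sql = "SELECT name FROM users WHERE password = '" + attacker_input + "'"
leaked = conn.execute(unsafe_sql).fetchall()
print(leaked)  # the row comes back even though the password is wrong

# SAFE: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE password = ?", (attacker_input,)
).fetchall()
print(safe)  # no rows match
```

A security test for this item sends payloads like the one above into every input field and flags any response that behaves like the "leaked" case.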
Sessions and authentication
1. Session strength.
2. Authentication attacks.
3. Insufficient authentication.
4. Insufficient session expiration.
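For the "session strength" item, the core question is whether session tokens are guessable. A sketch of the difference, using Python's secrets module (the sequential scheme is a deliberately weak, made-up example):

```python
import secrets

# WEAK (illustrative): sequential IDs are guessable; an attacker who
# observes session 1003 can simply try 1004.
weak_tokens = [str(1000 + i) for i in range(5)]

# STRONGER: 32 random bytes (~256 bits of entropy) from a CSPRNG,
# encoded URL-safe for use in a cookie.
strong_tokens = [secrets.token_urlsafe(32) for _ in range(5)]

print(weak_tokens)
print(strong_tokens[0])
```

A session-strength audit typically collects many live tokens and checks them for length, character set, and statistical randomness rather than eyeballing them like this.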
Server and general HTTP
1. AJAX auditing.
2. Flash analysis.
3. HTTP header auditing.
4. Detection of client-side technologies.
5. Secure Sockets Layer (SSL) certificate issues.
6. SSL protocols supported.
7. SSL ciphers supported.
8. Server misconfiguration.
9. Directory indexing and enumeration.
10. Denial of service.
11. HTTP response splitting.
12. Windows 8.3 file names.
13. DOS device handle DoS.
14. Canonicalization attacks.
15. URL redirection attacks.
16. Password autocomplete.
17. Custom fuzzing.
18. Path manipulation - traversal.
19. Path truncation.
20. WebDAV auditing.
21. Web services auditing.
22. File enumeration.
23. Information disclosure.
24. Directory and path traversal.
25. Spam gateway detection.
26. Brute-force authentication attacks.
27. Known application and platform vulnerabilities.
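The "HTTP header auditing" item above can be sketched as a simple check for common security headers. The header set below is one common baseline, not an exhaustive or authoritative list, and the sample response is hypothetical.

```python
# A common baseline of response security headers (an illustrative set).
EXPECTED = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Content-Security-Policy",
}

def audit_headers(response_headers):
    """Return the expected security headers missing from a response,
    comparing header names case-insensitively."""
    present = {h.title() for h in response_headers}
    return sorted(h for h in EXPECTED if h not in present)

# Hypothetical response headers captured from a proxy or test tool:
missing = audit_headers({"Content-Type": "text/html", "X-Frame-Options": "DENY"})
print(missing)
```

A real scanner would pull the headers from live responses and also check header *values* (e.g. a weak Content-Security-Policy), not just presence.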
Source: HP WebInspect.
One of the best sites for understanding different threats: select this link.
A list of tools available in the market:
OWASP Security Testing Tools Listing
HP WebInspect
IBM Rational AppScan
Powerfuzzer
SecPoint Penetrator
Netsparker
ZeroDayScan
Fortify 360
OWASP Security Testing Tools
Retina Web Security Scanner
Hailstorm
GamaSec
Wikto
Nikto Scanner
Acunetix Web Vulnerability Scanner
Defensics Core Internet Test Suite
Perimeter Check
Core Impact Pro
C5 Compliance Platform
Snort
SecurityMetrics Appliance
Nessus
Security Center
SARA
Qualys Free Security Scans
GFI LANguard
Qualys Guard
PatchLink Scan
Secure-Me
SAINT
Nmap (Network Mapper)
NetIQ Security Analyzer
Foundstone
CERIAS Security Archive
StopBadware Vulnerability Scanner list
----
Friday, June 18, 2010
Performance Testing Configuration or Setup
Every organization has a different configuration setup for conducting load tests, based on the tool selected, hardware requirements, number of virtual users required, and so on.
I have classified the configurations into seven types, explained below.
Assumptions:
(1) In-house means you host the server on your own premises or through external dedicated servers, where the physical hardware is under your control (a datacenter).
(2) Real or remote users means actual users accessing the application through the internet (and its firewall) after it is deployed to production.
(3) Configuration or setup means conducting the tests based on the design and publishing the results.
(4) A standard server setup has a load balancer, web server(s), an application server, a DB server, and a firewall.
(5) Cloud means cloud computing.
I recommend reading the following links before the rest of this presentation.
Performance Testing - On LAN and over the Internet (WAN).
What is cloud load testing?
The basic point I want to make: if the path the IP packets take during testing is not the same as in production, users will experience different response times.
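A back-of-the-envelope model of that point (all numbers are illustrative assumptions): the same page can test fast on the LAN and feel slow in production simply because each round trip costs more over the WAN.

```python
def page_load_ms(round_trips, rtt_ms, server_ms):
    """Crude model: total time ~ server processing + one round-trip time per
    request. It ignores parallel connections, TCP slow start, and bandwidth."""
    return server_ms + round_trips * rtt_ms

# Illustrative numbers: 20 request/response round trips, 200 ms of server work.
lan = page_load_ms(round_trips=20, rtt_ms=1, server_ms=200)    # tester on the LAN
wan = page_load_ms(round_trips=20, rtt_ms=100, server_ms=200)  # real user over the internet
print(lan, wan)
```

Even this toy model shows a tenfold gap between the two paths, which is why the configurations below differ in where the load is generated.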
Configuration - A
We have the load test and server setup on the premises.
When real (or remote) users start accessing the site, application performance may not be as expected, because we have not tested the firewall, bandwidth, and IP packet effects.
For more information, read above mentioned links.
Configuration -B
We have the load test and server setup on the premises.
This resembles a very realistic scenario because WAN emulation is used. But we cannot guarantee 100% of the expected response time when real or remote users start accessing the application, because we have not tested the firewall and the internet connection.
Configuration -C
We have the load test setup on the cloud and the server setup on the premises.
This closely resembles the real scenario; we can be confident that remote users will experience the expected response times, because we have tested the entire infrastructure.
Note: accessing server counters can be an issue; ports need to be opened in the firewall.
Configuration -D
We have the load generators on the cloud, with the controller and server setup on the premises.
This also closely resembles the real scenario; we can be confident that remote users will experience the expected response times, because we have tested the entire infrastructure.
Configuration -E
We have the load test setup on the premises and the server hosted on the cloud.
When the application is hosted on the cloud, it is not a best practice to run the load test from your premises: sending a huge number of IP packets through the firewall is costly, and it is difficult to capture server counters from the cloud.
Configuration -F
We have the controller on the premises, with the load generators and server hosted on the cloud.
You may have issues collecting the server counter data.
Configuration -G
We have the load test setup and the server both hosted on the cloud.
When the application is hosted on the cloud, this is the best-practice way to perform a load test.
----
Monday, June 14, 2010
HP LoadRunner in the Cloud – Beta
HP announced HP LoadRunner in the Cloud, a new application performance testing offering designed to help IT organizations easily and affordably optimize their website performance for changing business demands.
HP LoadRunner, the industry’s best-selling load testing software, is now available via Amazon Elastic Compute Cloud (Amazon EC2), making performance testing accessible to businesses of all sizes. This on-demand software gives clients a flexible “pay as you go” approach for performance testing of mission-critical applications and websites.
“The rise of cloud computing has brought the promise of infinite scalability for applications, but it has also brought a new set of challenges for developers and performance testers,” said Theresa Lanowitz, founder of analyst firm voke inc. “With HP’s LoadRunner in the Cloud, businesses can test, tune, analyze and optimize applications for the cloud, enabling clients to take advantage of cloud economics with flexible, pay-as-you-go pricing.”
For more details, select this link: HP Performance Testing to the Cloud
LoadRunner Cloud Beta
You need to send a request to HP; beta participation is based on approval.
---
Sunday, June 13, 2010
Cloud load testing
Cloud computing is Internet-based computing, whereby shared resources, software and information, are provided to computers and other devices on-demand, like the electricity grid.
In other words, it implements the virtualization concept at massive scale.
Advantages
1. You can easily access a cloud server from a personal computer and install whatever software you like.
2. Scalability - Increase or decrease the hardware based on requirements: one, two, three, or N servers, available on demand.
3. Instant - You can host a website immediately.
4. Save money - Pay only for what you use.
To understand cloud computing, this is a cool video worth watching.
Some top cloud computing companies to watch.
1. Amazon Elastic Compute Cloud (Amazon EC2)
2. AT&T
3. Enomaly's Elastic Computing Platform (ECP)
4. Google
5. GoGrid
6. Microsoft
7. NetSuite
8. rackspace
9. Right Scale
10. salesforce
11. OpSource
For the past 16 years, Mercury Interactive dominated the enterprise testing market with LoadRunner and QTP. Back in 1994, IT architecture was driven by the client-server model.
Now we are in the age of cloud computing, with new-generation architectures and technologies evolving faster than we imagined. In the testing space, after 16 years of domination, Mercury appears ready to relinquish its leadership position to a new breed of testing vendors.
What is cloud load testing?
There are companies that can simulate load for any number of users from any part of the globe using cloud testing services.
(1) No need to buy your own internal resources (hardware, internet connections, routers...).
(2) Realistic scenarios: load is generated from different parts of the globe, so the entire infrastructure gets tested (gateways, firewalls, routers, servers...).
(3) There is practically no limit to the number of users; the ceiling depends on the vendor license agreement.
(4) More savings: pay for what you use, only when it is really required.
Some of the cloud load testing service sites:
Load testing from the cloud, a video by the WebPerformance tool.
Gomez
Platform Lab
Keynote
BrowserMob
Load Impact
LoadStorm
HP - Beta
Sauce Labs
PushToTest
Performance Testing - On LAN and over the Internet (WAN).
---
Saturday, June 12, 2010
Thursday, June 3, 2010
How is a single Internet connection shared with multiple PCs?
Have you ever wondered how a single home or office broadband internet line is connected to multiple computers? Corporate offices often have more than one internet connection, which acts as a backup if any ISP goes down; this is called multi-homing.
It is through NAT (Network Address Translation) that we are able to connect multiple PCs to a single internet connection. NAT is implemented at ISPs, corporate offices, and home networks by using routers or Wi-Fi devices.
To understand NAT in a simple way: NAT is like the receptionist in a large office, managing and connecting extensions for the phone calls coming through the board number (the office telephone). Let's say you have left instructions with the receptionist not to forward any calls to you unless you request it. Later on, you call a potential client and leave a message for them to call you back. You tell the receptionist that you are expecting a call from this client and to put them through.
The internet has grown larger than anyone imagined; per a recent estimate there are 100 million hosts and 350 million active users on the internet.
So what does the size of the internet have to do with NAT?
An IP address (IP stands for Internet Protocol) is a unique 32-bit number that identifies the location of your computer on a network. Basically it works just like your street address: a way to find out exactly where you are and deliver information to you. Theoretically IPv4 can have 4,294,967,296 unique addresses (2 ^ 32). The actual number of available addresses is smaller (somewhere between 3.2 and 3.3 billion) because of the way that the addresses are separated into Classes and the need to set aside some of the addresses for multicasting, testing or other specific uses.
With the explosion of the Internet and the increase in home and business networks, the number of available IP addresses is simply not enough. The obvious solution is to redesign the address format to allow for more possible addresses. This is being developed as IPv6, which supports 2^128 unique addresses, but it will take several years to implement because it requires modification of the entire infrastructure of the Internet.
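The 32-bit arithmetic above is easy to verify with a quick sketch; the sample address is just an illustration.

```python
def ip_to_int(dotted):
    """Pack a dotted-quad IPv4 address into its 32-bit integer form,
    one octet per byte, most significant octet first."""
    a, b, c, d = (int(octet) for octet in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(2 ** 32)                   # the theoretical IPv4 address count: 4294967296
print(ip_to_int("192.168.1.1"))  # a common private address as a 32-bit integer
```

This is exactly the number that appears in the "Sender IP" and "Destination IP" fields of each packet; the dotted notation is purely for human convenience.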
Advantages of NAT
1. Reduces the need for public addresses.
2. Extends the longevity of IPv4 by optimizing the use of the current pool of IP addresses.
3. Adds security by making an entire network appear as a single client.
To understand public and private IP addresses, select this.
In internet terminology, all communications are performed using data packets. Each packet consists of a destination IP, a sender IP, control information, and data.
Because your computer is assigned a private IP, others can't reply to your request by taking the "Sender IP" from the data packet you sent.
The NAT router translates traffic coming into and leaving the private network by storing mappings in its translation table. It alters the "Sender IP" address inside an outbound data packet, remembers that mapping, and rewrites the "Destination IP" of the matching inbound packet back to what it changed earlier.
IP masquerading is also called network address and port translation (NAPT) or port address translation (PAT).
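The translation-table mechanism described above can be sketched as a toy NAPT router. All addresses and ports here are made up for illustration (203.0.113.7 is from the documentation range); real NAT also handles timeouts, protocols, and port reuse.

```python
class NatRouter:
    """Toy NAPT table: maps (private IP, private port) <-> public port."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}        # public port -> (private IP, private port)
        self.next_port = 40000 # illustrative starting port

    def outbound(self, src_ip, src_port):
        """Rewrite the 'Sender IP' on a packet leaving the private network
        and remember the mapping so the reply can be routed back."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (src_ip, src_port)
        return self.public_ip, public_port

    def inbound(self, dst_port):
        """Rewrite the 'Destination IP' on a reply using the stored mapping."""
        return self.table[dst_port]

nat = NatRouter("203.0.113.7")
print(nat.outbound("192.168.1.10", 51515))  # packet now appears to come from the router
print(nat.inbound(40000))                   # the reply is routed back to the private host
```

This is why many PCs can share one public address: each private (IP, port) pair gets its own public port, and the router undoes the rewrite on the way back in.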
NAT - Flash Animation Demo. (Select GREEN and RED lights at the bottom of the video on both sides).
Understand how data packets are sent through different hosts to reach the destination server.
Probably your next question would be "Bharath why are you explaining network related stuff in your blog?".
It is better for a performance test engineer to have knowledge of networks and protocols, so that they can troubleshoot and create better test scenarios. It is difficult to test well if you don't understand the underlying architecture.
----