Delivery Manager, Arctern
Abhinav has 15 years of experience in Software Testing, Test Automation, and Product Release Management.
Based on the requirements of the performance test, users are distributed across ramp-up, peak load, and ramp-down phases. For example, a 100-user test might start with 10 users, add more users every minute (known as ramp-up) until the full peak load of 100 users is reached, hold that peak load for a specified duration, and then remove users (ramp-down).
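The ramp-up plan described above can be sketched as a simple schedule generator. This is a minimal illustration, not the output of any particular load tool; the function name and parameters (10 users added every 60 seconds up to a peak of 100) are assumptions taken from the example:

```python
def ramp_up_schedule(start_users, peak_users, step, interval_sec):
    """Return (elapsed_seconds, active_users) pairs for the ramp-up phase."""
    schedule = []
    users, elapsed = start_users, 0
    while users < peak_users:
        schedule.append((elapsed, users))
        users += step
        elapsed += interval_sec
    schedule.append((elapsed, peak_users))  # peak load reached
    return schedule

# Example from the text: start at 10 users, add 10 every minute, peak at 100.
for t, n in ramp_up_schedule(10, 100, 10, 60):
    print(f"t={t:4d}s  active users={n}")
```

With these numbers the full peak of 100 users is reached 9 minutes into the test, after which the peak-load duration and ramp-down would follow.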
As a rule of thumb, the following performance objectives are typically captured from the Application Under Test:
Application Response Time: This is the most fundamental parameter, and measuring it should be second nature to a performance tester. Application response time is the amount of time taken to respond to a request. You can measure response time at the server or at the client, as follows:
a. Response Time at the server. This is the time the server takes to complete the execution of a request. It does not include client-to-server latency, that is, the additional time for the request and response to cross the network.
b. Response Time at the client. The latency measured at the client includes the request queue time, the time taken by the server to complete the execution of the request, and the network latency. You can measure this latency in various ways. Two common approaches are the time taken by the first byte to reach the client (time to first byte, TTFB) and the time taken by the last byte of the response to reach the client (time to last byte, TTLB). Generally, you should test this using various network bandwidths between the client and the server.
By measuring latency, you can gauge whether your application takes too long to respond to client requests.
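A minimal sketch of measuring TTFB and TTLB at the client, using only the Python standard library. This ignores DNS-lookup and connection-setup time and starts timing from the moment the request is sent, so real tools will report slightly different numbers; the host, path, and port are placeholders:

```python
import http.client
import time

def measure_latency(host, path="/", port=80):
    """Return (TTFB, TTLB) in seconds for a single HTTP GET, measured
    at the client from the moment the request is sent."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    start = time.perf_counter()
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read(1)                 # first byte received -> time to first byte
    ttfb = time.perf_counter() - start
    resp.read()                  # drain remaining bytes -> time to last byte
    ttlb = time.perf_counter() - start
    conn.close()
    return ttfb, ttlb
```

Running this repeatedly over different network conditions gives the client-side latency distribution discussed above; TTFB is always less than or equal to TTLB.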
Transactions Per Second: Transactions per second reflect the application throughput. Throughput varies widely with the type of load applied; factors include the kind of transaction (for example, credit card transactions), the number of concurrent users, the configured think time, and so on. Also critical are the configuration of the server hosting the application and the network connection. For example, suppose there are 1,000 users, each requesting an average of 5 KB of page data every 5 minutes. The throughput would be 1,000 x (5 x 1024 x 8) / (5 x 60), roughly 136,533 bits per second (about 133 Kbps).
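The arithmetic behind that figure can be made explicit. This is just the worked example from the text, with each factor labelled:

```python
# Worked throughput example: 1,000 users, each requesting an
# average 5 KB page once every 5 minutes.
users = 1000
page_bits = 5 * 1024 * 8   # 5 KB expressed in bits
period_sec = 5 * 60        # one request per user every 5 minutes

throughput_bps = users * page_bits / period_sec
print(f"{throughput_bps:,.0f} bits/sec (~{throughput_bps / 1024:.0f} Kbps)")
```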
System Resource Utilization: Typically, the following system parameters are measured in performance testing: processor (CPU) utilization, memory, disk I/O, and network I/O.
You can identify the resource cost on a per-operation basis. Operations might include logging in to a job portal, searching for jobs, or applying for one or more jobs. You can measure resource costs for a given user load, or you can average resource costs when the application is tested using a given workload profile.
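As a rough sketch of the per-operation cost idea, the numbers below are entirely hypothetical monitoring results for the job-portal operations mentioned above; the point is that once you have a per-operation cost, you can project resource usage for any operation rate:

```python
# Hypothetical per-operation resource costs gathered from monitoring:
# CPU seconds and MB of working memory consumed per operation.
samples = {
    "login":       {"cpu_sec": 0.020, "mem_mb": 1.5},
    "search_jobs": {"cpu_sec": 0.085, "mem_mb": 4.0},
    "apply_job":   {"cpu_sec": 0.050, "mem_mb": 2.5},
}

def cost_for_load(operation, ops_per_sec):
    """Projected CPU demand (in cores) for a given operation rate,
    using the per-operation cost model above."""
    return samples[operation]["cpu_sec"] * ops_per_sec

# e.g. 30 job searches per second would demand about 2.55 cores of CPU.
print(cost_for_load("search_jobs", 30))
```

An operation whose projected cost dominates the total at realistic rates is exactly the kind of hot spot the next paragraph's workload profile helps you find.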
A workload profile consists of an aggregate mix of users performing various operations. For example, for a load of 100 concurrent users (as defined below), the profile might indicate that 20 percent of users are logging on, 30 percent are applying for a job, and 50 percent (recruiters) are reviewing applications. This helps you identify and optimize areas that consume an unusually large proportion of server resources and response time.
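The workload mix above can be sketched as a simple allocation of users to operations; the operation names mirror the example and are otherwise arbitrary:

```python
# Workload profile from the example: 100 concurrent users split
# 20% logging on, 30% applying for a job, 50% reviewing applications.
profile = {"login": 0.20, "apply_job": 0.30, "review_applications": 0.50}

def users_per_operation(total_users, profile):
    """Split a total user count across operations by profile share."""
    assert abs(sum(profile.values()) - 1.0) < 1e-9, "mix must sum to 100%"
    return {op: round(total_users * share) for op, share in profile.items()}

print(users_per_operation(100, profile))
# -> {'login': 20, 'apply_job': 30, 'review_applications': 50}
```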
Simultaneous users have active connections to the same Web site, whereas concurrent users hit the site at exactly the same moment. Concurrent access is likely to occur at infrequent intervals. Your site may have 100 to 150 concurrent users but 1,000 to 1,500 simultaneous users.
When load testing your application, you can simulate simultaneous users by including a random think time in your script, so that not all the user threads from the load generator fire requests at the same moment. This is useful for simulating real-world situations.
However, if you want to stress your application, you probably want to use concurrent users. You can simulate concurrent users by removing the think time from your script.
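The two modes above differ only in whether a think time is inserted between requests. A minimal sketch of a virtual-user loop, where `send_request` is a placeholder for whatever request your script issues:

```python
import random
import time

def run_virtual_user(send_request, iterations, think_time_range=(1.0, 5.0)):
    """One virtual-user loop. With a random think time between requests,
    users behave like 'simultaneous' users; pass think_time_range=None to
    remove think time and hammer the server concurrently (stress mode)."""
    for _ in range(iterations):
        send_request()
        if think_time_range is not None:
            time.sleep(random.uniform(*think_time_range))
```

In a real test each virtual user would run on its own thread or process in the load generator; the think-time range (here 1 to 5 seconds) should be chosen to match observed real-user behavior.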