Sample analysis of LoadRunner performance test



Test Results Analysis

Analyzing LoadRunner performance test results is a complex process. It usually covers the results summary, the number of concurrent users, average transaction response time, hits per second, business success rate, system resources, the Web page breakdown graph, Web server resources, database server resources, and so on, as Figure 1-1 shows. An important principle of results analysis is to work from the performance test's requirement indicators. Recalling the purpose of this performance test, the requirement was to verify that 2,000 users can log in to the system, perform the attendance business, and log out within 30 minutes, with a page response time of no more than 3 seconds during business operations, server CPU utilization of no more than 75%, and memory usage of no more than 70%. Following the flow shown in the figure, we now analyze whether the test reached the expected performance indicators, what performance risks exist, and how to resolve them.

Figure 1-1 Performance Test Results analysis flowchart

Results Summary

After LoadRunner collects the results of a scenario run, it first displays a results summary, as shown in Figure 1-2. The summary lists the scenario execution status, the Statistics Summary, the Transaction Summary, and the HTTP Responses Summary, giving a brief overview of this test's results.

Figure 1-2 Performance Test Results summary diagram

Scenario Execution Status

This section gives the name of the test scenario, the path where the results are stored, and the duration of the scenario, as shown in Figure 1-3. From this figure we know that the test started at 15:58:40 and ended at 16:29:42, a total duration of 31 minutes and 2 seconds, which is basically consistent with the schedule we designed in the scenario execution plan.

Figure 1-3 Description of scenario execution

Statistics Summary

This section gives statistics for the scenario: the number of concurrent users, total throughput, average throughput per second, total hits, and average hits per second, as shown in Figure 1-4. From this figure we learn that the maximum number of concurrent users in this test run was 7, the total throughput was 842,037,409 bytes, the average throughput was 451,979 bytes per second, the total number of hits was 211,974, and the average was 113.781 hits per second. For throughput, the more bytes the server delivers per unit of time, the better its processing capacity. The hit count only represents the number of requests the client made to the server; in general it is proportional to throughput.

Figure 1-4 Summary Chart of statistics
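As a quick sanity check, the per-second averages in the summary are just the totals divided by the elapsed scenario time. The sketch below (Python, purely illustrative) reproduces the two averages from the totals in Figure 1-4; the effective duration of about 1,863 seconds is an assumption inferred from the reported figures.

```python
# Reproducing the summary's per-second averages from its totals
# (figures from Figure 1-4; the ~1,863 s duration is inferred, not reported).
total_throughput_bytes = 842_037_409
total_hits = 211_974
duration_seconds = 1_863            # roughly the 31-minute scenario run

avg_throughput = total_throughput_bytes / duration_seconds
avg_hits = total_hits / duration_seconds

print(round(avg_throughput))        # 451979 bytes/second
print(round(avg_hits, 3))           # 113.781 hits/second
```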

Transaction Summary

This section gives the average response time, pass rate, and so on for each action after the scenario has executed, as shown in Figure 1-5. From this graph we obtain the average response time of each action and the business success rate.

Attention:

Because each action is set to be executed as a transaction ("Define each action as a transaction" under the "Miscellaneous" option of the scenario's Run-time Settings), the transactions here are actually the actions in the script.

Figure 1-5 Transaction Summary diagram

HTTP Responses Summary

This section shows the status of every HTTP request sent during scenario execution, whether success or failure, as shown in Figure 1-6. As you can see, during this test LoadRunner issued 211,974 requests (consistent with the total hits in the Statistics Summary), of which "HTTP 200" occurred 209,811 times and "HTTP 404" occurred 2,163 times. During this process most requests received a correct response, but some failed; this did not affect the test result. "HTTP 200" indicates that the request was answered correctly, while "HTTP 404" indicates that a file or directory could not be found. Some readers may ask: since there are 404 errors here, why did the result still pass? The reason is that the content requested by some pages in the script is not critical; for example, the script may request previously stored cookie information and, if it is absent, simply re-acquire it, without affecting the final test result.

Figure 1-6 HTTP Response Summary
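The response counts above can be cross-checked against the total. A minimal tally (Python, illustrative only; counts taken from Figure 1-6) confirms that the two status-code counts sum to the total hit count and shows what fraction of requests succeeded.

```python
# Status-code counts from Figure 1-6.
responses = {"HTTP 200": 209_811, "HTTP 404": 2_163}

total = sum(responses.values())
success_rate = responses["HTTP 200"] / total

print(total)                  # 211974 -- matches the Statistics Summary
print(f"{success_rate:.2%}")  # 98.98% of requests answered with HTTP 200
```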

The usual HTTP status codes are as follows:

400 Bad request: the request could not be understood by the server.

401.1 Unauthorized: Access is denied due to invalid credentials.

401.2 Unauthorized: Access is denied due to server configuration favoring an alternate authentication method.

401.3 Unauthorized: Access is denied because the ACL is set on the requested resource.

401.4 Unauthorized: Authorization failed by a filter installed on the Web server.

401.5 Unauthorized: ISAPI/CGI application authorization failed.

401.7 Unauthorized: Access is denied because of a URL authorization policy on the WEB server.

403 Forbidden: Access is denied.

403.1 Forbidden: Execution access is denied.

403.2 Forbidden: Read access is denied.

403.3 Forbidden: Write access is denied.

403.4 Forbidden: You need to view the resource using SSL.

403.5 Forbidden: You need to use SSL 128 to view the resource.

403.6 Forbidden: The IP address of the client is denied.

403.7 Forbidden: SSL client certificate required.

403.8 Forbidden: The DNS name of the client is denied.

403.9 Forbidden: Too many clients try to connect to the WEB server.

403.10 Forbidden: The WEB server is configured to deny execution access.

403.11 Forbidden: The password has changed.

403.12 Forbidden: The server certificate mapper denied client certificate access.

403.13 Forbidden: The client certificate has been revoked on the WEB server.

403.14 Forbidden: The directory list has been rejected on the WEB server.

403.15 Forbidden: The WEB server has exceeded the Client Access License limit.

403.16 Forbidden: The client certificate is malformed or not trusted by the WEB server.

403.17 Forbidden: The client certificate has expired or is not yet valid.

403.18 Forbidden: Unable to execute the requested URL in the current application pool.

403.19 Forbidden: The CGI cannot be executed for the client in this application pool.

403.20 Forbidden: Passport Login failed.

404 file or directory not found.

404.1 file or directory not found: The Web site cannot be accessed on the requested port.

It is important to note that the 404.1 error occurs only on computers that have multiple IP addresses. If a client request is received on a particular IP address/port combination and the IP address is not configured to listen on that port, IIS returns a 404.1 HTTP error. For example, if a computer has two IP addresses and only one of them is configured to listen on port 80, any request received on port 80 at the other IP address causes IIS to return a 404.1 error. This error should only be set at the service level, because it is returned to clients only when multiple IP addresses are used on the server.

404.2 the file or directory could not be found: The lock policy prohibits the request.

404.3 the file or directory could not be found: the MIME mapping policy prohibits the request.

405 The HTTP verb used to access this page is not allowed.

406 The client browser does not accept the MIME type of the requested page.

407 Initial proxy authentication is required by the Web server.

410 The file has been deleted.

412 A precondition set by the client failed when evaluated on the Web server.

414 The request URL is too long, so the Web server refuses to service the request.

500 Internal server error.

500.11 Server Error: The application on the WEB server is shutting down.

500.12 Server Error: The application on the WEB server is restarting.

500.13 Server Error: The WEB server is too busy.

500.14 Server Error: Invalid application configuration on the server.

500.15 Server error: Direct requests for Global.asa are not allowed.

500.16 Server Error: UNC authorization credentials are incorrect.

500.17 Server Error: The URL authorization store could not be found.

500.18 Server Error: The URL authorization store cannot be opened.

500.19 Server error: The data for this file is configured improperly in the metabase.

500.20 Server Error: The URL authorization domain could not be found.

500-100 Internal server error: ASP error.

501 Header values specify a configuration that is not implemented.

502 Bad gateway: The Web server, acting as a gateway or proxy, received an invalid response.

Concurrency Number Analysis

"Running Vusers" shows how the number of concurrent users changed during scenario execution. It displays the state of the Vusers and the number of Vusers that completed the script, and, combined with the transaction graphs, it helps determine the impact of the number of Vusers on transaction response time. Figure 1-7 shows the Vuser activity during the performance test of the OA system. We can see that the Vuser curve matches the settings in our scenario execution plan, indicating that during the run the Vusers behaved as we expected, and no Vuser failed. This also indirectly confirms that our parameterization is correct, because parameterizing with unique values will cause Vusers to fail if it is set up incorrectly. In the script we added code like this:


    if (atoi(lr_eval_string("{num}")) > 0) {
        lr_output_message("Login successful, continue execution.");
    } else {
        lr_error_message("Login failed, exit test.");
        return -1;
    }

The code above means that if the login fails, the Vuser exits the current iteration of the script. What could cause the login to fail? The parameterization set up earlier: once a Vuser is not assigned a valid login account, the login may fail, which causes that Vuser to stop running. Therefore, from the behavior in Figure 1-7, we can conclude that the parameterization is not a problem.

Figure 1-7 The number of concurrent graphs running

We also used a rendezvous point in the test script, so we can examine its behavior during scenario execution as well. Click "New Graph" on the left to open the dialog shown in Figure 1-8, expand the plus sign before "Vusers", double-click "Rendezvous" to add the rendezvous graph, then click "Close" to close the Add New Graph dialog.

Figure 1-8 Adding a collection point chart

As the rendezvous graph in Figure 1-9 shows, all users were released immediately after reaching the rendezvous point, which is consistent with the policy we set (release when all running Vusers arrive). Suppose instead that 10 Vusers were running with the same policy, but the rendezvous graph showed a maximum of only 7 Vusers released: that would mean some Vusers timed out, possibly because their responses timed out, and the cause could then be analyzed in detail together with the average transaction response time graph.

Figure 1-9 Assembly Point State diagram

Our test's Running Vusers graph is consistent with the rendezvous graph, indicating that the concurrent users executed correctly throughout the scenario and that the OA test server can handle the business operations of 7 concurrent users.

Response Time

From the performance test requirements we know one indicator requires that the page response time for the login and attendance business operations be less than 3 seconds. Did this test meet that requirement? Let us look at the "Average Transaction Response Time" graph (Figure 1-10), analyzed together with the Transaction Summary from the results summary.

Figure 1-10 Average transaction response time graph

From the lower part of the graph we can see that the action corresponding to login is "Submit_login" and the action corresponding to submitting the attendance business is "Submit_sign". Their "Average Time" values are 4.425 seconds and 0.848 seconds respectively. Taken at face value, the attendance transaction's 0.848 seconds is below the expected 3 seconds and meets the requirement, while login's 4.425 seconds exceeds 3 seconds and does not. But this reading is not correct: the login statistics include the scripted think time, which we did not remove. The actual transaction time of the login function is therefore 4.425 seconds - 3 seconds (think time) = 1.425 seconds, below the expected 3 seconds, so the login transaction also meets our requirement. In everyday performance testing, think time must be removed from the statistics. Adding think time simulates the real user environment; removing it from the statistics reflects the server's processing capacity more accurately. The two are not contradictory.

After "Average Time", look at "90 Percent Time", which to a certain extent measures each transaction's actual behavior more accurately: it is the value below which 90% of that transaction's response times fell. "Average Time" alone does not capture how the response time varied. For example, three response times of 1, 5, and 12 seconds average 6 seconds, and so do 5, 6, and 7 seconds, yet the second set is clearly more stable than the first. Therefore, when examining average transaction response time, first look at the overall trend of the curve: if it is relatively smooth with no large fluctuations, either "Average Time" or "90 Percent Time" will do; if the trend is irregular and fluctuates widely, do not use "Average Time" alone, as "90 Percent Time" may reflect reality better.
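The two three-sample sets above can be checked in a few lines. This sketch (Python, illustrative; it uses a simple nearest-rank percentile, and LoadRunner's own computation may differ) shows how a 90th-percentile figure exposes the instability that an identical average hides.

```python
def percentile_90(times):
    # Nearest-rank 90th percentile -- adequate for illustration.
    ordered = sorted(times)
    rank = max(int(0.9 * len(ordered) + 0.5), 1)
    return ordered[rank - 1]

unstable = [1, 5, 12]   # average 6 s, wildly varying
stable = [5, 6, 7]      # average 6 s, steady

print(sum(unstable) / 3, sum(stable) / 3)   # 6.0 6.0 -- identical averages
print(percentile_90(unstable))              # 12
print(percentile_90(stable))                # 7
```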

As Figure 1-10 shows, the curves of all the actions' average transaction response times are very smooth, so the difference between using "Average Time" and "90 Percent Time" is small either way. Here we use the most common statistic, "90 Percent Time". For the login business, "90 Percent Time" is 5.298 seconds - 3 seconds (think time) = 2.298 seconds; for the attendance business it is 1.469 seconds with no think time, so that is the real figure. Based on these calculations, the test results are recorded as shown in Table 1.

Table 1 Test Results comparison One
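The think-time arithmetic above can be written out as a small check. The sketch below (Python, illustrative; figures taken from Figure 1-10, and the 3-second think time applies only to the login action) subtracts the scripted think time from each "90 Percent Time" and compares the result against the 3-second requirement.

```python
REQUIREMENT = 3.0   # seconds, from the performance test requirements

# action: (90 Percent Time in seconds, scripted think time in seconds)
transactions = {
    "Submit_login": (5.298, 3.0),
    "Submit_sign":  (1.469, 0.0),
}

results = {}
for name, (p90, think) in transactions.items():
    results[name] = p90 - think          # remove think time first
    verdict = "pass" if results[name] < REQUIREMENT else "fail"
    print(f"{name}: {results[name]:.3f} s -> {verdict}")
```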

Number of hits per second

"Hits per Second" reflects the number of requests the client submits to the server each second. The more requests the client makes, the larger "Average Throughput (bytes/second)" should be, and the greater the number of requests, the greater the impact on average transaction response time; the three are often analyzed together during testing. Figure 1-11 shows the composite graph of "Hits per Second" and "Average Throughput (bytes/second)". Both curves are normal and essentially track each other, indicating that the server accepts the client's requests in time and returns results. If "Hits per Second" is normal but "Average Throughput (bytes/second)" is not, the server is accepting requests but returning results slowly, possibly because server-side processing is slow. If "Hits per Second" itself is abnormal, the problem is on the client side, usually caused by the network or by a recorded script that fails to correctly simulate user behavior. Each case needs its own analysis; these are only general suggestions.

Figure 1-11 Number of hits per second vs. throughput composite graph per second

For this test, "Hits per Second" and "Average Throughput (bytes/second)" are both normal, and the overall performance is good.
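The diagnostic rules described above can be condensed into a small decision helper. This is only a sketch (Python; the function name and wording are hypothetical), and deciding whether each curve is "normal" remains a manual judgment made against the expected load.

```python
def diagnose(hits_normal: bool, throughput_normal: bool) -> str:
    """Map the two curve judgments to the likely problem area."""
    if hits_normal and throughput_normal:
        return "server accepts requests and returns results in time"
    if hits_normal:
        return ("server accepts requests but returns results slowly; "
                "suspect slow server-side processing")
    return ("client-side problem: check the network and whether the "
            "script correctly simulates user behavior")

print(diagnose(True, True))    # the situation observed in this test
```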

In general, these two indicators are used for performance comparison and tuning: fix several conditions, vary another, and measure against these two indicators, which often works very well. For example, to compare two hardware platforms, deploy the software system with the same configuration on each, then use the same script, scenario design, and statistical method to analyze the results, and finally choose the better-performing configuration.

Business Success Rate

The "business success rate" indicator is mentioned in many systems, such as telecommunications, finance, and enterprise resource management. Take the China Construction Bank branch downstairs as an example: if a typical day's business is 20 accounts opened, 5 accounts closed, 300 deposits, 500 withdrawals, and 100 remittances, then testing its business system must consider the business success rate, generally required to be no less than 98%. What exactly does business success rate mean? Setting aside complex cases such as asynchronously processed business (mobile SIM activation is asynchronous), the business success rate is the transaction success rate: users generally treat one action as one business operation, and in LoadRunner scenario execution each such unit is called a transaction. The business success rate is therefore actually the transaction success rate, i.e. the pass rate. In the Transaction Summary we can clearly see the execution status of each transaction, as shown in Figure 1-12.

Figure 1-12 Transaction Status Chart

As can be seen, all the actions are green, that is, passed. Apart from the vuser_init and vuser_end transactions, each of the other transactions passed 2,163 times, which means that in roughly 30 minutes a total of 2,163 login-and-attendance business operations were completed. From this we can judge that the success rate of the login and attendance business in this test was 100%. The updated test results record is shown in Table 2.

Table 2 Test Results Comparison Chart II
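The success-rate figure follows directly from the transaction counts. A minimal sketch (Python, illustrative; counts from Figure 1-12, with vuser_init and vuser_end excluded as above):

```python
passed = 2_163    # passed transactions per business action
failed = 0        # no failed transactions in this run
stopped = 0       # no stopped transactions in this run

total = passed + failed + stopped
success_rate = passed / total

print(f"business success rate: {success_rate:.0%}")   # 100%
```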

System Resources

The system resource graphs show the utilization of the machine resources monitored during scenario execution; generally the CPU, memory, network, and disk of the machine are monitored. This test monitored the test server's CPU utilization and memory usage, as well as the processor queue length, as shown in Figure 1-13.

Figure 1-13 Test Server System resource monitoring results diagram

It can be seen that the curves of the three indicators, CPU utilization, available physical memory, and processor queue length, are fairly smooth, with average values of 53.582%, 83.456 MB, and 8.45 respectively. The test server's total physical memory is 384 MB, so memory utilization is (384 - 83.456) / 384 = 78.26%. Against this performance test's requirements (CPU utilization no more than 75% and physical memory usage no more than 70%), the memory utilization of 78.26% exceeds the expected 70%, so memory usage does not meet the target. According to the interpretation of Windows resource performance indicators, if "Processor Queue Length" stays above 2 it may indicate processor congestion; the value monitored here is 8.45 and remains at that level throughout, from which we can infer that the test server's CPU may also become a bottleneck. In addition, when the scenario had executed for 23.5 minutes, the monitored server reported that its counter data could no longer be accessed, so the operating system resource monitoring only covers the first 23.5 minutes of the run. This has a certain impact on the test results.
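The memory figure deserves a note: the counter reports available physical memory, so utilization must be derived from the machine's total. A sketch of the arithmetic above (Python, illustrative; the article truncates the result to 78.26%):

```python
total_mb = 384.0        # test server's total physical memory
available_mb = 83.456   # average available memory from Figure 1-13

# Utilization is (total - available) / total.
used_pct = (total_mb - available_mb) / total_mb * 100
print(f"memory utilization: {used_pct:.2f}%")   # ~78.27%, above the 70% target
```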

After obtaining the above data, the most recent test results record table is shown in table 3.

Table 3 Test Results Comparison chart three

From the data in the table above, the test as a whole reached the expected performance indicators, but other data, such as the processor queue length and memory usage, suggest that the hardware resources of the server under test need to be improved.

Web Page Breakdown

The Web page breakdown graphs evaluate whether page content affects transaction response time. Using them, you can analyze problematic elements of a website, such as images that download very slowly or links that cannot be opened.

Let's look at "Page Download Time Breakdown" in the Web page breakdown. Click "New Graph" on the left to open the dialog shown in Figure 1-14, expand the plus sign before "Web Page Diagnostics", and double-click "Page Download Time Breakdown"; after the monitoring graph appears, click "Close" to close the Add New Graph dialog.

Figure 1-14 Adding a Web page breakdown chart

In the resulting graph, Figure 1-15, we can see that of all the pages, the personal portal page "http://192.168.0.52:8080/oa/oa.jsp" has the longest download time.

Figure 1-15 Page Download Time Breakdown chart

Figure 1-16 details the distribution of time consumed by each page; Table 4, taken from the LoadRunner user manual, explains the meaning of each indicator. Using these indicators, we can easily determine which page and which request caused the response time to grow, or even the response to fail.

Figure 1-16 oa.jsp page download time distribution map

Table 4 Page Download time Breakdown indicator description

For this test, the page breakdown graphs show that each page's load time is basically as expected. Because the oa.jsp page integrates the user's personal work platform, it must retrieve a lot of data and assemble many images, so its longer load time is reasonable.

Web server resources

LoadRunner can provide all of the monitoring graphs above, but some monitoring targets it does not cover; hopefully new versions will support them. To monitor Tomcat, JBoss, or other Web servers, you can use the SiteScope tool, which is relatively complex to configure, depending on your needs. Here I monitored Tomcat with a trial version of ManageEngine Applications Manager 8; Figure 1-17 shows the resulting JVM utilization graph for Tomcat.

Figure 1-17 Tomcat JVM Usage Monitoring graph

We can clearly see that Tomcat's JVM memory usage keeps rising. Tomcat's configuration allocates it a total of about 100 MB of memory, and at the start of the test JVM usage was relatively low. Our test scenario ran from 15:58:40 to 16:29:42, lasting 31 minutes and 2 seconds. From 16:00 to 16:30, that is, during scenario execution, JVM usage rose continuously and did not level off after the request load stabilized. From this we can infer that if the scenario kept running, or the number of concurrent users increased, Tomcat would eventually run out of memory and report an "out of memory" error. Under normal circumstances, memory usage should level off in line with the trends of monitoring graphs such as "Hits per Second" and "Average Throughput (bytes/second)".

From the above process we can conclude that the problem in Figure 1-17 may have two causes:

1. Tomcat's memory allocation is insufficient;

2. The program code has defects that may cause a memory leak.

Workaround:

1. Allocate more memory to Tomcat. If Tomcat is started with catalina.sh or catalina.bat, you can add "set CATALINA_OPTS=-Xms300m -Xmx300m" to catalina.bat (or 'export CATALINA_OPTS="-Xms300m -Xmx300m"' to catalina.sh). If Tomcat is started as a Windows NT service, type regedit in Run to open the registry, then under "HKEY_LOCAL_MACHINE --> SOFTWARE --> Apache Software Foundation --> Process Runner 1.0 --> Tomcat5 --> Parameters" modify two properties, JvmMs and JvmMx, as shown in Figure 1-18.

2. Check the program code and investigate with a memory-leak detection tool.

Figure 1-18 Modifying the JVM parameters for Tomcat

Database server resources

Monitoring database server resources is relatively complex. The commonly used databases today are MySQL, SQL Server, Oracle, DB2, and so on. LoadRunner provides monitoring methods for several databases, but not for MySQL. Since LoadRunner does not provide one, we find our own monitoring tool; I use Spotlight. Its advantages are that the connection is simple to configure, it can monitor not only the database but also operating system resources, and the monitoring results are intuitive. The monitoring graph shows the execution of SQL statements in the MySQL database during scenario execution: the trends of the "Selects (query)" and "Inserts (insert)" statements are smooth throughout the run, and no errors were found during the test, indicating that MySQL processed the related business normally. If either of these two SQL statement curves fluctuated wildly, a page failure might have occurred during scenario execution, because statements failing to execute would mean some pages did not load or some functions were not exercised. In this test, the OA system's "oa.jsp" page issues a large number of "Selects (query)" statements, and the attendance operation issues "Inserts (insert)" statements, so any problem with them would indicate errors such as pages failing to open or operations failing to complete.

Reprint: http://www.uml.org.cn/Test/201503064.asp

