14. Performance Testing
14.1 Definition of Performance Testing
Performance has two aspects: it is an indicator of the degree to which a software system or component meets its timeliness requirements, and it is a characteristic of a software product that can be measured in terms of time. Timeliness is measured by response time or throughput. Response time is the time required to respond to a request.
14.1.1 Performance Testing from the User's Perspective
Take a typical Web application as an example: the user cares about how quickly the software responds to his or her actions.
Here, response time = rendering time + system response time.
14.1.2 Performance Testing from the Administrator's Perspective
The administrator focuses on the system response time; the time consumed on the user's client is not considered. System response time includes time spent on the network, on the server, and so on. The administrator also cares about system state, such as resource utilization, system scalability, system capacity, and system stability.
14.1.3 Performance Testing from the Developer's Perspective
Developers focus on how to improve software performance by adjusting the design and code implementation, or by tuning system settings, and on how to identify and resolve defects caused by multi-user access during design and development. Performance is considered in terms of system architecture, database design, code quality, and so on.
14.2 Key Terms in Performance Testing
14.2.1 Response Time
Response time is the time required to respond to a request: response time = rendering time + system response time.
Where:
(1) Rendering time: the time the client takes to render the page after the data is received.
(2) System response time: the time from when the request is issued until the client receives the data.
From a design perspective, a better user experience is for the front end to show a progress bar or display data progressively while waiting for results.
Response time can be decomposed further: network transfer time + application delay time (Web server delay time + DB delay time).
There is no single standard for acceptable response time. A common rule of thumb for page response time is: 2 seconds is very attractive, 5 seconds is relatively good, and 10 seconds is the limit of tolerance. The actual target should be set according to the specific situation.
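As a minimal sketch of how system response time (request sent to data received, excluding client rendering) might be measured, the snippet below times an HTTP request with Python's standard library; the URL and the use of the 2/5/10-second rule of thumb are placeholders for illustration only.

```python
# Minimal sketch: measure system response time (request sent -> data received),
# excluding client-side rendering time. The URL is a hypothetical placeholder.
import time
import urllib.request

URL = "http://example.com/some/page"  # hypothetical page under test

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=30) as resp:
    resp.read()  # wait until the full response body has been received
elapsed = time.perf_counter() - start

if elapsed <= 2:
    rating = "very attractive"
elif elapsed <= 5:
    rating = "relatively good"
elif elapsed <= 10:
    rating = "at the limit of tolerance"
else:
    rating = "unacceptable"
print(f"system response time: {elapsed:.2f}s ({rating})")
```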
14.2.2 Number of concurrent users
Number of concurrent system users: the number of users accessing the system at the same time. This determines the maximum load the server must bear; the focus is on the instantaneous maximum number of accesses.
Number of concurrent business users: from the user's point of view, over a relatively long period of time there is a roughly fixed number of users accessing the system.
Where:
(1) Number of system users: the total number of users using the system.
(2) Number of online users: the number of users online at the same time.
There are two common ways to do concurrency testing:
(1) Under concurrency, test according to the different business operations (for a given operation: how many people use it together, when they start, and for how long). This is closer to a business concurrency test.
(2) With a fixed number of concurrent users, all perform the same operation (query, modify, add, delete). This is closer to a system concurrency test.
Formula for estimating the average number of concurrent users: C = n × L / T
where C is the average number of concurrent users, n is the number of login sessions in the period, L is the average length of a login session, and T is the length of the examined period.
Peak number of concurrent users: C_peak ≈ C + 3 × √C
This estimate assumes that login sessions follow a Poisson distribution.
For example, an OA system has 3,000 users in total, of whom about 400 access it each day; each user is online about 4 hours per day within an 8-hour working day. The average number of concurrent users is C = 400 × 4 / 8 = 200, and the peak is about 200 + 3 × √200 ≈ 242.
In practice, using finer-grained time periods or taking business peaks and troughs into account gives a more accurate estimate of concurrent users.
A rougher rule of thumb is C = N / 10, i.e. taking 10% of the number of users who access the system per day as the average number of concurrent users.
Peak concurrency can then be estimated as Cm = r × C, where r is an adjustment factor.
Analyzing the Web server logs gives a more accurate maximum number of concurrent user accesses.
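The formulas above can be turned into a small calculation script. The sketch below is only an illustration: it reproduces the OA-system example (400 login sessions per day, 4-hour sessions, 8-hour working day) and the rough C = N/10 rule; all inputs are the example's own assumptions.

```python
# Sketch: estimating average and peak concurrent users from the formulas above.
import math

def avg_concurrent_users(n_sessions: int, session_hours: float, period_hours: float) -> float:
    """C = n * L / T  (average number of concurrent users)."""
    return n_sessions * session_hours / period_hours

def peak_concurrent_users(c_avg: float) -> float:
    """C_peak = C + 3 * sqrt(C), assuming login sessions follow a Poisson distribution."""
    return c_avg + 3 * math.sqrt(c_avg)

# OA system example: 400 users per day, each online 4 hours within an 8-hour day.
c = avg_concurrent_users(400, 4, 8)                         # 200.0
print("average concurrent users:", c)
print("peak concurrent users:   ", round(peak_concurrent_users(c)))  # about 242

# Rough rule of thumb: 10% of the daily user count as the average concurrency.
daily_users = 400
print("rule-of-thumb average:   ", daily_users / 10)
```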
14.2.3 throughput
Throughput is the number of client requests the system processes per unit of time; it reflects the load-carrying capacity of the software system.
It is usually expressed as requests/second or pages/second.
From a business perspective: visitors/day or transactions processed/hour.
From a network perspective: bytes/day, i.e. network traffic.
Roles of throughput:
(1) To assist in designing performance test scenarios and to measure whether a scenario has achieved its design objectives;
(2) To assist in locating performance bottlenecks. For example, throughput measured in bytes/second is constrained mainly by the network infrastructure, server architecture, and application server, while hits per second points mainly at the application server and application code.
When no performance bottleneck has been reached, throughput can be calculated as F = N_VU × R / T, where N_VU is the number of virtual users, R is the number of requests (clicks) issued by each virtual user, and T is the test duration.
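As a hypothetical illustration of the formula, 100 virtual users each issuing 12 requests over a 600-second test give F = 100 × 12 / 600 = 2 requests/second.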
14.2.4 Performance Counters
Performance counter: a data metric that describes the performance of a server or operating system.
Resource utilization: the usage status of the system's various resources.
14.2.5 Think Time
Think time: from a business perspective, the interval between successive requests while a user is performing operations.
Think time is related to the number of iterations, the number of concurrent users, and throughput.
Calculation formula: R = T / TT
where R is the number of requests issued by each user, T is the test duration, and TT is the think time.
General steps to calculate think time:
(1) First calculate the number of concurrent users of the system;
(2) Measure the system's average throughput;
(3) Derive the average number of requests issued by each user;
(4) Calculate the think time according to the above formula.
If the purpose of the test is to verify that the application has the expected processing capability, try to simulate the user's real think time; if the goal is a more general study, such as understanding the system's performance under pressure or its capacity for planning purposes, a think time of 0 can be used.
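Following the four steps above, a think-time estimate can be derived directly from the measured concurrency and throughput. The sketch below uses hypothetical measurements (200 concurrent users, an observed throughput of 100 requests/second, over a 1,800-second test) purely for illustration.

```python
# Sketch: deriving think time from concurrency and throughput (hypothetical inputs).
def think_time(concurrent_users: float, throughput_rps: float, test_seconds: float) -> float:
    # Step (3): average number of requests issued by each user during the test,
    #   R = F * T / C   (rearranged from the throughput formula F = C * R / T)
    requests_per_user = throughput_rps * test_seconds / concurrent_users
    # Step (4): think time from R = T / TT  =>  TT = T / R
    return test_seconds / requests_per_user

# Steps (1) and (2): assume 200 concurrent users and 100 requests/second were measured.
print(think_time(concurrent_users=200, throughput_rps=100, test_seconds=1800))  # 2.0 seconds
```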
14.3 Performance Test Methods
14.3.1 Performance Test (Performance Testing)
Simulate the pressure and usage scenarios of production operation to test whether the system's performance meets the production performance requirements, and to verify the system's capability under the given operating conditions.
The process involves identifying user scenarios, specifying the performance metrics of interest, executing the tests, and analyzing the results.
Example performance objective: with 100 concurrent users, the system completes a given business operation with a response time of less than 5 seconds.
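As a minimal illustration of checking such an objective (100 concurrent users, response time under 5 seconds), the sketch below drives concurrent requests with a thread pool and compares the slowest observed response against the limit. The URL, user count, and threshold are placeholders; a real test would normally use a dedicated tool such as LoadRunner or JMeter.

```python
# Sketch: fire requests from 100 simulated concurrent users and check the
# slowest response time against a 5-second objective. The URL is hypothetical.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/business/operation"   # hypothetical operation under test
USERS = 100
LIMIT_SECONDS = 5.0

def one_request(_: int) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    times = list(pool.map(one_request, range(USERS)))

worst = max(times)
print(f"slowest response: {worst:.2f}s ->", "PASS" if worst < LIMIT_SECONDS else "FAIL")
```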
14.3.2 load test (load testing)
Keep increasing the load on the system under test until a performance metric exceeds its expected limit or a resource becomes saturated, in order to find the system's processing limit and provide data for system tuning. This is also known as scalability testing.
Example limit description: under given conditions the system supports at most 120 concurrent users, or handles at most 2,100 business transactions within 1 hour.
Example expected metrics: response time not exceeding 10 seconds, or average server CPU utilization below 65%.
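A load test can be sketched as a loop that raises the number of concurrent users step by step until one of the limits above is reached. In the sketch below, run_load() is a hypothetical stand-in that merely simulates measurements with made-up linear models; a real load test would drive the system and measure it.

```python
# Sketch of a stepped load test: keep adding users until response time exceeds
# 10 s or average CPU utilization exceeds 65%. run_load() is hypothetical.
def run_load(concurrent_users: int) -> tuple[float, float]:
    """Hypothetical stand-in: simulate (response time in s, CPU utilization in %)
    for illustration only; replace with a real load generator and monitor."""
    response_time = 0.5 + concurrent_users * 0.08      # made-up linear model
    cpu = min(100.0, concurrent_users * 0.6)           # made-up utilization model
    return response_time, cpu

users = 10
while True:
    response_time, cpu = run_load(users)
    print(f"{users} users: {response_time:.1f}s, CPU {cpu:.0f}%")
    if response_time > 10 or cpu > 65:
        print("processing limit reached at", users, "concurrent users")
        break
    users += 10   # step size of the ramp
```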
14.3.3 Stress Test (Stress Testing)
Test, under saturated resource conditions (for example CPU or memory near saturation), how many sessions the system can still handle and whether it produces errors.
(1) The main purpose is to examine the application's performance while the system is under pressure. Access pressure is increased, for example by raising the number of concurrent users, so that the application system's resource usage stays at a certain level; performance at that level is then checked, such as whether error messages appear and what the response time is.
(2) By simulating load or other means, drive the system's resource usage to a high level, for example "CPU at 75%, memory at 70%", and observe whether errors occur and how the response time behaves. Other resources can also be targeted, such as JVM memory, the number of DB connections, and DB CPU.
(3) Test the system's stability: if a system can run stably under pressure for a period of time, it should achieve a satisfactory level of stability under normal operating conditions.
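During a stress or stability run, resource utilization can be sampled periodically to confirm it stays in the intended band (for example "CPU 75%, memory 70%"). The sketch below uses the third-party psutil package; the sampling interval, sample count, and thresholds are illustrative assumptions.

```python
# Sketch: sample CPU and memory utilization during a stress run (requires psutil).
import time
import psutil

CPU_TARGET = 75.0      # illustrative thresholds from the example above
MEM_TARGET = 70.0
SAMPLES = 10           # number of samples; a real run would last much longer

for _ in range(SAMPLES):
    cpu = psutil.cpu_percent(interval=1)      # % CPU averaged over the last second
    mem = psutil.virtual_memory().percent     # % of physical memory in use
    flag = "OK" if cpu <= CPU_TARGET and mem <= MEM_TARGET else "over target"
    print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%  {flag}")
    time.sleep(1)
```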
14.3.4 Configuration Test (Configuration Testing)
Adjust the software/hardware environment of the system under test to understand how much different environments affect system performance and to find the optimal allocation of the various resources.
(1) Understand how much various factors affect system performance, in order to determine which tuning operations are most worthwhile.
(2) For capacity planning, evaluate what adjustments are needed to achieve the required system scalability.
14.3.5 concurrency Test (Concurrency testing)
By simulating concurrent user access, test for deadlocks or other performance problems when multiple users access the same application, module, or data record at the same time.
(1) Discover problems that may be hidden in the system under concurrent access.
(2) Focus on typical problems such as memory leaks, deadlocks, long transactions, and thread/process synchronization failures.
Concurrency testing can verify the soundness of an architecture or design, and can also be used for code-level inspection and problem location, for example with tools such as JProfiler and JProbe.
14.3.6 Reliability Test (Reliability Testing)
Apply a certain business load to the system, for example keeping resource utilization at 70-90%, and let the application run continuously for a period of time to test whether the system can run stably under that condition.
(1) Verify whether the system supports long-term stable operation;
(2) The system needs to run under load for an extended period of time;
(3) Pay attention to the system's running state, in particular whether memory usage, CPU usage, and response time change noticeably over time.
14.3.7 Failure Recovery Test (Failover testing)
This test is designed for systems with redundant backups and load balancing. It verifies that, if part of the system fails, users can continue to use the system, and if so, how severely they are affected. It is also concerned with how many users can still be served when the problem occurs and what contingency measures should be taken. In general, such tests are required only for systems with explicit requirements on continuous-operation metrics.
14.4 Performance Test Analysis Methods
14.4.1 General Steps of Performance Testing
The general steps for performance testing are as follows:
(1) Set goals and analyze the system;
(2) Select the test metrics;
(3) Learn the relevant techniques and tools;
(4) Develop evaluation criteria;
(5) Design the test scenarios and use cases;
(6) Run the test cases;
(7) Analyze the test results.
14.4.2 Evaluation Methods for Performance Testing
14.4.2.1 The 80-20 Principle Method
Use the following example to illustrate.
200 users use the client software for business processing (so the system must support at least 200 concurrent users), and the total volume of business handled by the software is 20 million transactions per year.
The following assumptions are used when estimating the test intensity:
The annual business volume is concentrated in 10 months, 20 business days per month, 8 hours per working day;
Using the 80-20 principle, 80% of each day's business is completed in 20% of the working time, i.e. 80% of the daily business is completed within 1.6 hours.
The estimated test load is as follows:
Last year about 20 million transactions were handled, of which 15% required 3 requests to the application server per transaction, 70% required 2 requests per transaction, and 15% required 1 request per transaction. Based on past statistics, business volume grows about 15% per year; to cover the next three years of business development, the test should use twice the current business volume.
Total requests per year: (20,000,000 × 15% × 3 + 20,000,000 × 70% × 2 + 20,000,000 × 15% × 1) × 2 = 80,000,000 requests/year.
Requests per day: 80,000,000 / 200 working days = 400,000 requests/day.
Requests per second: (400,000 × 80%) / (8 × 3600 × 20%) ≈ 55.6 requests/second.
So under normal circumstances the application server should be able to handle at least 56 requests/second.
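The estimate above can be reproduced with a short script; all of the inputs are the assumptions stated in the example.

```python
# Sketch: reproducing the 80-20 test-intensity estimate above.
annual_business = 20_000_000          # transactions per year
growth_factor = 2                     # sized for roughly the next three years
requests_per_tx = {3: 0.15, 2: 0.70, 1: 0.15}   # requests per transaction : share

requests_per_year = growth_factor * sum(
    annual_business * share * n for n, share in requests_per_tx.items()
)                                     # 80,000,000

working_days = 10 * 20                # 10 months x 20 business days
requests_per_day = requests_per_year / working_days           # 400,000

# 80% of the daily requests arrive in 20% of the 8-hour day (1.6 hours).
peak_rps = (requests_per_day * 0.8) / (8 * 3600 * 0.2)         # about 55.6
print(f"{requests_per_year:,.0f} req/year, {requests_per_day:,.0f} req/day, {peak_rps:.1f} req/s")
```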
14.4.2.2 Performance Degradation Curve Analysis Method
The performance degradation curve describes how performance falls off as the number of users increases. The "performance" here can be response time, throughput, or hits per second; in general it mainly refers to response time.
A typical response time degradation curve (Figure 1.6) can be divided into the following regions:
(1) Single-user region: the system's response time for a single user; useful as a reference value for performance.
(2) Performance plateau: the best performance that can be expected without further tuning; this region can be used as a baseline or benchmark.
(3) Stress region: where performance begins to drop slightly; the maximum recommended user load is typically at the start of this region.
(4) Inflection point: the point at which performance begins to decline sharply.
These regions identify the range in which the system performs best, the range in which performance starts to deteriorate, and the point at which it drops sharply. In performance testing, finding these regions and the inflection point shows where performance bottlenecks arise.
Degradation curve analysis therefore focuses on the different regions of the curve and the corresponding inflection point; identifying them provides the basis for bottleneck identification and performance tuning.
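To make the idea concrete, the sketch below scans a (users, response time) series and flags the first point where response time jumps sharply relative to the previous step; the sample data points and the 50% jump threshold are made up for illustration only.

```python
# Sketch: locate the inflection point in a response-time degradation curve.
# The data points and the 50% jump threshold are illustrative only.
curve = [  # (concurrent users, average response time in seconds)
    (1, 0.4), (25, 0.5), (50, 0.6), (75, 0.7),
    (100, 0.9), (125, 1.2), (150, 2.6), (175, 6.0),
]

def find_inflection(points, jump=0.5):
    """Return the first point whose response time rises by more than `jump`
    (a fraction, e.g. 0.5 = 50%) relative to the previous point."""
    for (u_prev, t_prev), (u, t) in zip(points, points[1:]):
        if (t - t_prev) / t_prev > jump:
            return u, t
    return None

print(find_inflection(curve))   # (150, 2.6) with the sample data
```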
14.4.3 The LoadRunner Performance Testing Process
LoadRunner divides the performance testing process into 6 steps: plan the test, design the test, create VU scripts, create test scenarios, run the test scenarios, and analyze the results.
The test process is as follows:
(1) Test planning phase
Mainly collect the test requirements and determine the typical scenarios;
(2) Test design phase
Mainly design the test cases;
(3) VU script creation phase
Create scripts based on the designed test cases;
(4) Test scenario creation phase
Design and set up the test scenarios, including the monitoring metrics;
(5) Test scenario execution phase
Run the created test scenarios and collect the corresponding data;
(6) Result analysis phase
Mainly analyze the results and produce the report.
14.5 Performance Test Model PTGM
The PTGM (Performance Testing General Model) divides the performance testing process into 6 steps: pre-test preparation, test tool introduction, test planning, test design and development, test execution and management, and test analysis.
The structure of the PTGM model is shown in the figure.
In the description of PTGM, the author defines appropriate activities for each step, covering the whole process from "test team building" to "test analysis", with detailed activity guidelines and reference templates for each activity.
National Computer Technology and Software Professional Technical Qualification (Level) Examination, "Software Evaluator" Exam Content Summary (14): Performance Testing