Performance Test Steps
Performance Test Steps (i) - Getting Familiar with the Application
This is, without doubt, one of the most critical steps in the entire performance testing process.
We must understand the architecture of the application.
Take the type of application I am familiar with as an example. With the application architecture in view, we know what we need to emulate: plain static HTML file requests, ordinary servlet and JSP requests, AJAX requests, remote call requests, and so on.
We must also understand the functional logic of the application.
Performance Test Steps (ii) - Test Requirements
The test requirements we receive are often stated like this:
The system must support 1 million UVs (unique visitors who log in to the system each day).
The implication is: with the current hardware performance and quantity, the system can support 1 million UVs.
However, the metrics we actually work with are throughput, response time, and so on.
Throughput: the number of requests the system can process per second, which represents system capacity from the server's perspective.
Response time: the time from sending a request until the first byte of the response is returned, which represents system responsiveness from the user's perspective.
So we ask our development colleagues: can this requirement be translated into the throughput and response time we are familiar with?
...
The answer is often no.
What to do then: rely on our own experience to translate the 1 million UVs into a set of concrete metrics.
Response time: according to some figures published abroad, the response time of an ordinary operation should not exceed 3 to 5 seconds; for important operations, such as checkout, it should not exceed 15 seconds.
Throughput: can be estimated from similar products already in production, or estimated with the 80/20 rule (roughly, 80% of the daily requests arrive within 20% of the time). We usually use the 80/20 rule.
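As a rough sketch of how such a conversion might look (all numbers except the 1 million UVs are illustrative assumptions, not figures from the requirement):

    # Illustrative 80/20 estimate; only daily_uv comes from the requirement.
    daily_uv = 1000000            # from the requirement
    pages_per_uv = 10             # assumed average page views per visitor
    requests_per_page = 5         # assumed requests (HTML, AJAX, static files) per page view

    daily_requests = daily_uv * pages_per_uv * requests_per_page
    # 80/20 rule: 80% of the requests arrive within 20% of the day.
    peak_seconds = 24 * 3600 * 0.2
    peak_throughput = daily_requests * 0.8 / peak_seconds
    print("target throughput: about %.0f requests/second" % peak_throughput)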
Even with response time and throughput metrics in hand, the test requirement is still not clear.
What is the purpose of our testing?
Is it to verify that the current hardware and software configuration supports 1 million UVs?
Is it to find the maximum number of UVs the current hardware and software configuration can support?
Is it to help development find performance bottlenecks?
The answer is often: all of the above!
In our experience, what development needs usually comes down to the following (of course, development rarely spells it out in this much detail ^_^):
First, verify that the system can support 1 million UVs.
If it cannot, find the performance bottleneck.
After the main performance bottlenecks are resolved, estimate how many UVs can be supported, and if that still falls short of 1 million, estimate how many machines need to be added.
If it can support 1 million, keep applying load and see how the system performs as the load approaches 3 million UVs.
With this refinement, the requirements are basically clear.
Performance Test Steps (iii) - Test Preparation
Test preparation includes preparing the test client machines, the test data, and the test scripts.
Client machines:
There must be enough of them; otherwise, if the bottleneck ends up on the client side, the server side cannot be properly evaluated.
They must have good network connectivity to the server; otherwise, if the bottleneck is in the network, the server side again cannot be evaluated. Specifically:
Network bandwidth must be higher than the server's throughput.
Network bandwidth must be stable.
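A quick way to sanity-check the bandwidth requirement is to multiply the target throughput by the average response size; the numbers below are purely illustrative:

    # Illustrative bandwidth check; throughput and response size are assumed values.
    throughput = 2000                   # target requests per second
    avg_response_bytes = 20 * 1024      # assumed average response size: 20 KB
    required_mbps = throughput * avg_response_bytes * 8 / 1000000.0
    print("required bandwidth: about %.0f Mbit/s" % required_mbps)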
Test data
If the feature under test involves databases or caches, a large amount of data often has to be prepared before performance bottlenecks become visible, and this is not easy.
If the system is already in production, the data can be copied from production; if it is not, data of a scale comparable to production has to be constructed.
For example, to test group chat performance, we first need to register a large number of users and then add these test users to chat groups.
The scripts that prepare the test data are sometimes larger than the test scripts themselves.
When there is no way to construct a large data volume and the cache still needs to be exercised, we sometimes shrink the cache in proportion to the data volume so that the test results are as accurate as possible.
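For the group chat example above, a data preparation script might look roughly like the sketch below; register_user, add_to_group, and the counts are hypothetical placeholders, since the real calls depend on the interfaces the application exposes:

    # Hypothetical data preparation sketch: register test users and put them into chat groups.
    USERS = 100000          # assumed number of test users
    GROUP_SIZE = 50         # assumed members per chat group

    def register_user(name):
        pass                # placeholder: call the registration interface or insert rows directly

    def add_to_group(group_id, name):
        pass                # placeholder: call the group membership interface

    for i in range(USERS):
        name = "perf_user_%06d" % i
        register_user(name)
        add_to_group(i // GROUP_SIZE, name)   # fills groups of GROUP_SIZE users each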
Test scripts
Grinder scripts are implemented in Jython.
Implementing the test scripts tends to take a long time, because the scripts depend on the details of the application implementation and have to be worked out in communication with development. This is one of the reasons why you need to understand the application architecture.
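A minimal Grinder/Jython script has roughly the following shape (the URL is a placeholder; real scripts grow much larger once login, AJAX calls, and correlation are added):

    # Minimal Grinder 3 script sketch in Jython; the URL is a placeholder.
    from net.grinder.script import Test
    from net.grinder.script.Grinder import grinder
    from net.grinder.plugin.http import HTTPRequest

    test = Test(1, "Home page")
    request = test.wrap(HTTPRequest())    # statistics are recorded against Test 1

    class TestRunner:
        # Grinder creates one TestRunner instance per worker thread.
        def __call__(self):
            result = request.GET("http://localhost:8080/index.html")
            grinder.sleep(0)              # think time; see the note on sleep time below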
About sleep time
For realistic simulation, sleep time should in principle mimic real user think time, with a certain random deviation.
However, on the test client, sleep time means many more test threads have to be scheduled to generate the same load, which wastes client system resources.
The smaller the sleep time, the greater the throughput a client can simulate, so in practice we tend to set the sleep time to 0.
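In a Grinder script the two approaches look roughly like this (the millisecond values are illustrative):

    from net.grinder.script.Grinder import grinder

    # Realistic simulation: mean think time of 3000 ms with a spread of 1000 ms (illustrative values).
    grinder.sleep(3000, 1000)
    # Maximum simulated load from a limited number of client threads: no think time at all.
    grinder.sleep(0)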
Performance Test Steps (iv) - Test Execution
During test execution, we need to monitor both the test client and the server, including the server-side application:
Client system resources (CPU, I/O, memory)
Server system resources (CPU, I/O, memory)
The state of the JVM on the server
The server-side application itself, to check whether there are any exceptions
Metrics such as response time and throughput
For system resource monitoring, the tools available under Linux include vmstat, top, meminfo, and so on.
For JVM monitoring, tools such as JProfiler can be used, as well as jmap, jhat, and so on under Linux.
Response time, throughput, and related metrics are provided by Grinder.
All of this information usually needs to be archived after the test ends for detailed analysis.
We developed a set of scripts that collect, at a fixed frequency, the vmstat and top output of the test clients and servers together with the Grinder logs, and extract the useful information to keep for later analysis.
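A very reduced sketch of such a collection script, here as plain Python using subprocess (the sampling interval, sample count, and output file are illustrative; the real scripts also gather top output and the Grinder logs from the remote machines):

    # Reduced sketch of periodic resource sampling; interval, count, and file name are illustrative.
    import subprocess
    import time

    INTERVAL = 5          # seconds between samples
    SAMPLES = 120         # number of samples to take during the run

    with open("vmstat.log", "a") as log:
        for _ in range(SAMPLES):
            # capture one vmstat report together with a timestamp
            out = subprocess.check_output(["vmstat"], universal_newlines=True)
            log.write("%s\n%s\n" % (time.ctime(), out))
            time.sleep(INTERVAL)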
Each test execution will typically add a lot of data. Consider the impact of this execution on the data volume; if the change would affect subsequent tests, clean up the data.
Performance Test Steps (v) - Test Analysis
Test analysis is usually closely tied to test monitoring: during the test we watch the state of the system with various monitoring tools so that problems are detected in time.
The common problems are:
Memory issues
Contention for limited shared resources
Memory issues
The memory footprint of Tomcat as shown by top is not accurate; it needs to be examined with a dedicated memory analysis tool.
Tools: jmap, jhat, and jstat can take memory snapshots and show the details of the heap.
The garbage collection configuration can affect system performance; if objects are created and destroyed in large numbers, you can see the system throughput vary periodically with garbage collection.
In theory there can be memory leaks in Java, but we have not found any in the applications we tested.
However, in some system architectures memory does become the bottleneck. For example, in a chat system we tested, each long-lived connection needed about 5 MB of memory, so a server with 10 GB of memory could maintain only about 2000 long connections.
Contention for shared resources
There is a lot of contention for limited resources: for example, a shared object in the service layer, a database connection, or a heavily used table in the database.
A shared resource can be held by only one thread at a time, and the other threads must wait, which easily puts many threads into a timed-waiting state. With JProfiler you can take thread snapshots and work out how to improve the situation.
Performance Testing Experience Exchange - Intermittent Issues
As with functional testing, performance testing has its intermittent problems.
Faced with such problems, we need to bring out the testers' revolutionary spirit and trace them to the root. The causes we often find include:
Changes in external factors. For example, over several test runs the results were sometimes good and sometimes bad, with no pattern to follow; in the end it turned out to be caused by an unstable network.
Changes in the returned response. Sometimes the content of the second request depends on information in the first response (so-called "correlation"), which is usually implemented by parsing strings. This is generally not very robust, and when the response changes it can produce errors.
If the application server is a cluster, a request from one user may be answered correctly by one server, but when another server handles the request it may not have that user's information and therefore returns an error.
Performance Testing Experience Exchange - Client Concurrency
The test client inevitably starts many threads to simulate high concurrency, so there are bound to be thread concurrency issues on the client itself. For example:
During parameterization, the array that stores the parameters is a shared object. If each thread is supposed to read a different parameter on each loop, updates to the array index must take concurrency into account (see the sketch after this list).
Likewise, if the script calls System.out, be aware that it is also a shared object; calling System.out too often makes threads wait on it and degrades client performance.
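A minimal sketch of thread-safe parameter distribution in a Jython or Python script (the parameter values and counts are illustrative):

    import threading

    params = ["user_%04d" % i for i in range(1000)]   # illustrative shared parameter array
    index = [0]                                       # shared position in the array
    lock = threading.Lock()

    def next_param():
        # Without the lock, two threads could read the same index or skip a value.
        lock.acquire()
        try:
            value = params[index[0] % len(params)]
            index[0] += 1
        finally:
            lock.release()
        return value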
Performance Testing Experience Exchange - Testers
Because performance testing covers such a wide range of topics, the demands on testers are high. I think performance testers should develop the following capabilities:
As mentioned earlier, a thorough understanding of the application architecture.
Communication skills: during testing, we must cultivate the habit of communicating frequently with development in order to work efficiently.
Problem-solving skills: many problems come up while writing scripts or executing tests. First of all, do not panic; consider the possible causes, then locate and verify them step by step. This, of course, requires continuously accumulating experience with things like debugging.
"Go" performance test steps