Full Picture:
Test Purpose:
Test scope & performance indicators:
How to handle differences between the test environment and the production environment server configuration:
Real-time CPU monitoring:
Real-time Memory monitoring:
Real-time Network monitoring:
Real-time Disk monitoring:
The all-purpose command:
Process tracking commands under Linux:
Linux monitoring commands:
Linux scheduled (timed) tasks:
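
As a quick preview of the monitoring topics listed above, here is a minimal sketch of representative commands (all are standard Linux tools; sar and iostat ship with the sysstat package, and exact options may vary by distribution):

    top                   # real-time view of CPU, memory and processes
    vmstat 2 5            # CPU, memory and swap statistics, sampled every 2 seconds, 5 times
    free -m               # current memory usage in MB
    sar -n DEV 2 5        # per-interface network throughput (sysstat)
    iostat -x 2 5         # extended disk I/O statistics (sysstat)
    strace -p <pid>       # trace the system calls of a running process
    crontab -e            # edit the current user's scheduled (timed) tasks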
The following is adapted from Baidu Encyclopedia----Software performance testing
Response time

I define the concept of "response time" as "the time required to respond to a request"; from the user's perspective it is the main embodiment of software performance. Response time can be divided into two parts: render time and system response time. "Render time" is the time the client needs to draw the page after it has received the response data, while "system response time" is the time from the moment the request is issued until the client receives the response data from the application server. Software performance testing generally does not focus on render time, because render time depends largely on the performance of the client machine. Note that I am not using the definition found in many software performance testing texts, where "system response time" is "the time from the start of the request until the client has received the last byte of data". The reason is that some presentation techniques can begin rendering before all of the data has arrived, which shortens the response time the user actually perceives. For the HNDLZCGLXT project we therefore use the former standard for the C/S part of the system and keep the latter standard for the B/S part.

Number of concurrent users

I distinguish between "concurrent users" and "simultaneous online users". My standard for "concurrent users" is that the number depends on the target business scenario of the object under test. Before the number of concurrent users can be determined, the users' business must first be decomposed and analyzed into typical business scenarios (that is, the most frequently used and most important business operations); the number of "concurrent users" is then derived from those scenarios with appropriate methods (mathematical models and formulas for computing the number of concurrent users). The reason is as follows. Suppose an application system has 500 people online at its peak. These 500 people are not all concurrent users, because at any given moment perhaps 50% of them are filling in complex forms (filling in a form places no load on the server; only clicking "Submit" does), 40% keep jumping from one page to another (continuously sending requests and receiving responses, which does load the server), and 10% are online but idle (no load on the server). Only the 40% really put pressure on the server. The example shows that what matters is not the number of people online but the number of business concurrent users, which in turn depends on the business logic and business scenarios. This is why we need items 4, 5 and 6 of Part 6 of this article, the software performance test documentation.
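
To make the 500-user example concrete, here is a minimal sketch of the arithmetic (the figure of 500 online users and the 50%/40%/10% split are the illustrative numbers from the scenario above, not measured data):

    online=500            # peak number of simultaneous online users
    navigating_pct=40     # share of users continuously sending requests, i.e. real server load
    concurrent=$(( online * navigating_pct / 100 ))
    echo "estimated business concurrent users: ${concurrent}"   # prints 200

Form-filling and idle users only generate load at isolated moments (for example, when "Submit" is clicked), so a scenario-based count like this is usually much smaller than the "simultaneous online" figure.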
Software Performance Testing Technology tree (II)----Linux server performance