JMeter is used to simulate heavy load on a server, a network, or another object, either to test how much stress the service can withstand or to analyze its overall performance under different load conditions.

Graph Results
JMeter test results include the number of samples, the latest sample, the average, the deviation, the throughput, and the median; it is worth remembering what each of these indicators means.
- No. of Samples: the total number of requests sent to the server during the test. If every request succeeds, it equals the number of requests you configured (threads × loop count).
- Latest Sample: the time the server took to respond to the most recent request.
- Throughput: the number of requests the server handles per minute.
- Average: the total elapsed time divided by the number of requests sent to the server.
- Deviation: the standard deviation of the server response times; it measures how widely the values are spread, in other words how the data is distributed.
- Median: the value below which half of the server response times fall and above which the other half fall.

The Aggregate Report listener shows a similar set of KPIs.
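To make these indicators concrete, here is a minimal Python sketch (not part of JMeter itself) that computes the same figures from a list of raw response times; the sample data and the test duration are made-up illustrative values.

```python
import statistics

# Hypothetical raw results: elapsed time in milliseconds for each request,
# in the order the responses came back (illustrative values only).
elapsed_ms = [120, 95, 110, 300, 87, 150, 98, 210, 105, 99]
test_duration_s = 2.0   # assumed wall-clock duration of the test run

no_of_samples = len(elapsed_ms)                    # total requests sent
latest_sample = elapsed_ms[-1]                     # response time of the most recent request
average = sum(elapsed_ms) / no_of_samples          # total elapsed time / number of requests
deviation = statistics.pstdev(elapsed_ms)          # spread of the response times
median = statistics.median(elapsed_ms)             # half the samples are faster, half slower
throughput_per_min = no_of_samples / test_duration_s * 60  # requests handled per minute

print(f"Samples: {no_of_samples}, Latest: {latest_sample} ms, "
      f"Average: {average:.1f} ms, Deviation: {deviation:.1f} ms, "
      f"Median: {median} ms, Throughput: {throughput_per_min:.0f}/min")
```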
Hand-crafting test scripts requires you to know the request URLs, the parameters they carry, and so on, which takes a lot of time, so you can record the script with the Badboy tool instead. Badboy is not open source, but it can be used for free to record a .jmx script and is easy to use.
The official website is: http://www.badboy.com.au/
Appendix:
On handling different levels of concurrency:
Below 50 QPS -- small web site
A simple small site can be put together in the simplest way; there is no real technical bottleneck in the short term, and as long as the server is not too bad it can basically cope.
50~100 QPS -- DB-limited
Most relational databases can keep each query to around 0.01 seconds. Even if your site issues only one DB query per page, 100 page requests already add up to a full second of database time, so you cannot guarantee that 100 requests complete within one second. At this stage you have to consider caching or spreading the load across multiple databases; whichever you choose, some refactoring of the site is unavoidable.
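As a back-of-the-envelope sketch (not a benchmark), the 0.01 s figure alone puts a hard ceiling on page throughput; the connection count and queries-per-page below are assumptions for illustration.

```python
# Back-of-the-envelope estimate of the DB-limited QPS ceiling.
db_latency_s = 0.01        # assumed time per DB query (~0.01 s, as above)
db_connections = 1         # assumed number of connections working in parallel
queries_per_page = 1       # assumed DB queries issued per page view

# Each connection can serve at most 1 / latency queries per second.
max_db_qps = db_connections / db_latency_s
max_page_qps = max_db_qps / queries_per_page

print(f"DB ceiling: {max_db_qps:.0f} queries/s "
      f"-> at most {max_page_qps:.0f} page views/s")
# With one connection and 0.01 s per query this is exactly 100 pages/s,
# which is why the 50~100 QPS range is where caching or multiple DBs
# become necessary.
```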
300~800 QPS -- bandwidth-limited
At present most servers use the 100 Mbit bandwidth provided by the IDC, which means the site's effective outbound bandwidth is roughly 8 MByte/s. Even if each page is only 10 KB, at this level of concurrency (8 MByte/s ÷ 10 KB ≈ 800 requests/s) the 100 Mbit link is already eaten up. The first things to consider are CDN acceleration, off-site caching, and load balancing across multiple machines.
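The same arithmetic shows where the bandwidth ceiling sits; the ~8 MByte/s effective bandwidth and the 10 KB page size are the figures assumed above.

```python
# Bandwidth-limited QPS estimate for a 100 Mbit uplink.
effective_bandwidth_bytes = 8 * 1024 * 1024   # ~8 MByte/s effective outbound bandwidth
page_size_bytes = 10 * 1024                   # assumed average page size of 10 KB

max_qps = effective_bandwidth_bytes / page_size_bytes
print(f"Bandwidth ceiling: about {max_qps:.0f} pages/s")   # ~800 pages/s
# Around 800 QPS the 100 Mbit link is saturated, which is why CDN,
# off-site caching, or additional machines are needed beyond this point.
```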
500~1000 QPS -- intranet bandwidth + memcached limited
Because of the key/value access pattern, each page issues more requests to memcached than it would direct queries to the DB. A pessimistic estimate puts memcached at around 20,000 requests per second, which sounds high, but in most cases the intranet bandwidth may be exhausted before that, and at around 8,000 QPS memcached itself already starts to become unstable. If the code is not well optimized, the pressure is passed straight down to the DB layer, so once the whole system reaches a certain threshold its performance drops off quickly.
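A rough sketch of which ceiling is hit first at this level; the page QPS, lookups per page, value size, and intranet bandwidth below are illustrative assumptions, and only the ~20k and ~8k memcached figures come from the text above.

```python
# Rough check of which ceiling is hit first: intranet bandwidth or memcached itself.
# Every number below is an illustrative assumption, not a benchmark.
page_qps = 1000                                # target page views per second
cache_requests_per_page = 10                   # key/value lookups per page (more than DB queries)
avg_value_bytes = 10 * 1024                    # assumed average cached value size
intranet_bandwidth_bytes = 100 * 1024 * 1024   # assumed ~100 MByte/s gigabit intranet
memcached_unstable_qps = 8_000                 # point where memcached starts to wobble (per the text)

cache_qps = page_qps * cache_requests_per_page
cache_traffic = cache_qps * avg_value_bytes

print(f"memcached load: {cache_qps} req/s (unstable above ~{memcached_unstable_qps})")
print(f"intranet traffic: {cache_traffic / 2**20:.0f} MB/s "
      f"of {intranet_bandwidth_bytes / 2**20:.0f} MB/s available")
# Whichever limit is crossed first becomes the bottleneck; if neither the cache
# nor the network can absorb the load, it falls through to the DB layer.
```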
1000~2000 QPS -- fork/select and lock-model limited
In a word: the threading model determines throughput. Whatever kind of lock is most common in your system, file-system access locks are a disaster at this level. The system can no longer have a central node: all data must be stored in a distributed way and processed in a distributed way. In short, the keyword is distribution.
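To see why the locking model caps throughput, here is a minimal, illustrative Python sketch (not a real server): with a single global lock, every request serializes on the lock hold time no matter how many worker threads exist. All numbers are assumptions.

```python
import threading
import time

LOCK_HOLD_S = 0.001          # assumed time spent inside the critical section per request
DURATION_S = 2.0             # how long to run the experiment
global_lock = threading.Lock()
completed = 0

def worker(stop_at):
    global completed
    while time.monotonic() < stop_at:
        with global_lock:            # every request funnels through one lock
            time.sleep(LOCK_HOLD_S)  # simulated work done while holding the lock
            completed += 1

stop_at = time.monotonic() + DURATION_S
threads = [threading.Thread(target=worker, args=(stop_at,)) for _ in range(16)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Despite 16 threads, throughput is capped at roughly 1 / LOCK_HOLD_S requests/s,
# which is why removing central locks (and central nodes) is the only way past this level.
print(f"{completed / DURATION_S:.0f} req/s with a single global lock")
```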