Website Performance Optimization Metrics


1. Response time

Response time is the time it takes to complete an operation, measured from the moment a request is issued until the last byte of response data is received. It is the most important performance metric of a system and most intuitively reflects how "fast" the system is.

Typical response times for common system operations are listed below (a small measurement sketch follows the list):

    • Opening a web page: a few seconds

    • Querying one record from a database (with an index): 10+ ms

    • One seek/positioning operation on a mechanical disk: about 4 ms

    • Reading 1 MB of data sequentially from a mechanical disk: about 2 ms

    • Reading 1 MB of data sequentially from an SSD: about 0.3 ms

    • Reading one item from a remote distributed cache such as Redis: about 0.5 ms

    • Reading 1 MB of data from memory: 10+ μs

    • A local method call in a Java program: a few microseconds

    • Transmitting 2 KB of data over the network: about 1 μs
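
As an illustration of what "response time" means in practice, here is a minimal Java sketch that measures how long one HTTP request takes from the moment it is issued until the last byte is received. The target URL is a placeholder and the printed numbers depend entirely on the environment; this example is not part of the original text.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ResponseTimeProbe {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Placeholder URL; replace with the endpoint you actually want to measure.
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/")).build();

            long start = System.nanoTime();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            // Response time: from issuing the request until the last byte of the body is received.
            System.out.println("Status " + response.statusCode() + ", response time: " + elapsedMs + " ms");
        }
    }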

2. Concurrency

Concurrency is the number of requests the system is processing at the same time; it also reflects the load characteristics of the system. For a website, the number of concurrent users is the number of users whose requests the site is handling simultaneously.
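
As a rough sketch of how a server could observe its own concurrency, the class below keeps a shared counter that is incremented when a request starts and decremented when it finishes. The class and method names are illustrative assumptions, not from the original text.

    import java.util.concurrent.atomic.AtomicInteger;

    public class ConcurrencyTracker {
        // Number of requests currently being processed: the concurrency at this moment.
        private final AtomicInteger inFlight = new AtomicInteger(0);

        public void handle(Runnable requestHandler) {
            inFlight.incrementAndGet();
            try {
                requestHandler.run();
            } finally {
                inFlight.decrementAndGet();
            }
        }

        public int currentConcurrency() {
            return inFlight.get();
        }
    }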

3. Throughput

Throughput is the number of requests the system processes per unit of time and reflects the system's overall processing capacity. For a website it can be measured in requests per second or pages per second, or in coarser units such as visits per day or business transactions processed per hour. TPS (transactions per second) is a common throughput metric, along with HPS (HTTP requests per second), QPS (queries per second), and so on.
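
A minimal sketch of how such a metric can be derived: count completed requests and divide by the length of the sampling window. The class name and the idea of resetting the counter each window are assumptions for illustration only.

    import java.util.concurrent.atomic.AtomicLong;

    public class ThroughputMeter {
        private final AtomicLong completed = new AtomicLong(0);

        // Call once per completed request (or transaction/query, depending on the metric).
        public void recordCompletion() {
            completed.incrementAndGet();
        }

        // Throughput over the last window, e.g. requests per second for a 1000 ms window.
        public double perSecond(long windowMillis) {
            long count = completed.getAndSet(0);
            return count * 1000.0 / windowMillis;
        }
    }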

As the number of concurrent requests increases (a process accompanied by a gradual rise in server resource consumption), system throughput first increases gradually, reaches a maximum, and then decreases as concurrency continues to grow; once the system reaches its collapse point, resources are exhausted and throughput drops to zero.
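
Below the saturation point, this trade-off is often summarized by Little's Law (this framing is an addition, not from the original text): average concurrency ≈ throughput × average response time, so throughput ≈ concurrency / response time. A back-of-the-envelope example with made-up numbers:

    public class LittlesLawExample {
        public static void main(String[] args) {
            double avgResponseTimeSec = 0.05; // hypothetical 50 ms per request
            int concurrency = 200;            // hypothetical requests in flight at once

            // Little's Law: throughput ≈ concurrency / response time (below saturation).
            double throughput = concurrency / avgResponseTimeSec;
            System.out.println("Estimated throughput: " + throughput + " requests/sec"); // 4000.0
        }
    }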

The relationship between throughput, concurrency, and response time can be pictured as highway traffic: throughput is the number of vehicles passing the toll stations per day (which can also be converted into the tolls collected), concurrency is the number of vehicles on the highway, and response time is the vehicle speed. When there are few vehicles, speeds are high but the tolls collected are correspondingly low; as the number of vehicles grows, speed is affected only slightly while toll revenue rises quickly; as vehicles keep increasing, speed becomes slower and slower and the highway more and more congested; and if traffic keeps growing beyond a certain limit, any incidental factor can paralyze the whole highway, the cars cannot move, no tolls are collected, and the highway turns into a parking lot (resources are exhausted).

4. Performance Counters

Performance counters are data metrics that describe the performance of a server or operating system, including system load, the number of objects and threads, memory usage, CPU usage, and disk and network I/O. These metrics are also key parameters for system monitoring: alarm thresholds are set for them, and when the monitoring system finds that a counter has exceeded its threshold, it alerts operations and development staff so that system anomalies are detected and handled promptly.
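
A minimal sketch of that alarm idea from within a JVM process: sample a few counters exposed by the standard management beans and compare them with configured thresholds. The threshold values and the console "alert" are placeholders standing in for a real monitoring system.

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;
    import java.lang.management.ThreadMXBean;

    public class CounterAlarm {
        // Placeholder thresholds; a real system would load these from monitoring configuration.
        private static final double LOAD_THRESHOLD = 8.0;
        private static final int THREAD_THRESHOLD = 500;

        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();

            double load = os.getSystemLoadAverage();   // 1-minute system load, -1 if unavailable
            int threadCount = threads.getThreadCount();
            long usedHeap = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();

            System.out.printf("load=%.2f threads=%d usedHeap=%dMB%n",
                    load, threadCount, usedHeap / (1024 * 1024));

            // When a counter exceeds its threshold, alert the operations and development staff.
            if (load > LOAD_THRESHOLD || threadCount > THREAD_THRESHOLD) {
                System.out.println("ALERT: performance counter exceeded its threshold");
            }
        }
    }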

System load is the sum of the number of processes currently being executed by the CPU and the number waiting to be executed; it is an important indicator of how busy or idle the system is. With multi-core CPUs, the ideal situation is that every core is in use and no process is waiting, so the ideal load value equals the number of CPU cores. When the load is lower than the number of cores, CPUs are idle and resources are wasted; when the load is higher than the number of cores, processes queue for CPU scheduling, which indicates that the system is short of resources and application performance suffers. In Linux, the top command shows the load as three floating-point numbers: the average number of processes in the run queue over the last 1, 5, and 15 minutes.
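
As a small follow-up sketch, the same comparison can be made from inside a JVM process; availableProcessors() and getSystemLoadAverage() are standard Java calls, and the printed interpretation is only illustrative.

    import java.lang.management.ManagementFactory;

    public class LoadCheck {
        public static void main(String[] args) {
            int cpus = Runtime.getRuntime().availableProcessors();
            // 1-minute system load average, the same figure reported by top/uptime on Linux.
            double load = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage();

            if (load < 0) {
                System.out.println("Load average is not available on this platform");
            } else if (load < cpus) {
                System.out.println("CPUs partly idle: load " + load + " < " + cpus + " cores");
            } else {
                System.out.println("Processes queuing for CPU: load " + load + " >= " + cpus + " cores");
            }
        }
    }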
