Chapter 4 Instant Response: The High-Performance Architecture of a Website

Source: Internet
Author: User
Tags: browser, cache, website performance

Website performance is both an objective indicator, concretely measurable through response time, throughput, and other technical metrics, and a subjective feeling. That feeling is a subtle thing tied to each participant: users and engineers perceive performance differently, and different users have different experiences.

4.1 Website Performance Testing

Performance testing is the prerequisite and basis for performance optimization, as well as the check and measure of optimization results. Website performance has different standards when viewed from different perspectives, and correspondingly different optimization methods.

4.1.1 Website Performance from Different Perspectives

1. Website performance from the user's perspective
From the user's point of view, website performance is the response speed the user intuitively feels in the browser: whether the site responds quickly or slowly. The main optimization means are: optimizing the page's HTML structure and styles, taking advantage of the browser's concurrency and asynchrony, adjusting the browser cache policy, and using CDN services, reverse proxies, and so on.
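One of the means listed above, the browser cache policy, is driven by HTTP response headers. The following is a minimal server-side sketch (the helper names are hypothetical, not from any particular framework): Cache-Control tells the browser how long it may reuse its copy, and ETag lets it revalidate cheaply so an unchanged resource costs a 304 instead of a full response.

```python
import hashlib

def make_cache_headers(body, max_age=3600):
    # Cache-Control tells the browser how long it may reuse its cached copy;
    # ETag is a content fingerprint the browser echoes back on revalidation.
    etag = '"%s"' % hashlib.md5(body).hexdigest()
    return {"Cache-Control": "max-age=%d" % max_age, "ETag": etag}

def respond(body, if_none_match=None):
    # Return (status, headers): 304 if the client's cached copy is still valid.
    headers = make_cache_headers(body)
    if if_none_match == headers["ETag"]:
        return 304, headers   # body is not resent; the browser reuses its cache
    return 200, headers       # full response with a fresh body

# First request: no validator, so the server sends the full body.
status1, headers = respond(b"<html>hello</html>")
# Revalidation: the browser sends the ETag back via If-None-Match.
status2, _ = respond(b"<html>hello</html>", headers["ETag"])
```

The 304 path saves bandwidth and rendering latency, which is exactly the user-perceived speed this perspective cares about.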

2. Website performance from the developer's perspective
From the developer's point of view, the main concerns are the performance of the application itself and its related subsystems, including response latency, system throughput, concurrent processing capacity, system stability, and other technical indicators. The main optimization means are: using caches to speed up data reads, using clusters to improve throughput, using asynchronous messaging to speed up request responses and shave traffic peaks, and improving program performance through code optimization.

3. Website performance from the operations perspective
From the operations staff's point of view, the focus is on infrastructure performance and resource utilization, such as the network operator's bandwidth capacity, server hardware configuration, data center network architecture, and the utilization of server and network bandwidth resources. The main optimization means are: building and optimizing the backbone network, using cost-effective customized servers, and using virtualization technology to improve resource utilization.

4.1.2 Performance Test Indicators

1. Response Time
The time it takes an application to perform an operation, measured from when a request is issued until the last of the response data is received.
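In practice, response time is measured by recording a timestamp before the request and another when the full result is back. A minimal sketch, with a hypothetical handler whose sleep simulates 5 ms of server work:

```python
import time

def handle_request(x):
    # Hypothetical request handler; the sleep simulates 5 ms of server work.
    time.sleep(0.005)
    return x * 2

def measure_response_time(fn, *args):
    # Wall-clock time from issuing the request to receiving the full result.
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

result, rt = measure_response_time(handle_request, 21)
```

For very fast operations, a common refinement is to time a batch of many repeated calls and divide, since a single call may be shorter than the timer's resolution.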

2. Concurrency number
The number of requests the system can process simultaneously; it also reflects the load characteristics of the system.

3. Throughput
The number of requests the system processes per unit of time, reflecting its overall processing capacity. Common quantitative measures include TPS (transactions per second), HPS (HTTP requests per second), and QPS (queries per second). As the number of concurrent requests increases (accompanied by a gradual rise in server resource consumption), system throughput gradually increases until it reaches a limit; with further increases in concurrency it decreases, and when the system crashes because its resources are exhausted, throughput drops to zero.

During this process, response time first rises slowly; once the throughput limit is reached it rises rapidly, and at the point of system crash the system loses the ability to respond at all.
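The relationship between the three metrics can be made concrete with Little's law: in steady state, throughput equals concurrency divided by average response time. The numbers below are illustrative, not from the text:

```python
def throughput(concurrency, avg_response_time_s):
    # Little's law rearranged: N concurrent users, each request taking
    # avg_response_time_s seconds, complete N / RT requests per second.
    return concurrency / avg_response_time_s

# Healthy region: 100 concurrent users at 0.2 s average response -> 500 TPS.
tps_healthy = throughput(100, 0.2)

# Past the limit: response time balloons, so throughput falls even though
# concurrency keeps rising. 200 users at 2.0 s -> only 100 TPS.
tps_overloaded = throughput(200, 2.0)
```

This is why the curve described above bends over: beyond the limit, added concurrency buys more queuing delay, not more completed work.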

4. Performance Counters
Data metrics that describe the performance of a server or operating system, including the system load, the number of objects and threads, memory usage, CPU usage, disk and network I/O, and other indicators.

System load, the sum of the number of processes currently being executed by the CPU and those waiting to be executed, is an important indicator of how busy the system is. On a multi-core machine, the perfect situation is that all CPUs are in use and no process is waiting, so the ideal load value equals the number of CPUs. When the load value is lower than the number of CPUs, some CPUs are idle and resources are wasted; when it is higher, processes are queuing for CPU scheduling, which indicates that the system is short of resources and the application's execution performance suffers. On Linux, use the top command to view it.
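The same load averages that top prints can also be read programmatically. A small sketch of the rule of thumb above (Unix only; os.getloadavg raises OSError on platforms without load averages):

```python
import os

# The 1-, 5- and 15-minute load averages, as shown by `top` and `uptime`.
load1, load5, load15 = os.getloadavg()   # Unix only
cpus = os.cpu_count()

# Rule of thumb from the text: load equal to the CPU count is ideal.
# A ratio above 1.0 means processes are queuing for CPU time;
# well below 1.0 means CPUs are sitting idle.
pressure = load1 / cpus
```

Monitoring systems typically alert on this ratio rather than on the raw load value, so the threshold stays meaningful across machines with different core counts.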

4.1.3 Performance Test Methods

Performance test: taking the performance indicators planned at the initial system design stage as the expected goal, continuously apply pressure to the system to verify whether it can reach the expected performance within an acceptable range of resource consumption.
Load test: keep increasing the pressure on the system until one or more performance indicators reach a safety-critical value, for example a resource becoming saturated; beyond this point, applying more pressure reduces rather than increases the system's processing capacity.
Stress test: continue applying pressure beyond the safe load until the system crashes or can no longer process any requests, in order to find the maximum pressure the system can bear.
Stability test: run the system for a prolonged period under a specific pressure, distributed unevenly as in a production environment, to check whether it remains stable.
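All four methods share the same mechanic: apply a controlled level of concurrency and measure the resulting throughput. The following is a toy harness, not a real load-testing tool (fake_request is a stand-in for an HTTP call to the system under test); stepping the concurrency upward, as a load test does, shows where measured TPS stops improving.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(_):
    # Stand-in for a real HTTP call to the system under test.
    time.sleep(0.002)
    return 200

def run_step(concurrency, requests_per_worker=5):
    # Fire a batch of requests with a fixed-size worker pool and
    # report the measured throughput (requests completed per second).
    total = concurrency * requests_per_worker
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(fake_request, range(total)))
    elapsed = time.perf_counter() - start
    return len(statuses) / elapsed, statuses

# Step the concurrency upward and record throughput at each level,
# as a load test does when searching for the safety-critical point.
results = {conc: run_step(conc)[0] for conc in (1, 2, 4)}
```

Real tools such as Apache JMeter or ab follow the same pattern at scale, adding ramp-up schedules, latency percentiles, and distributed load generators.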

