http://www.csdn.net/article/2015-10-20/2825960-web
With the rapid development of the network, continuously improving performance has become the key to standing out among applications. A highly connected world means users have ever more stringent expectations for the web experience: if your site does not respond quickly, or your app is sluggish, users will quickly turn to your competitors. Below is a summary of 10 performance-improvement tips, offered for reference:
1. Use a reverse proxy server to accelerate and protect applications
Its functions fall mainly into three areas:
Load balancing – a load balancer running on the reverse proxy server distributes requests across different servers. Through it, you can add servers transparently, with no change visible to clients.
Static files – requests for static assets, such as image or code files, can be stored on and served directly from the reverse proxy server, enabling quick access and relieving the application server, which improves the performance of the application.
Security – the reverse proxy server can be configured for high security and used to identify and monitor threats.
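Since the article's tips come from NGINX, a minimal sketch of these three roles in nginx configuration might look like the following (the backend address and static-file path are placeholders):

```nginx
http {
    server {
        listen 80;

        # Static files: serve assets such as images directly from the proxy
        location /static/ {
            root /var/www;        # placeholder path
            expires 30d;          # let browsers cache static assets
        }

        # Everything else: forward to the application server
        location / {
            proxy_pass http://127.0.0.1:8080;   # placeholder backend
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```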
2. Add a Load Balancer
Adding a load balancer to your web site is a relatively simple change, but it can bring substantial performance and security improvements. The load balancer's job is to distribute traffic across different servers.
A load balancer is implemented with a reverse proxy server, which receives Internet traffic and forwards requests on to other servers. The beauty of the load balancer is that it can front two or more application servers and use a selection algorithm to split requests between them.
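As a sketch, an nginx load balancer over two application servers could look like this (addresses are placeholders; `least_conn` is one of the selection algorithms nginx offers):

```nginx
http {
    # Two or more application servers behind one name
    upstream app_servers {
        least_conn;             # send each request to the least-busy server
        server 10.0.0.1:8080;   # placeholder addresses
        server 10.0.0.2:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;
        }
    }
}
```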
3. Caching static and dynamic content
Caching technology lets content be delivered to users more quickly. The main strategies are: process content faster when it is requested, store content on faster devices, or move content closer to the user.
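A minimal sketch of content caching on an nginx reverse proxy, which keeps copies of backend responses closer to the user (cache path, sizes, and backend address are illustrative):

```nginx
http {
    # Cache responses on disk at the proxy, closer to the user
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=1g inactive=60m;

    server {
        listen 80;
        location / {
            proxy_cache app_cache;
            proxy_cache_valid 200 302 10m;      # how long to keep responses
            proxy_pass http://127.0.0.1:8080;   # placeholder backend
        }
    }
}
```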
4. Data compression
Compression technology is a huge potential performance accelerator. Large media files such as images, video, and audio are typically stored in already-compressed formats, so the biggest on-the-fly gains come from efficiently compressing text content such as HTML, CSS, and JavaScript.
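As a sketch, nginx's gzip module compresses text responses on the fly (the MIME-type list and size threshold are illustrative choices):

```nginx
# Compress text-based responses before sending them to the client
gzip on;
gzip_types text/plain text/css application/json application/javascript;
gzip_min_length 1000;   # skip very small responses, where gzip gains little
```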
5. Optimize SSL/TLS access
Although SSL/TLS is becoming more and more popular, its impact on performance should be taken seriously. That impact shows up in two main ways:
An initial handshake is unavoidable whenever a new connection is opened, and with HTTP/1.x the browser opens several connections to each server, each requiring its own handshake. In addition, encrypting data on the server and decrypting it in the browser adds ongoing CPU cost. Two mechanisms reduce this overhead:
Session caching – use the ssl_session_cache directive to cache the parameters of recent SSL/TLS connections; a returning client presents its session ID and resumes the stored session, establishing the new connection without repeating the full handshake.
OCSP stapling – the server attaches (staples) the certificate's OCSP validation response to the handshake, so the browser does not have to query the certificate authority itself, shortening the time to establish communication.
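A sketch of both optimizations in nginx configuration (certificate paths and the resolver address are placeholders):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    # Session caching: resume recent sessions without a full handshake
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # OCSP stapling: attach the certificate's validation response
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8;   # nginx needs a resolver to fetch OCSP responses
}
```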
6. Deploy HTTP/2 or SPDY
For web sites that have SSL/TLS enabled, HTTP/2 and SPDY are likely to deliver a powerful performance boost, because establishing a single connection requires only one handshake. The defining feature of SPDY and HTTP/2 is that they multiplex requests over a single connection rather than opening multiple connections.
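In nginx, enabling HTTP/2 on an existing SSL/TLS listener is a one-word change (nginx 1.9.5 or later; certificate paths are placeholders):

```nginx
server {
    # Adding "http2" to the ssl listener enables HTTP/2 for this server
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.key;
}
```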
7. Update the software version regularly
8. Optimize Linux performance
For example, Linux can be tuned with settings such as the following:
Backlog queue
If some connections appear to be getting dropped, consider increasing net.core.somaxconn, the maximum number of connections that can be queued awaiting acceptance.
File descriptors
Nginx uses up to two file descriptors per connection. If your system is serving many connections, you may need to increase fs.file-max, the system-wide limit on file descriptors.
Ephemeral ports
When used as a proxy, Nginx creates a temporary (ephemeral) port for each connection to an upstream server. You can increase net.ipv4.ip_local_port_range to widen the range of available ports.
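The three settings above can be persisted in a sysctl configuration fragment; the values here are illustrative, not recommendations:

```
# /etc/sysctl.conf (apply with `sysctl -p`); values are illustrative
net.core.somaxconn = 4096                   # larger accept backlog queue
fs.file-max = 2097152                       # system-wide file-descriptor limit
net.ipv4.ip_local_port_range = 1024 65000   # wider ephemeral-port range
```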
9. Optimize Web server Performance
Access Log Optimization
In nginx, adding the buffer=size parameter to the access_log directive buffers log entries in memory instead of writing each one to disk; adding flush=time writes the buffered entries out after the given interval.
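For example (buffer size and flush interval are illustrative):

```nginx
# Buffer up to 32k of log entries in memory; flush to disk every 5 seconds
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;
```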
Cache
Enabling caching serves repeated requests directly from stored responses, making responses faster.
Client keepalive connections
Keepalive connections reduce the number of reconnections, which matters especially when SSL/TLS is enabled, since each new connection would otherwise require a fresh handshake.
Upstream keepalive connections
An upstream connection is a connection to an application server, a database server, and so on; keeping these connections alive avoids reconnection overhead on every request.
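A sketch of both kinds of keepalive in nginx (the upstream address and counts are placeholders; reusing upstream connections requires HTTP/1.1 and a cleared Connection header):

```nginx
http {
    # Client side: keep connections open for reuse
    keepalive_timeout 65;

    upstream app_servers {
        server 10.0.0.1:8080;   # placeholder address
        keepalive 32;           # idle keepalive connections to upstreams
    }

    server {
        location / {
            proxy_pass http://app_servers;
            proxy_http_version 1.1;          # keepalive needs HTTP/1.1...
            proxy_set_header Connection "";  # ...and no "Connection: close"
        }
    }
}
```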
Restricting access to resources
Applying appropriate policies to limit resource consumption can improve both performance and security.
Worker processes
Nginx uses an event-driven, request-based processing model: a small set of worker processes, relying on OS-level mechanisms, distributes requests efficiently.
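The worker model is configured at the top of nginx.conf; the connection count is illustrative:

```nginx
worker_processes auto;   # one worker process per CPU core

events {
    worker_connections 1024;   # connections each worker can handle
}
```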
Socket sharding
With socket sharding, a separate socket listener is created for each worker process; the kernel assigns each incoming connection to one of the listeners, so it is immediately clear which process will handle it, simplifying dispatch.
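In nginx, socket sharding is enabled with the reuseport listen parameter (nginx 1.9.1 or later, on kernels with SO_REUSEPORT support):

```nginx
server {
    # Each worker gets its own listening socket; the kernel
    # distributes incoming connections among them
    listen 80 reuseport;
}
```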
Thread pool Processing
A thread can be stalled by a single slow operation. For web server software, disk access is a common performance bottleneck, for example during data-copying operations. With a thread pool, slow operations can be handed off to a separate set of tasks so that they do not block other work.
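A sketch of thread-pool offloading in nginx (requires a build with --with-threads; pool name, sizes, and location are illustrative):

```nginx
# Define a pool of worker threads for blocking operations
thread_pool default threads=32 max_queue=65536;

http {
    server {
        location /downloads/ {
            aio threads=default;   # offload blocking disk reads to the pool
            sendfile on;
        }
    }
}
```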
10. Real-time monitoring to quickly solve problems and bottlenecks
Real-time monitoring gives you a full picture of how the system is running, lets you find and fix problems as they arise, and can even identify the causes of performance bottlenecks or slowdowns.
For example, you can monitor the following issues:
Server downtime
Dropped connections
Server cache misses
Critical servers sending incorrect data
Source: NGINX