10 Tips for Improving Web Development Performance
With the rapid growth of the web, continuously improving performance has become the key to standing out among the multitude of apps. A highly connected world means users have ever more stringent expectations of their network experience: if your website cannot respond quickly, or your app is laggy, users will soon turn to your competitors. Below is a summary of 10 tips for improving performance, offered for your reference:
1. Use a reverse proxy server to accelerate and protect applications
A reverse proxy server helps mainly in the following three ways (a minimal configuration sketch follows the list):
- Load balancing: a load balancer running on the reverse proxy server distributes traffic across different servers and lets you add servers without users noticing any difference.
- Serving static files: requests for static assets, such as image or code files, can be stored on and served directly from the reverse proxy server. This gets them to users quickly and reduces the load on the application server, improving application performance.
- Security protection: the reverse proxy server can be configured with strong security settings and can identify and monitor threats.
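As a minimal sketch of such a setup (the domain, paths, and backend address below are placeholders, not values from the article), an NGINX reverse proxy can serve static files itself and forward everything else to the application server:

```nginx
# Minimal reverse proxy sketch; the domain, paths, and backend address are placeholders.
server {
    listen 80;
    server_name example.com;

    # Serve static assets directly from the proxy host.
    location /static/ {
        root /var/www/example;
        expires 7d;
    }

    # Forward everything else to the application server.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```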
2. Add a load balancer
Adding a load balancer to a website is a relatively simple change, yet it can improve both performance and security. The load balancer distributes traffic across different servers.
The prerequisite for a load balancer is a reverse proxy server that receives Internet traffic and forwards requests to other servers. The beauty of the balancer is that it can front two or more application servers and use a selection algorithm to spread requests among them.
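A hedged NGINX sketch of such a group (the upstream name, server addresses, and selection algorithm are illustrative choices, not from the article):

```nginx
# Illustrative load-balanced group; the server addresses are placeholders.
upstream app_servers {
    least_conn;                # selection algorithm: pick the least-busy server
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;   # requests are spread across the group
    }
}
```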
3. Cache static and dynamic content
Caching lets content reach users more quickly. Its strategies include processing content faster when a request arrives, storing content on faster devices, and moving content closer to the user.
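As one concrete example, NGINX's proxy cache can keep copies of application-server responses on the proxy itself; the cache path, zone name, and validity times below are illustrative assumptions:

```nginx
# Illustrative proxy cache; the path, zone size, and validity times are assumptions.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 302 10m;   # keep successful responses for 10 minutes
        proxy_cache_valid 404 1m;
        proxy_pass http://127.0.0.1:8080;
    }
}
```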
4. Data Compression
Compression is a huge potential performance booster. Its main use is to compress images, videos, audio, and other files efficiently so that less data has to travel over the network.
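In NGINX, for instance, gzip compression can be enabled for text-based responses such as HTML, CSS, and JavaScript (already-compressed media such as JPEG images or MP4 video gains little from gzip); the types and level below are common choices, not values from the article:

```nginx
# Enable gzip for text-based content; the types and level are common, adjustable choices.
gzip on;
gzip_comp_level 5;
gzip_min_length 1024;    # skip very small responses
gzip_types text/css application/javascript application/json image/svg+xml;
```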
5. Optimize SSL/TLS
Although SSL/TLS is becoming more and more widespread, its impact on performance still deserves attention. That impact shows up in two main areas:
- Whenever a new connection is opened, the initial handshake is unavoidable; with HTTP/1.x, the browser repeats this setup for each connection it opens to the server.
- Encrypting data on the server and decrypting it on the client adds ongoing overhead for every request.
So how should we handle this? There are a few options (a configuration sketch follows the list):
- Session cache: use the ssl_session_cache directive to cache the parameters used when creating new SSL/TLS connections, so they can be reused.
- Session tickets or IDs: store the identifier of a specific SSL/TLS session so that a returning client can resume it directly when creating a new connection, eliminating the hassle of renegotiating from scratch.
- OCSP stapling: the server fetches certificate revocation information in advance and attaches it to the handshake, reducing the time needed to establish communications.
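A hedged NGINX sketch of the session cache and OCSP stapling points (the cache size, timeout, resolver, and certificate paths are illustrative placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # Session cache: reuse negotiated TLS parameters across connections.
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # OCSP stapling: fetch revocation status ahead of time and attach it to the handshake.
    ssl_stapling        on;
    ssl_stapling_verify on;
    resolver 8.8.8.8;
}
```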
6. Deploy HTTP/2 or SPDY
For websites that already have SSL/TLS enabled, HTTP/2 and SPDY are a strong performance match: both protocols multiplex requests over a single connection instead of opening many, so only one connection handshake is needed.
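On NGINX, enabling HTTP/2 on an SSL/TLS server is typically a small change to the listen directive (NGINX has since dropped SPDY support in favor of HTTP/2); the certificate paths below are placeholders:

```nginx
# One multiplexed connection per client; the certificate paths are placeholders.
server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}
```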
7. Regularly update software versions
8. Optimize Linux performance
For example, the following Linux kernel settings are worth checking (an illustrative sysctl snippet follows):
Backlog queue
If some connections appear to be stalling or being dropped, consider increasing net.core.somaxconn, the maximum number of connections that can be queued while waiting to be accepted.
File descriptors
NGINX uses up to two file descriptors per connection. If your system serves a large number of connections, you may need to increase fs.file-max, the system-wide limit on file descriptors.
Ephemeral ports
When used as a proxy, NGINX creates temporary (ephemeral) ports for its connections to each upstream server. You can therefore widen net.ipv4.ip_local_port_range to increase the number of available ports.
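These settings can be raised with sysctl or in /etc/sysctl.conf; the values below are illustrative starting points, not recommendations from the article:

```conf
# /etc/sysctl.conf (illustrative values; tune for your own workload)
# Larger listen backlog queue:
net.core.somaxconn = 4096
# System-wide file descriptor limit:
fs.file-max = 2000000
# Wider ephemeral port range:
net.ipv4.ip_local_port_range = 1024 65000
```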
9. Optimize Web server performance
Access log optimization
In NGINX, adding the buffer=size parameter to the access_log directive buffers log entries in memory instead of writing each one to disk immediately; adding flush=time causes the buffered entries to be written out after the given interval.
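A hedged one-line example (the log path, buffer size, and flush interval are placeholders):

```nginx
# Buffer log writes in memory and flush to disk periodically; values are illustrative.
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;
```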
Caching
Enabling caching on the web server makes responses to clients faster.
Client keepalive connections
Keeping client connections open (keepalive) reduces the number of reconnections, which is especially valuable when SSL/TLS is enabled.
Upstream keepalive connections
Upstream connections are the connections to application servers, database servers, and so on; keeping them alive avoids repeatedly re-establishing them (see the sketch below).
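Both kinds of keepalive can be configured in NGINX; the timeouts, connection counts, and addresses below are illustrative:

```nginx
# Client keepalive: keep browser connections open for reuse; values are illustrative.
keepalive_timeout  65;
keepalive_requests 1000;

# Upstream keepalive: keep a pool of idle connections to the application servers.
upstream app_servers {
    server 10.0.0.11:8080;
    keepalive 32;                     # idle connections kept open per worker
}

server {
    location / {
        proxy_pass http://app_servers;
        proxy_http_version 1.1;       # HTTP/1.1 is required for upstream keepalive
        proxy_set_header Connection "";
    }
}
```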
Restrict resource access
Appropriate policies for restricting resource access, such as limits on connections or request rates, can improve both performance and security.
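One common way to apply such limits in NGINX is connection and request-rate limiting, keyed here by client IP; the zone names, sizes, and rates are illustrative:

```nginx
# Illustrative connection and request-rate limits, keyed by client IP.
limit_conn_zone $binary_remote_addr zone=per_ip_conn:10m;
limit_req_zone  $binary_remote_addr zone=per_ip_req:10m rate=10r/s;

server {
    location / {
        limit_conn per_ip_conn 10;           # at most 10 concurrent connections per IP
        limit_req  zone=per_ip_req burst=20; # allow short bursts above the rate
        proxy_pass http://127.0.0.1:8080;
    }
}
```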
Worker processes
NGINX handles requests with an event-driven model: worker processes rely on efficient, OS-dependent mechanisms to distribute incoming requests among themselves.
Socket sharding
Socket sharding creates a separate socket listener for each worker process. The kernel assigns incoming connections to those listeners directly, so it is immediately clear which worker will handle each connection and request handling stays lean.
Thread pools
Any thread can be held up by a single slow operation; for web server software, slow disk access, such as copying large amounts of data, is a typical bottleneck. With a thread pool, slow operations are handed off to a separate set of tasks so they do not block the other work (an illustrative configuration follows).
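The three mechanisms above correspond to a handful of NGINX directives; the values and paths below are illustrative, aio threads requires an NGINX build with thread support, and a thread pool mainly pays off for workloads with heavy disk I/O:

```nginx
# Illustrative top-level configuration; the document root is a placeholder.
worker_processes auto;             # one worker process per CPU core

events {
    worker_connections 1024;
}

http {
    aio threads;                   # hand slow disk reads off to a thread pool

    server {
        listen 80 reuseport;       # socket sharding: one listening socket per worker
        root /var/www/example;
    }
}
```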
10. Perform real-time monitoring to find problems and bottlenecks quickly
Real-time monitoring gives you a full picture of how the system is running, helps you find and fix problems, and can even reveal the causes of performance bottlenecks or slow operations.
For example, you can monitor for the following problems:
- Servers going down
- Servers dropping connections
- A high rate of cache misses on a server
- A server sending incorrect content
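As a starting point, NGINX's stub_status module exposes basic connection and request counters that a monitoring agent can scrape; the listener address and location below are illustrative, and fuller monitoring of the problems above usually needs an external tool on top:

```nginx
# Expose basic counters (active connections, accepts, handled requests) to a local agent.
server {
    listen 127.0.0.1:8081;         # illustrative internal-only listener
    location /nginx_status {
        stub_status;
        allow 127.0.0.1;           # restrict access to the local machine
        deny all;
    }
}
```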