Improving the performance of web applications has never been more important. The share of the networked economy keeps growing; more than 5% of the value of the global economy is now generated on the Internet (see Online Resources below for data). Our constantly connected world also means that user expectations are higher than ever: if your site does not respond instantly, or if your app does not work without delay, users quickly move on to your competitors.
For example, a study Amazon performed almost ten years ago showed that, even then, every 100-millisecond decrease in page load time increased revenue by 1%. Another recent study highlighted the fact that more than half of the site owners surveyed said they had lost users because of application performance problems.
How fast does a website need to be? For each second a page takes to load, roughly 4% of users abandon it. Top e-commerce sites offer a time to first interaction of one to three seconds, the range that delivers the highest conversion rates. It is clear that the stakes for web application performance are high and likely to grow.
Wanting to improve performance is easy; actually seeing results is difficult. To help you on your quest, this article offers ten tips that can improve your website's performance by as much as 10x. It is the first in a series on boosting application performance with the help of well-tested optimization techniques and a little assistance from NGINX. The series also outlines the improvements in security that you can gain along the way.
Tip #1: Improve performance and increase security with a reverse proxy
If your web application runs on a single machine, the solution to performance problems might seem obvious: just get a faster machine, with more processors, more memory, a faster disk array, and so on. Then the new machine can run your WordPress server, Node.js application, Java application, or other software faster than before. (If your application accesses a database server, the solution might still seem simple: get two faster machines and a faster link between them.)
The problem is that machine speed might not be the problem. Web applications often run slowly because the computer is constantly switching among different kinds of tasks: interacting with users on thousands of connections, accessing files from disk, running application code, and more. The application server may end up thrashing: running out of memory, swapping chunks of memory out to disk, and making many requests wait on a single task such as disk I/O.
Instead of upgrading your hardware, you can take an entirely different approach: add a reverse proxy server to offload some of these tasks. The reverse proxy server sits in front of the machine running the application and handles Internet traffic. Only the reverse proxy server is connected directly to the Internet; communication with the application servers takes place over a fast internal network.
Using a reverse proxy server frees the application server from having to wait for users to interact with the web application, letting it concentrate on building pages for the reverse proxy server to send across the Internet. The application server, which no longer has to wait for client responses, can run at speeds close to optimized benchmark levels.
Adding a reverse proxy server also adds flexibility to your web server setup. For instance, if a server of a given type is overloaded, another server of the same type can easily be added; and if a server goes down, it can easily be replaced.
Because of the flexibility it provides, a reverse proxy server is also a prerequisite for many other performance-boosting capabilities, such as:
- Load balancing (see Tip #2) – A load balancer runs on a reverse proxy server and distributes traffic evenly across a number of application servers. With load balancing in place, you can add application servers without changing the application at all.
- Caching static files (see Tip #3) – Files that are requested directly, such as images or client code, can be stored on the reverse proxy server and sent straight to the client. This serves assets more quickly and offloads the application server, allowing the application to run faster.
- Site security – The reverse proxy server can be configured for high security and monitored for fast recognition and response to attacks, keeping the application servers protected.
NGINX software is specifically designed for use as a reverse proxy server, with the additional capabilities described above. NGINX processes requests in an event-driven manner, which is more efficient than the approach of traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application health checks, specialized request routing, advanced caching, and related support.
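As a concrete starting point, here is a minimal reverse proxy configuration sketch; the server name, internal address, and port are hypothetical placeholders rather than values from this article:

```nginx
# Minimal reverse proxy sketch: NGINX faces the Internet and forwards
# requests to an application server on the internal network.
server {
    listen 80;
    server_name www.example.com;                   # placeholder hostname

    location / {
        proxy_pass http://10.0.0.10:8080;          # internal app server
        proxy_set_header Host $host;               # preserve the original Host
        proxy_set_header X-Real-IP $remote_addr;   # pass the client address
    }
}
```

With this in place, only NGINX is exposed to the Internet; the application server behind it communicates only over the internal network.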
NGINX worker processes help increase application performance
Tip #2: Add load balancing
Adding a load balancer is a relatively easy change that can dramatically improve the performance and security of your site. Instead of making a core web server bigger and more powerful, you use load balancing to distribute traffic across a number of servers. Even if an application is poorly written or has problems with scaling, load balancing can improve the user experience without any other changes.
A load balancer is, first of all, a reverse proxy server (see Tip #1): it receives traffic from the Internet and forwards requests to other servers. The key is that the load balancer supports two or more application servers, using a choice of algorithms to split requests among them. The simplest load-balancing method is round robin, in which each new request is sent to the next server on the list. Other methods include sending requests to the server with the fewest active connections. NGINX Plus also has the ability to keep a given user's session on the same server.
Load balancing can improve performance greatly because it prevents one server from being overloaded while others sit with little traffic to handle. It also makes it easy to expand your web server capacity, because you can add relatively inexpensive servers and be sure they are put to full use.
Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, FastCGI, SCGI, uwsgi, and memcached, as well as several other application types, including TCP-based applications and other Layer 4 protocols. Analyze your web applications to determine which protocols you use and where performance is lagging.
The same server or server group used for load balancing can also handle other tasks, such as SSL termination, support for HTTP/1.x and HTTP/2 on the client side, and caching of static files.
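A minimal load-balancing sketch follows; the upstream name and the three backend addresses are hypothetical placeholders:

```nginx
# Round-robin load balancing across three application servers. Adding
# "least_conn;" as the first line of the upstream block would switch to
# the fewest-active-connections method instead.
upstream app_servers {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;   # requests are distributed in turn
    }
}
```

Because the upstream block is separate from the application itself, more servers can be added to the list without touching application code.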
Tip #3: Cache static and dynamic content
Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing data on faster devices, storing data closer to the client, or some combination of these.
There are two different types of content caching to consider:
- Caching of static content. Infrequently changing files, such as images (JPEG, PNG) and code (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk.
- Caching of dynamic content. Many web applications generate fresh HTML for each page request. By briefly caching one copy of the generated HTML, you can dramatically reduce the number of pages that have to be generated while still delivering content that is fresh enough to meet your requirements.
For example, if a page gets ten views per second and you cache it for one second, 90% of the requests for that page are served directly from the cache. If you cache static content separately, even the freshly generated version of the page may be made up mostly of cached content.
There are three main techniques for caching content generated by web applications:
- Shortening the network distance between the data and the user. Placing a copy of the content closer to the user reduces transmission time.
- Moving content to faster machines. Content can be kept on a faster server to reduce retrieval time.
- Moving content off overused machines. Machines often run a given task more slowly than benchmarks suggest because they are busy with other tasks. Caching on a different machine improves performance for cached and non-cached resources alike, because the host machine is no longer overloaded.
Caching for web applications can be implemented from the inside out. First, dynamic content is cached to reduce the load on application servers. Next, static content (including temporary copies of what would otherwise be dynamic content) is cached, offloading the application servers further. Finally, caching is moved off the application servers and onto machines that are faster and/or closer to the user, easing the pressure on the application servers and cutting retrieval and transmission times.
Improved caching can speed up applications tremendously. For most web pages, static data such as large image files makes up more than half of the content. Retrieving and transmitting such data might take several seconds without caching, but only a fraction of a second when the data is cached.
As an example of how caching is used in practice, NGINX and NGINX Plus use two directives to set up basic caching: proxy_cache_path and proxy_cache. You specify the cache location and size, the maximum time files are kept in the cache, and other parameters. Using a third (and quite popular) directive, proxy_cache_use_stale, you can even direct the cache to supply stale content when the server that provides fresh content is busy or down, so the client gets something rather than nothing. From the user's perspective, this can strongly improve your site's or application's uptime.
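A minimal sketch of these directives follows; the cache path, zone name, sizes, and the one-second validity period (chosen to match the ten-views-per-second example above) are illustrative, not recommendations:

```nginx
# Basic content caching. The zone name "app_cache" and all sizes and
# times are placeholder values for illustration.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 1s;             # briefly cache successful responses
        proxy_cache_use_stale error timeout;  # serve stale content if upstream fails
        proxy_pass http://app_servers;
    }
}
```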
NGINX Plus has premium caching features, including support for cache purging and visualization of cache status on a dashboard.
For more information on caching with NGINX, see the reference documentation and "NGINX Content Caching" in the NGINX Plus Administrator's Guide.
Note: Caching cuts across organizational lines between the people who develop applications, the people who make investment decisions, and the people who actually run the systems. Sophisticated caching strategies like those mentioned here are valuable from a DevOps perspective, in which the concerns of application developers, architects, and operations engineers are merged to meet goals for site functionality, response time, security, and business results, such as the number of completed transactions.
Tip #4: Compress data
Compression is a huge potential performance accelerator. There are carefully engineered and highly effective compression standards for files such as photos (JPEG and PNG), video (MPEG-4), and music (MP3). Each of these standards reduces file size substantially.
Text data, including HTML (which contains plain text plus HTML tags), CSS, and code such as JavaScript, is often transmitted uncompressed. Compressing this data can have a disproportionate impact on perceived application performance, especially on slow or constrained mobile networks.
That is because text data is often all a user needs to interact with a page, while multimedia data tends to play a supporting or decorative role. Smart content compression reduces the bandwidth requirements of HTML, JavaScript, CSS, and other text-based content, typically by 30% or more, with corresponding reductions in page load time.
If you use SSL, compression reduces the amount of data that has to be SSL-encoded, which offsets some of the CPU time spent compressing the data.
There are many ways to compress text data. For example, HTTP/2 includes a novel text compression scheme adapted specifically for header data. Another example is that you can turn on gzip compression for text in NGINX. Once you have pre-compressed text data on your services, you can serve the compressed .gz versions directly using the gzip_static directive.
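A minimal gzip sketch follows; the MIME types and minimum length shown are a common illustrative selection, and gzip_static assumes NGINX was built with the gzip_static module:

```nginx
# Compress text-based responses on the fly. The list of MIME types and
# the minimum length are illustrative choices, not recommendations.
gzip on;
gzip_types text/plain text/css application/javascript application/json;
gzip_min_length 1000;   # skip very small responses

# Serve pre-compressed .gz files directly when they exist on disk
# (requires the ngx_http_gzip_static_module).
gzip_static on;
```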
Tip #5: Optimize SSL/TLS
The Secure Sockets Layer (SSL) protocol and its successor, the Transport Layer Security (TLS) protocol, are being used on more and more websites. SSL/TLS encrypts the data transported from origin servers to users, improving site security. Part of what may be driving this trend is that Google now treats the use of SSL/TLS as a positive influence on search engine rankings.
Despite their rising popularity, the performance hit involved in SSL/TLS deters many sites from using them. SSL/TLS slows websites for two reasons:
- The initial handshake, which exchanges encryption keys, is required whenever a new connection is opened. Browsers using HTTP/1.x establish multiple connections and repeat this handshake for each one.
- Data in transit must be continually encrypted on the server and decrypted on the client.
To encourage the use of SSL/TLS, the authors of HTTP/2 and SPDY (described in the next tip) designed those protocols so that a browser needs just one connection per browser session. This greatly reduces the time lost to the first of these two causes. There are, however, further ways to improve the performance of applications that deliver data over SSL/TLS.
Web servers include mechanisms for optimizing SSL/TLS delivery. For example, NGINX uses OpenSSL, running on standard commodity hardware, to provide performance close to that of dedicated hardware solutions. NGINX SSL performance is well documented, and good tuning greatly reduces the time and CPU cost of encrypting and decrypting SSL/TLS data.
Further, see this article for more details on ways to increase SSL/TLS performance. To summarize briefly, the key techniques are:
- Session caching. The ssl_session_cache directive caches the parameters used when securing each new SSL/TLS connection.
- Session tickets or IDs. These store information about a given SSL/TLS session in a ticket or ID, so a connection can be reused smoothly without new handshaking.
- OCSP stapling. Cutting handshake time by caching SSL/TLS certificate information.
NGINX and NGINX Plus can be used for SSL/TLS termination, handling encryption and decryption of client traffic while communicating with other servers in cleartext.
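Here is a minimal SSL/TLS termination sketch with session caching; the certificate paths are placeholders:

```nginx
# SSL/TLS termination with a shared session cache, so that negotiated
# session parameters can be reused instead of repeating full handshakes.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/ssl/example.key;  # placeholder path

    ssl_session_cache   shared:SSL:10m;   # shared across worker processes
    ssl_session_timeout 10m;

    location / {
        proxy_pass http://app_servers;    # upstream traffic stays cleartext
    }
}
```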
Tip #6: Use HTTP/2 or SPDY
For sites that already use SSL/TLS, HTTP/2 and SPDY are very likely to improve performance, because the single connection they use requires just one handshake. For sites that do not yet use SSL/TLS, HTTP/2 and SPDY make a move to SSL/TLS (which normally slows performance) a wash from a responsiveness point of view.
Google introduced SPDY in 2012 as a faster protocol built on top of HTTP/1.x. HTTP/2, the standard recently adopted by the IETF, is based on SPDY. SPDY is broadly supported, but it will soon be superseded by HTTP/2.
The key feature of SPDY and HTTP/2 is the use of a single connection rather than multiple connections. The single connection is multiplexed, so it can carry pieces of multiple requests and responses at the same time.
By getting the most out of one connection, these protocols avoid the overhead of setting up and managing the multiple connections that browsers implementing HTTP/1.x require. The single connection works especially well with SSL, because it minimizes the handshaking that SSL/TLS needs to set up a secure link.
The SPDY protocol requires SSL/TLS; HTTP/2 does not officially require it, but all browsers that currently support HTTP/2 use it only when SSL/TLS is enabled. That is, a browser supporting HTTP/2 uses it only if the website is using SSL and its server accepts HTTP/2 traffic. Otherwise, the browser communicates over HTTP/1.x.
When you implement SPDY or HTTP/2, you no longer need typical HTTP performance optimizations such as domain sharding, resource merging, and image spriting. Dropping them makes your code and deployments simpler and easier to manage.
NGINX supports SPDY and HTTP/2 for increased web application performance
As an example of support for these protocols, NGINX has supported SPDY from early on, and most sites that use the SPDY protocol today run on NGINX. NGINX was also an early adopter of HTTP/2: as of September 2015, both open source NGINX and NGINX Plus support it.
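Enabling HTTP/2 in NGINX is a small configuration change. The sketch below assumes a version of NGINX built with HTTP/2 support (1.9.5 or later) and uses placeholder certificate paths:

```nginx
# Terminate SSL/TLS and speak HTTP/2 to capable browsers; other
# browsers fall back to HTTP/1.x automatically.
server {
    listen 443 ssl http2;
    server_name www.example.com;                     # placeholder hostname
    ssl_certificate     /etc/nginx/ssl/example.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/ssl/example.key;  # placeholder path
}
```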
Over time, we at NGINX expect most sites to fully enable SSL and move to HTTP/2. This will lead to increased security and, as new optimizations are found and implemented, simpler code that performs better.
Tip #7: Upgrade software versions
One simple way to boost application performance is to select the components of your software stack based on their reputation for stability and performance. In addition, because developers of high-quality components are likely to pursue performance enhancements and fix bugs over time, it pays to use the latest stable version of the software. New releases receive more attention from developers and the user community, and newer builds take advantage of new compiler optimizations, including tuning for new hardware.
Stable new releases typically offer better compatibility and higher performance than older releases. Staying on top of software updates also keeps you current with tuning optimizations, bug fixes, and security improvements.
Staying with older software can also prevent you from taking advantage of new capabilities. For example, HTTP/2, described above, currently requires OpenSSL 1.0.1. Starting in mid-2016 it will require OpenSSL 1.0.2, which was released in January 2015.
NGINX users can start by moving to the latest version of open source NGINX or NGINX Plus; both include new capabilities such as socket sharding and thread pools (see Tip #9), and both are continually tuned for performance. Then look deeper into your software stack and upgrade to the most recent version wherever you can.
Tip #8: Tune Linux for performance
Linux is the underlying operating system for most web servers today, and as the foundation of your infrastructure it offers significant opportunities to improve performance. By default, many Linux systems are conservatively tuned to use few resources and to match a typical desktop workload. This means that web application use cases require at least some tuning for maximum performance.
Linux optimizations are web server specific. Using NGINX as an example, here are a few highlights of changes worth considering when speeding up Linux (a consolidated sketch follows the list):
- Backlog queue. If you have connections that appear to be stalling, consider increasing net.core.somaxconn, the maximum number of connections that can be queued awaiting attention. You will see error messages if the existing limit is too small; gradually increase this parameter until the error messages stop.
- File descriptors. NGINX uses up to two file descriptors per connection. If your system is handling many connections, you might need to increase fs.file-max, the system-wide limit on the number of file descriptors, to support the increased load.
- Ephemeral ports. When used as a proxy, NGINX creates ephemeral ports for each upstream server. You can increase the range of available port values by setting net.ipv4.ip_local_port_range. You can also reduce the timeout before an inactive port is reused with net.ipv4.tcp_fin_timeout, allowing faster turnover and higher traffic.
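Here is how those settings might look in /etc/sysctl.conf; all values are illustrative starting points rather than recommendations, and should be adjusted gradually while watching the error logs:

```
# Backlog queue: maximum number of queued connections
net.core.somaxconn = 4096

# System-wide file descriptor limit
fs.file-max = 100000

# Widen the ephemeral port range and recycle inactive ports sooner
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 15
```

After editing the file, running sysctl -p applies the new values.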
Tip #9: Tune your web server for performance
Whatever web server you use, you need to tune it for web application performance. The following recommendations apply generally to any web server, but specific settings are given for NGINX; a consolidated configuration sketch appears at the end of this tip. Key optimizations include:
- Access logging. Instead of writing a log entry for every request to disk immediately, you can buffer entries in memory and write them to disk as a group. For NGINX, adding the buffer=size parameter to the access_log directive writes log entries to disk when the memory buffer fills up. If you add the flush=time parameter, the buffer contents are also written to disk after the specified amount of time.
- Buffering. Buffering holds part of a response in memory until the buffer fills, which can make communication with the client more efficient. Responses that do not fit in memory are written to disk, which can slow performance. When NGINX buffering is on, you use the proxy_buffer_size and proxy_buffers directives to manage it.
- Client keepalives. Keepalive connections reduce overhead, especially when SSL/TLS is in use. For NGINX, you can increase keepalive_requests, the number of requests a client can make over a given connection, from the default of 100, and you can increase keepalive_timeout to allow a keepalive connection to stay open longer, making subsequent requests faster.
- Upstream keepalives. Upstream connections, that is, connections to application servers, database servers, and other machines, benefit from keepalive connections as well. For upstream connections, you can increase keepalive, the number of idle keepalive connections that remain open for each worker process. This allows increased connection reuse and cuts the need to open brand-new connections. More information on keepalive connections is in "HTTP Keepalive Connections and Performance".
- Limits. Limiting the resources that clients use can improve performance and security. For NGINX, the limit_conn and limit_conn_zone directives restrict the number of connections from a given source, while limit_rate constrains bandwidth. These settings can stop a legitimate user from hogging resources and also help prevent attacks. The limit_req and limit_req_zone directives restrict client requests. For upstream servers, the max_conns parameter on the server directive in an upstream configuration block limits the number of connections to an upstream server, preventing overloading. The associated queue directive creates a queue that holds a specified number of requests for a specified length of time once the max_conns limit is reached.
- Worker processes. Worker processes are responsible for processing requests. NGINX employs an event-driven model and operating-system-specific mechanisms to distribute requests among worker processes efficiently. The recommendation is to set worker_processes to one per CPU. The maximum number of worker_connections (512 by default) can safely be raised on most systems if needed; experiment to find the value that works best for your system.
- Socket sharding. Typically, a single socket listener distributes new connections to all worker processes. Socket sharding instead creates a socket listener for each worker process, and the kernel assigns connections to the listeners as they become available. This can reduce lock contention and improve performance on multicore systems. Enable socket sharding by including the reuseport parameter on the listen directive.
- Thread pools. Any computer process can be held up by a single, slow operation. For web server software, disk access can hold up many faster operations, such as calculating or copying information in memory. When a thread pool is used, the slow operation is assigned to a separate set of tasks, while the main processing loop keeps running faster operations. When the disk operation completes, the results go back to the main processing loop. In NGINX, two operations, the read() system call and sendfile(), are offloaded to thread pools.
Thread pools help increase application performance by assigning slow operations to a separate set of tasks
Tip. When changing settings for any operating system or supporting service, change one setting at a time, then test performance. If the change causes problems, or does not make your site run faster, change it back.
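The consolidated sketch promised above pulls several of these settings together; every value shown is an illustrative starting point rather than a recommendation, and the upstream address is a placeholder:

```nginx
worker_processes auto;                  # one worker per CPU core

events {
    worker_connections 1024;            # raised from the 512 default
}

http {
    # Buffered access logging: write in batches, flush every 5 seconds
    access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

    keepalive_requests 1000;            # more requests per client connection
    keepalive_timeout  75s;

    upstream app_servers {
        server 10.0.0.11:8080;          # placeholder address
        keepalive 32;                   # idle keepalive connections per worker
    }

    server {
        listen 80 reuseport;            # socket sharding

        location / {
            proxy_pass http://app_servers;
            proxy_http_version 1.1;         # needed for upstream keepalives
            proxy_set_header Connection ""; # clear the Connection header
        }
    }
}
```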
Tip #10: Monitor live activity to resolve issues and bottlenecks
The key to keeping a system highly efficient is to monitor its real-world performance, both during application development and in production. You must be able to monitor activity within specific devices and across your web infrastructure.
Monitoring site activity is mostly passive: it tells you what is going on and leaves it to you to spot problems and fix them.
Monitoring can catch several different kinds of issues, including:
- A server is down.
- A server is limping, dropping connections.
- A server is suffering from a high proportion of cache misses.
- A server is not sending correct content.
A global application performance monitoring tool such as New Relic or Dynatrace helps you monitor page load times from remote locations, while NGINX helps you monitor the application delivery side. Application performance data tells you when your optimizations are making a real difference to your users, and when you need to consider adding capacity to your infrastructure to sustain the traffic.
To help identify and resolve issues quickly, NGINX Plus adds application-aware health checks: synthetic transactions that are repeated regularly and that alert you when problems occur. NGINX Plus also offers session draining, which stops new connections while existing tasks complete, and a slow-start capability, allowing a server that has recovered from a failure to come up to speed within a load-balanced group. Used effectively, health checks let you identify issues before they significantly affect the user experience, while session draining and slow start let you replace servers without the process hurting perceived performance or uptime. The figure shows the built-in NGINX Plus dashboard for live activity monitoring of a web infrastructure, with information on server groups, TCP connections, and caching.
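For illustration, here is a minimal NGINX Plus sketch of active health checks and slow start; these are commercial NGINX Plus features, and the addresses, timings, and zone size are placeholders:

```nginx
upstream app_servers {
    zone app_servers 64k;             # shared memory, required for health checks
    server 10.0.0.11:8080 slow_start=30s;
    server 10.0.0.12:8080 slow_start=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
        # Probe each server every 5 seconds; mark it failed after 3
        # missed checks and healthy again after 2 successful ones.
        health_check interval=5s fails=3 passes=2;
    }
}
```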
Use real-time application performance monitoring tools to identify and resolve issues quickly
Summary: Seeing up to 10x performance improvement
The performance improvements available for any one web application vary tremendously, and the actual gains depend on your budget, the time you can invest, and the gaps in your current implementation. So, how do you achieve a 10x performance improvement for your own applications?
To help guide you through the potential impact of each optimization, here are pointers to the improvement that may be possible with each method detailed above, though your results will almost certainly vary:
- Reverse proxy server and load balancing. No load balancing, or poor load balancing, can cause episodes of very poor performance. Adding a reverse proxy server such as NGINX can prevent web applications from thrashing between memory and disk, and load balancing can move processing from overburdened servers to available ones while making scaling easy. These changes can result in dramatic performance improvement, with a 10x gain easily achieved compared to the worst moments of your current implementation, and smaller but substantial gains in overall performance.
- Caching of dynamic and static content. If you have an overburdened web server that also acts as your application server, caching dynamic content alone can yield 10x improvements in peak-time performance. Caching of static files can improve performance severalfold as well.
- Compressing data. Using media file compression formats such as JPEG for photos, PNG for graphics, MPEG-4 for video, and MP3 for music can greatly improve performance. Once these are all in use, compressing text data (code and HTML) can roughly double initial page load speed.
- Optimizing SSL/TLS. Security handshakes can have a big impact on performance, so optimizing them can lead to perhaps a 2x improvement in initial responsiveness, particularly for sites with a lot of text. Optimizing media file delivery under SSL/TLS is likely to yield only small improvements.
- Using HTTP/2 and SPDY. When used with SSL/TLS, these protocols are likely to deliver incremental improvements in overall site performance.
- Tuning Linux and web server software. Fixes such as optimizing buffering, using keepalive connections, and offloading time-intensive tasks to a separate thread pool can significantly boost performance; thread pools, for instance, can speed up disk-intensive tasks by nearly an order of magnitude.
We hope you try these techniques for yourself, and we would love to hear about the performance improvements you achieve.
Online Resources
- Statista.com – Share of the internet economy in the gross domestic product in G-20 countries in 2016
- Load Impact – How Bad Performance Impacts Ecommerce Sales
- Kissmetrics – How Loading Time Affects Your Bottom Line (infographic)
- Econsultancy – Site speed: case studies, tips and tools for improving your conversion rate
- "NGINX Content Caching"
- "NGINX Performance Tuning Guide"
- "NGINX Plus Administrator's Guide reference documentation"
- "HTTPS Connections"
- "Encrypted TCP Connections"
- "HTTP/2 for Web Application Developers" white paper