Website performance optimization

Source: Internet
Author: User
Tags: cache, website performance, nginx, SSL, nginx load balancing

1. Website front-end performance optimization

The front-end has changed a lot in recent years, with all kinds of tools, libraries, and frameworks emerging. Nevertheless, the ideas behind optimizing front-end performance have remained largely unchanged.
Why is front-end performance so important? The data shows:
1) Only 10%~20% of end-user response time is spent downloading the HTML document; the remaining 80%~90% is spent downloading all the other components of the page;
2) Back-end optimization tends to be expensive, while front-end optimization only requires following a few rules to achieve large gains: relatively low cost, high yield.

This article follows the entire browser request-and-response process as its thread, optimizing each stage in turn.

The process of browser requests and responses

First step, browser preprocessing

Query the cache: read from the local cache, or send a conditional request (the server may answer with 304 Not Modified)

Step two, query DNS

Optimization rules – reduce DNS lookups
Simply put, a DNS lookup is the process of resolving a domain name to a server IP address. DNS caches exist at several levels, including the browser DNS cache and the operating system DNS cache. When you enter google.com, the browser first checks its own DNS cache for the IP address of the Google server; if not found, it continues to the operating system DNS cache; if the browser finds no record of Google's IP address in either of these, the lookup goes out to the wide-area Domain Name System.

    • Method 1: Use the DNS cache
      Browser DNS cache -> operating system DNS cache -> server DNS cache (TTL)

    • Method 2: Use the keep-alive attribute
      Reusing an existing connection (Connection: keep-alive) avoids the DNS lookup that opening a new connection would require.

    • Method 3: Use fewer domain names to reduce DNS lookups (2-4 hosts)
      When the client's DNS cache is empty, the number of DNS lookups equals the number of unique host names in the page, so reducing the number of unique host names reduces the number of DNS lookups.

    • Method 4: Use a third-party DNS resolution acceleration service
      DNSPod is a free DNS acceleration service in China;
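As a sketch of Method 2, persistent connections are usually enabled on the server side. A minimal NGINX snippet (the values here are illustrative assumptions, not recommendations):

```nginx
http {
    # Keep idle client connections open for reuse, avoiding
    # repeated DNS lookups and TCP handshakes for follow-up requests.
    keepalive_timeout  65s;      # how long an idle keepalive connection stays open
    keepalive_requests 100;      # max requests served over one connection
}
```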

Step three, establish the connection

Optimization Rules – Using a Content distribution network (CDN)
A CDN is a set of web servers distributed across many geographical locations. Because the physical distance to the user is shorter, users can fetch static resources faster. This is usually a paid service, although some free, public CDNs exist; in China, BOOTCDN can be used.
PS: A personal suggestion: enable the CDN as the last step, after the site itself has been fully optimized, so that the CDN's effect on optimization is clearly visible. (Once the CDN is in place, its caching makes it inconvenient to observe the site's own optimizations.)

    • Method 1: The top ten Internet sites in the United States all use CDN service providers

    • Method 2: Make pages static, depending on the publishing system

    • Method 3: Ctrip uses ChinaCache and other CDN providers

Optimization Rules – divide page content by domain name
Divide domains by page content and store files on the appropriate resource servers

Fourth step, send request

Optimization Rules – Reduce HTTP requests
In general, it is better to use external scripts and stylesheets. Merging the external scripts, and separately the stylesheets, reduces HTTP requests and speeds up page loading by cutting the number of round trips between client and server. For convenience, development is usually done in a modular way; in that case a front-end build tool can merge the module files before deployment and publishing.

Keep HTTP requests to around 30-40 per page; merge files, use image maps (CSS sprites) and inline images
JS files: no more than 7
CSS files: no more than 4; each channel's home page and the site home page: no more than 3
The number of requests from Allyes ads cannot be reduced at this time
Large numbers of ads and product images may produce many image requests and strain the total-request budget; this can only be addressed at the design stage and requires trade-offs
Existing legacy pages may exceed these CSS and JS request limits

Merging styles and scripts

Optimization Rules – Optimizing CSS sprites
Image map: see the Ctrip home page for an example

Optimization Rules – avoid 404 errors
Avoid invalid internal links

Optimization Rules – do not use frameset, use less iframe
Framesets are unfriendly to search engines, and even an empty iframe takes time to load and blocks page loading.
Do not use iframes to introduce external resources (Allyes ads and empty about:blank pages excluded).

Fifth step, waiting for response

Optimization Rules – Avoid redirects
The following is a redirect process:
The browser sends a request --> the server returns 302 --> the browser sends a second request --> the server returns 200 --> the browser starts rendering
That is, nothing is displayed to the user until the redirect completes and the HTML has been downloaded.
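When a redirect is truly unavoidable (for example, canonicalizing a domain), it should be a single hop and a cacheable 301. A hypothetical NGINX sketch, with example.com standing in for a real domain:

```nginx
server {
    listen 80;
    server_name www.example.com;
    # One permanent redirect straight to the canonical host,
    # instead of a chain of 302s the browser must follow one by one.
    return 301 https://example.com$request_uri;
}
```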

This stage involves server load, data queries, server-side caching, and so on.

Sixth step, receive data

Optimization Rules – Compress components
Compress text responses: HTML documents, scripts, stylesheets, XML, and JSON.
Compression can typically reduce the size of a response by nearly 70%.
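In NGINX, text compression of this kind is typically enabled with the gzip module. A minimal sketch (the types and thresholds are assumptions to adapt):

```nginx
http {
    gzip              on;
    gzip_comp_level   5;          # balance CPU cost vs. compression ratio
    gzip_min_length   1024;       # skip tiny responses where gzip gains little
    # Compress the text responses mentioned above; binary formats are excluded.
    gzip_types text/plain text/css text/xml
               application/javascript application/json application/xml;
}
```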

Optimization Rules – reduce JavaScript and CSS file sizes
Remove unnecessary characters from your code to reduce their size and load time.

There are several ways to reduce the size of JavaScript files:

    1. Minification is the most widely used approach. Minification removes extra characters such as whitespace and comments from the JavaScript code, and has essentially no downsides.
    2. Another approach is obfuscation. Obfuscation builds on minification by replacing function and variable names with shorter ones, shrinking the file further. But obfuscation can cause plenty of trouble and may introduce errors; while it helps deter reverse engineering, it also makes debugging the live environment harder.
      Common practice now is to compress resources with an automated build tool such as Gulp or Grunt before release.

Optimization Rules – Reduce page size as much as possible
A page must be smaller than 150 KB (excluding images)

    • Whether static files are gzipped
    • Whether images are compressed and optimized
Seventh step, read the cache

Optimization Rule – Add Expires header or Cache-control
The Expires header tells the browser how long a response is valid: think of it as the resource's "shelf life". Within that period the cached copy can be used without a new request. Because of clock-synchronization problems between browser and server, HTTP/1.1 added Cache-Control with max-age to make up for the shortcomings of the Expires header. This is typically used for static resources such as scripts, stylesheets, and images.

The problem with this strategy is that developers may want to update a resource before it expires. Since the browser's cached copy has not yet expired, the file name must be changed to force the stale resource out. There are many ways to version static resources: a plain incrementing number works, a hash generated from the content works too, and some people even use the digits of π as version numbers.
This applies to components that do not change frequently, including scripts, stylesheets, Flash components, and images.
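A sketch of setting far-future Expires/Cache-Control headers for versioned static assets in NGINX (the location pattern and lifetime are assumptions):

```nginx
# Static resources carry a version in their file name (e.g. app.3f9c2b.js),
# so they can safely be cached for a long time.
location ~* \.(js|css|png|jpg|gif|ico)$ {
    expires 1y;                                    # sets the Expires header
    add_header Cache-Control "public, max-age=31536000";
}
```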

Optimization Rules – use external JavaScript and CSS
Use external JavaScript and CSS as much as possible, because most of our current JavaScript and CSS can then take full advantage of gzip and caching.
Benefits of external styles and scripts:
1. They can be cached by the browser; 2. components can be reused; 3. code can be modularized; 4. they can be processed by build tools (merge, compress, version).

Eighth step, handling elements

Do not gzip-compress binary files such as images and PDFs

Ninth step, render elements

Optimization Rules – place style sheets on top
If you put stylesheets at the bottom, the browser delays displaying any visual components. In addition, using CSS @import is equivalent to placing the style at the bottom, so it is not recommended. This book does not go deeply into the browser rendering mechanism; it just describes the phenomenon and offers a solution.

If a stylesheet is still loading, building the render tree is wasted effort, because nothing should be painted until all stylesheets have been loaded and parsed. Otherwise the page shows a FOUC (flash of unstyled content) before it is ready.
That is, if stylesheets are not placed at the top, the browser blocks page rendering when it encounters styles, waiting for the stylesheet to download.

If the stylesheet is at the bottom, IE additionally shows a white screen. In short, putting stylesheets at the top (in the document head) avoids these problems.

Optimization Rules – we recommend placing the script at the bottom
Browsers generally allow parallel downloads; the number depends on host count, bandwidth, and so on (by default, IE allows 2 per host and Firefox 8). While a script is downloading, however, parallel downloading is effectively disabled.

The impact of the script on the page is:

    1. Blocking rendering of the content behind it
    2. Block download of later components
    3. Browsers block parallel downloads when downloading scripts, as it is necessary to ensure that scripts are executed sequentially.

Here is a more in-depth article on this topic:
"Must JS be placed at the bottom of the body? On the browser rendering mechanism"

However, in practical development it is sometimes hard to follow this rule completely; in that case, place scripts as late as possible.

Optimization Rules – Remove duplicate scripts
Duplicates must be 0; a repeated script increases both the number of HTTP requests and script execution time.

Optimization Rules – Avoid CSS expressions
Using CSS expression() often causes an expression to be evaluated many times, which hurts browser rendering time. In practice, wherever a CSS expression seems necessary, an alternative can almost always be found, so avoid CSS expressions.

Optimization Rules – Optimizing Images
Use GIF and PNG as much as possible

Prefer PNG/GIF-format images, with PNG first; but note that if you must support IE6, pay attention to PNG transparency issues.

Images must be compressed and optimized with tools before use (PNG, JPG)

2. Website back-end performance optimization

Web performance covers a wide range of topics, but most web developers experience performance problems only after the application goes live. Typically, pages that used to load quickly become dramatically slower, normal requests take a very long time, or the server simply throws an error page. There are many possible causes; the most common include:

    • The database connection count exceeds the maximum limit, usually seen as the application's connection pool filling up and connections to the database being refused.
    • Database deadlock
    • Web Server exceeds the maximum number of connections (typically limited on a virtual host)
    • Memory leaks
    • Too many HTTP connections; that is, more traffic than the machine and software were designed to serve
2.1 Reverse Proxy

Improve performance and increase security with reverse proxy
If your web application runs on a single machine, the solution to performance problems might seem obvious: just get a faster machine, with a better processor, more memory, a faster disk array, and so on. Then the new machine can run your WordPress server, Node.js application, Java application, and other programs faster. (If the application accesses a database server, the solution may still seem simple: get two faster machines and a faster link between them.)
The problem is that machine speed may not be the issue. Web applications often run slowly because the computer is constantly switching among different kinds of tasks: interacting with users over thousands of connections, accessing files from disk, and running application code. The application server may end up thrashing, for example running out of memory, swapping memory to disk, and making many requests wait on a single task such as disk I/O.
Instead of upgrading the hardware, you can take an entirely different approach: add a reverse proxy server to offload some of these tasks. The reverse proxy server sits in front of the machines running the application and handles Internet traffic. Only the reverse proxy server is connected directly to the Internet; it communicates with the application servers over a fast internal network.
Using a reverse proxy server frees the application server from waiting on users' interactions with the web application, letting it concentrate on building pages for the reverse proxy server to send across the Internet. Because it no longer needs to wait for client responses, the application server can run at close to its optimal speed.
Adding a reverse proxy server also adds flexibility to your web server setup. For example, if a server of a given type becomes overloaded, another server of the same type can easily be added; if a machine goes down, it can easily be replaced.
Because of this flexibility, a reverse proxy server is also a prerequisite for many other performance-acceleration capabilities, such as:

    • Load balancing (see Tip #2) – a load balancer runs on the reverse proxy server and distributes traffic evenly across a group of application servers. With the right load balancing, you can add application servers without modifying the application at all.
    • Caching static files (see Tip #3) – files that are requested directly, such as images or client code, can be stored on the reverse proxy server and sent straight to the client. This serves them faster, offloads the application server, and lets the application run faster.
    • Web site security – the reverse proxy server can improve site security and be configured to detect and respond to attacks quickly, keeping the application servers protected.

NGINX is specifically designed for use as a reverse proxy server and includes many of the capabilities above. NGINX handles requests in an event-driven way, which is more efficient than traditional servers. NGINX Plus adds more advanced reverse-proxy features, such as application health checks, specialized request routing, advanced buffering, and related support.
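A minimal reverse-proxy sketch in NGINX configuration; the backend address and header choices are illustrative assumptions:

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # Only this server faces the Internet; the application server
        # is reached over the fast internal network.
        proxy_pass http://127.0.0.1:8080;
        # Preserve the original request information for the application.
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```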

2.2 Adding load Balancing

Adding a load balancer is a relatively simple way to improve both performance and site security. Instead of making a core web server bigger and stronger, load balancing distributes traffic across multiple servers. Even if an application is poorly written or hard to scale, a load balancer alone can improve the user experience.
A load balancer is first of all a reverse proxy server (see Tip #1): it accepts traffic from the Internet and forwards the requests to other servers. The twist is that it supports two or more application servers and uses a scheduling algorithm to split requests among them. The simplest load-balancing method is round robin, in which each new request is sent to the next server in the list. Other methods include sending each request to the server with the fewest active connections. NGINX Plus can additionally keep a given user's session on the same server.
Load balancing can improve performance because it avoids overloading a server and other servers have no traffic to handle. It can also simply expand the size of the server because you can add multiple servers with relatively inexpensive prices and make sure they are fully utilized.
Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, FastCGI, SCGI, uwsgi, memcached, and several other application types, including TCP-based applications and other Layer 4 protocols. Analyze your web application to decide which to use and where performance falls short.
The same server or server farm can handle load balancing alongside other tasks, such as SSL termination, supporting HTTP/1.x and HTTP/2 toward clients, and caching static files.
NGINX is often used for load balancing. For more information, see the ebook "Five Reasons to Choose a Software Load Balancer", the basic configuration guidance in "Load Balancing with NGINX and NGINX Plus, Part One", and the full load-balancing documentation in the NGINX Plus Administrator Guide. The commercial NGINX Plus supports more specialized load-balancing features, such as routing based on server response time and load balancing of Microsoft's NTLM protocol.
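The allocation methods above can be sketched with an NGINX upstream block (the server addresses are placeholders; least_conn stands in for the fewest-active-connections method):

```nginx
upstream app_servers {
    least_conn;                  # send each request to the server with the
                                 # fewest active connections; omit this line
                                 # for plain round robin
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    keepalive 32;                # idle keepalive connections to the upstreams
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```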

2.3 Caching of static and dynamic content

Caching can improve the performance of Web applications by accelerating the speed of content transfer. It can take several strategies: preprocess the content to be transferred when needed, save the data to a faster device, store the data closer to the client, or combine these methods.
There are two types of caching:
Static content caching. Infrequently changing files, such as images (JPEG, PNG) and code (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk.
Dynamic content caching. Many web applications generate fresh HTML for every page request. By caching one copy of the generated HTML for a short time, you can drastically reduce the number of pages that must be generated while keeping them fresh enough to meet your needs.
For example, if a page is viewed 10 times per second and you cache it for 1 second, 90% of requests for that page come straight from the cache. If static content is cached separately, even freshly generated pages may be assembled largely from cached pieces.
Web application caching rests on three main techniques:
Shorten the network distance between data and user. Placing a copy of the content on a node closer to the user reduces transfer time.
Speed up the content server. Content can be stored on a faster server, reducing the time needed to retrieve it.
Move data off overloaded machines. A machine often performs a given task more slowly than benchmarks suggest because it is busy with other work. Caching on a different machine improves performance for cached and uncached resources alike, because the host is less overloaded.
Caching for web applications can start inside the web application server itself. First, caching dynamic content reduces the time the application server spends generating it. Second, caching static content, including temporary copies of otherwise dynamic content, further reduces the load on the application server. Caches can then be moved off the application server onto machines that are faster and/or closer to the user, which both relieves the application server and shortens retrieval and transfer times.
A good caching scheme can greatly improve application speed. For most web pages, static data such as large image files makes up more than half of the content. Without caching, retrieving and transferring such data can take several seconds; with caching, it takes well under a second.
As an example of caching in practice, NGINX and NGINX Plus use two directives to set up caching: proxy_cache_path and proxy_cache. You specify the cache's location and size, the maximum time files stay in the cache, and other parameters. A third (and quite popular) directive, proxy_cache_use_stale, even lets the cache serve stale content when the server that provides fresh content is busy or down, so the client gets something rather than nothing. From the user's point of view, this can markedly improve your site's or application's availability.
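A sketch of the three directives just mentioned; the paths, zone sizes, and lifetimes are assumptions:

```nginx
# Where cached responses live, how the cache is keyed, and its size limits.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_pass  http://127.0.0.1:8080;
        proxy_cache app_cache;
        proxy_cache_valid 200 1s;     # the "cache for 1 second" example above
        # Serve stale content if the upstream errors, times out, or is busy
        # refreshing, so clients get something rather than nothing.
        proxy_cache_use_stale error timeout updating;
    }
}
```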
NGINX Plus has a premium cache feature, including support for cache cleanup and display of cache status information on the dashboard.
To get more information about Nginx's caching mechanism, you can browse the "references" and "Nginx content caching" in the Nginx Plus Administrator's Guide.
Note: caching cuts across organizational lines between application developers, capital-investment decision makers, and live network operations. Sophisticated caching setups like those mentioned here are valuable from a DevOps perspective, where engineers blend the concerns of application developers, architects, and operators to meet goals for site functionality, response time, security, and business results such as the number of completed transactions.

2.4 Compressing data

Compression is an acceleration technique with great potential for improving performance. Well-designed, highly effective compression standards already exist for files such as photos (JPEG and PNG), video (MPEG-4), and music (MP3). Each of these standards reduces file size considerably.
Text data, including HTML (plain text plus HTML tags), CSS, and code such as JavaScript, is often transmitted uncompressed. Compressing this data can have a disproportionate impact on perceived application performance, especially on slow or constrained mobile networks.
That is because text data is usually what the user actually interacts with on a page, while multimedia data plays more of a supporting or decorative role. Smart content compression can cut the bandwidth needed for HTML, JavaScript, CSS and other text content by 30% or more, with a corresponding cut in page load time.
If you use SSL, compression also reduces the amount of data that has to be SSL-encoded, recovering some of the CPU time that encryption costs.
There are several approaches to compressing text data. For example, HTTP/2 introduces a novel text-compression scheme designed specifically for header data. Another is gzip compression, which can be enabled in NGINX. Once you have pre-compressed the text data in your service, you can serve the compressed .gz versions directly with the gzip_static directive.
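A sketch of serving pre-compressed files with gzip_static (it assumes build tooling has already produced .gz files next to the originals):

```nginx
location /static/ {
    # If the client accepts gzip and app.js.gz exists alongside app.js,
    # send the .gz file directly instead of compressing on the fly.
    gzip_static on;
}
```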

2.5 Optimizing SSL/TLS

The Secure Sockets Layer (SSL) protocol and its successor, the Transport Layer Security (TLS) protocol, are used on more and more web sites. SSL/TLS encrypts the data sent from the origin server to the user, improving site security. Part of the reason for this trend is that Google now treats SSL/TLS as a positive factor in search engine rankings.
Despite the rising popularity of SSL/TLS, the performance cost of encryption has deterred many sites. SSL/TLS slows a site down for two reasons:
1. The initial handshake on every new connection requires exchanging keys. Browsers using HTTP/1.x repeat this handshake for each of the several connections they open.
2. Data must be continually encrypted on the server and decrypted on the client during transfer.
To encourage the use of SSL/TLS, the authors of HTTP/2 and SPDY (described in the next section) designed those protocols so that a browser needs only one connection per browser session. That greatly reduces the time lost to the first cause. But there is even more that can be done to improve the performance of data delivered over SSL/TLS.
Web servers have mechanisms for optimizing SSL/TLS delivery. For example, NGINX uses OpenSSL running on ordinary hardware to provide performance close to that of dedicated hardware. NGINX's SSL performance is well documented, and it significantly reduces the time and CPU cost of SSL/TLS encryption and decryption.
Further details on improving SSL/TLS performance come down to a few points:
Session caching. Use the ssl_session_cache directive to cache the parameters used for each new SSL/TLS connection.
Session tickets or IDs. Storing SSL/TLS session information in a ticket or ID lets a connection be reused smoothly, without another full handshake.
OCSP stapling. Cut handshake time by caching SSL/TLS certificate revocation information.
NGINX and NGINX Plus can act as the SSL/TLS terminator, handling encryption and decryption of client traffic while communicating with other servers in clear text. To set up NGINX or NGINX Plus as an SSL/TLS terminator, see the documentation on HTTPS connections and encrypted TCP connections.
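The points above map to NGINX directives roughly as follows (certificate paths, sizes, and the resolver are placeholders; OCSP stapling requires a reachable OCSP responder):

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    ssl_session_cache   shared:SSL:10m;   # session cache shared across workers
    ssl_session_timeout 10m;
    ssl_session_tickets on;               # session tickets avoid re-handshaking

    ssl_stapling        on;               # OCSP stapling
    ssl_stapling_verify on;
    resolver 8.8.8.8;                     # needed to reach the OCSP responder
}
```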

2.6 Using HTTP/2 or SPDY

For sites that already use SSL/TLS, HTTP/2 and SPDY can improve performance further, because each connection needs only one handshake. For sites that do not yet use SSL/TLS, HTTP/2 and SPDY remove much of the responsiveness penalty of migrating to SSL/TLS (which would otherwise slow things down).
Google introduced SPDY in 2012 as a faster protocol than HTTP/1.x. HTTP/2, the standard since adopted by the IETF, is based on SPDY. SPDY is widely supported but will soon be superseded by HTTP/2.
The key feature of SPDY and HTTP/2 is that they use a single connection instead of many. The single connection is multiplexed, so it can carry pieces of multiple requests and responses at the same time.
By using one connection, these protocols avoid the overhead of setting up and managing the multiple connections browsers use with HTTP/1.x. A single connection works especially well with SSL, because it minimizes the handshake time SSL/TLS needs to establish a secure link.
The SPDY protocol requires SSL/TLS; HTTP/2 does not officially require it, but all browsers that currently support HTTP/2 use it only when SSL/TLS is enabled. That is, a browser supporting HTTP/2 uses it only if the web site is on SSL and its server accepts HTTP/2 traffic. Otherwise, the browser falls back to HTTP/1.x.
When you adopt SPDY or HTTP/2, you no longer need the traditional HTTP performance tricks such as domain sharding, resource merging, and image spriting. Dropping them makes your code and deployments simpler and easier to manage. To learn more about the changes HTTP/2 brings, see our white paper.
As an example of support for these protocols, NGINX supported SPDY from early on, and most sites that use SPDY run NGINX. NGINX was also an early supporter of HTTP/2: since September 2015, both open-source NGINX and NGINX Plus support it.
Over time, we at NGINX expect most sites to fully enable SSL and move to HTTP/2. This improves security and, as new optimizations are found and implemented, allows simpler code that performs better.
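In NGINX (1.9.5 and later), enabling HTTP/2 alongside SSL is a one-parameter change; the certificate paths here are placeholders:

```nginx
server {
    # The http2 parameter upgrades this SSL listener to HTTP/2;
    # browsers that do not support it fall back to HTTP/1.x.
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
}
```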

2.7 Upgrading the software version

A simple way to improve application performance is to choose your software stack based on an assessment of stability and performance. Beyond that, it is worth using the latest versions: developers of high-performance components keep chasing performance and fixing bugs, and newer releases get more attention from the developer and user communities. Newer versions also take advantage of new compiler optimizations, including tuning for new hardware.
A stable new version usually has better compatibility and higher performance than older ones. Staying up to date keeps you on top of optimizations, bug fixes, and security improvements.
Sticking with old software also keeps you from new features. For example, HTTP/2, mentioned above, currently requires OpenSSL 1.0.1; starting in mid-2016 it will require OpenSSL 1.0.2, which was released in January 2015.
NGINX users can start by moving to the latest open-source NGINX or to NGINX Plus; both include the newest capabilities, such as socket sharding and thread pools (see below), and are continually tuned for performance. Then look closely at the rest of your stack and upgrade whatever you can to the latest version.

2.8 Linux system performance tuning

Linux is the operating system behind most web servers, and as the foundation of your stack it offers many tuning opportunities. By default, many Linux systems are conservatively tuned to use few resources, matching a typical desktop workload. Web applications therefore need some fine tuning for maximum performance.
The Linux optimizations here are specific to web servers. Taking NGINX as the example, here are a few changes worth highlighting when tuning Linux:
Backlog queue. If connections appear to be stalling, consider increasing net.core.somaxconn, the maximum number of connections that can be queued. If the limit is too small, you will see error messages; increase the parameter gradually until the errors stop.
File descriptors. NGINX uses up to two file descriptors per connection. On a system handling many connections, you may need to raise sys.fs.file_max, the system-wide limit on file descriptors, to support the growing load.
Ephemeral ports. When acting as a proxy, NGINX creates temporary ("ephemeral") ports for each upstream server. You can widen net.ipv4.ip_local_port_range to increase the number of ports available, and you can lower net.ipv4.tcp_fin_timeout to reuse inactive ports sooner, supporting faster turnover.
For NGINX, see the NGINX performance tuning guides to learn how to tune your Linux system so that it copes well with large volumes of network traffic without running into limits.

2.9 Web server performance tuning

No matter which Web server you use, you need to optimize it to improve performance. The following recommended methods can be used for any Web server, but some settings are for Nginx. Key optimization measures include:
Access logs. Instead of writing every request's log entry to disk immediately, you can buffer entries in memory and write them in batches. For NGINX, add the buffer=size parameter to the access_log directive to write the log to disk when the buffer is full; add the flush=time parameter to write the buffered entries after a set interval.
Buffering. Buffering holds part of a response in memory until the buffer fills, which makes communication with the client more efficient. Responses that do not fit in memory are written to disk, which can hurt performance. With NGINX proxying, use the proxy_buffer_size and proxy_buffers directives to manage buffering.
Client keepalives. Keepalive connections reduce overhead, especially when SSL/TLS is in use. For NGINX, you can raise keepalive_requests from its default of 100, letting a client make more requests over a single connection, and you can raise keepalive_timeout so keepalive connections stay open longer, which lets subsequent requests be handled more quickly.
Upstream keepalives. Upstream connections, that is, connections to application servers, database servers, and other machines, also benefit from keepalive. For upstream connections, increase keepalive, the number of idle keepalive connections each worker process keeps open. This increases connection reuse and cuts down on opening brand-new connections. For more information, see the article "HTTP Keepalive Connections and Web Performance".
Limits. Restricting the resources that clients can use improves both performance and security. For NGINX, the limit_conn and limit_conn_zone directives limit the number of connections from a given source, while limit_rate limits bandwidth. These limits can stop legitimate users from monopolizing resources and also help fend off attacks. The limit_req and limit_req_zone directives limit client requests. For upstream servers, the server directive in an upstream configuration block accepts a max_conns parameter to limit connections to that upstream server, which avoids overloading it. The associated queue directive creates a queue that holds a specified number of requests for a specified length of time once the max_conns limit is reached.
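The limits above might be combined as follows (zone names, sizes, and rates are placeholders; note that the queue directive is an NGINX Plus feature):

```nginx
# Shared zones keyed by client IP for connection and request-rate limits.
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_req_zone  $binary_remote_addr zone=reqs:10m rate=10r/s;

server {
    limit_conn perip 10;            # at most 10 concurrent connections per client IP
    limit_req  zone=reqs burst=20;  # queue short bursts above 10 requests/second
    limit_rate 500k;                # cap each connection at 500 KB/s
}

upstream backend {
    server 10.0.0.1:8080 max_conns=100;  # cap connections to this upstream server
    queue 50 timeout=30s;                # NGINX Plus only: queue excess requests
}
```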
Worker processes. Worker processes are responsible for handling requests. NGINX uses an event-driven model and operating-system-specific mechanisms to distribute requests efficiently among worker processes. The recommendation is to set worker_processes to one per CPU. The maximum number of worker_connections (512 by default) can be raised on most systems if needed; experiment to find the value that works best for your system.
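In configuration terms (the connection count shown is only a starting point for experimentation):

```nginx
worker_processes auto;    # one worker per CPU core; "auto" detects the core count

events {
    worker_connections 4096;   # raised from the 512 default; test for your system
}
```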
Socket sharding. Typically, a single socket listener distributes new connections to all worker processes. Socket sharding instead creates a socket listener for each worker process, and the kernel assigns connections to the listeners as they become available. This can reduce lock contention and improve performance on multicore systems. Enable socket sharding by adding the reuseport parameter to the listen directive.
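Enabling it is a one-parameter change (the server name is a placeholder):

```nginx
server {
    listen 80 reuseport;   # per-worker listening sockets; the kernel distributes connections
    server_name example.com;
}
```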
Thread pools. Any computer process can be held up by a single slow operation. For web server software, disk access can hold up many faster operations, such as in-memory computation or copying. With a thread pool, slow operations are assigned to a separate set of tasks, while the main processing loop keeps running fast operations. When the disk operation completes, the result is returned to the main loop. In NGINX, two operations, the read() system call and sendfile(), are offloaded to thread pools.
When changing settings for any operating system or supporting service, change one setting at a time and then test the performance. If the change causes problems, or doesn't make your system faster, change it back.
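A sketch of offloading file I/O to a thread pool (the pool name, thread count, and paths are placeholders):

```nginx
# Define a named thread pool, then route blocking disk reads to it
# so worker processes keep serving fast requests without stalling.
thread_pool disk_io threads=32 max_queue=65536;

http {
    server {
        location /downloads/ {
            root /var/www;
            sendfile on;
            aio threads=disk_io;   # use the pool for asynchronous file I/O
        }
    }
}
```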

2.10 Monitoring System Activity to Resolve Issues and Bottlenecks

The key to a high-performance application is monitoring how your system performs in the real world. You must be able to monitor activity within specific devices and across your web infrastructure.
Monitoring activity is mostly passive: it tells you what is happening, and leaves it to you to find and eventually fix the problems.
Monitoring can reveal several different issues. They include:
Server downtime.
The server is faltering and dropping connections.
A large number of cache misses have occurred on the server.
The server did not send the correct content.
Global application performance monitoring tools such as New Relic and Dynatrace help you monitor page load times from remote locations, while NGINX helps you monitor the application-delivery side. Application performance data tells you when your optimizations are actually making a difference, and when you need to consider adding capacity to your infrastructure to meet traffic demands.
To help developers find and resolve issues quickly, NGINX Plus adds application-aware health checks: synthetic transactions that run repeatedly and alert you when problems arise. NGINX Plus also offers session draining, which stops new connections to a server until existing tasks complete, and a slow-start capability that lets a server recovering from a failure ramp back up to speed within the load-balanced group. Used properly, health checks let you spot problems before they become serious enough to affect the user experience, while session draining and slow start let you replace servers without hurting performance or uptime. The figure shows the built-in NGINX Plus real-time activity-monitoring dashboard, covering the server farm, TCP connections, caching, and other web-infrastructure information.

2.11 Summary: Toward a 10x Performance Improvement

These performance-improvement techniques are available to, and work well for, almost any web application; the actual results depend on your budget, the time you can invest, and the gaps in your current implementation. So how do you achieve a 10x performance improvement for your own application?
To guide you through the potential impact of each technique, here are the key points for each of the optimization methods detailed above, though your mileage will almost certainly vary:

    1. Reverse proxy server and load balancing. No load balancing, or poor load balancing, can cause episodes of very poor performance. Adding a reverse proxy such as NGINX can prevent the web application from thrashing between memory and disk. Load balancing can move work from overloaded servers to idle ones and also makes scaling easy. These changes can deliver dramatic improvement: a 10x gain over the worst moments of your current implementation is easily achievable, with smaller but still substantial gains in overall performance.
    2. Caching dynamic and static content. If your web server is doubling as an overburdened application server, caching dynamic content can deliver a 10x improvement at peak times; caching static files can improve performance by severalfold as well.
    3. Compressing data. Using efficient media formats, such as JPEG for photographs, PNG for graphics, MPEG-4 for video, and MP3 for audio, can greatly improve performance. Once those are in place, compressing text data can improve initial page-load times by as much as 2x.
    4. Optimizing SSL/TLS. The security handshake has a large performance impact, and optimizing it can yield perhaps a 2x improvement in initial responsiveness, especially for text-heavy sites. Optimizing media-file delivery under SSL/TLS yields only small gains.
    5. Implementing HTTP/2 and SPDY. Used alongside SSL/TLS, these protocols are likely to improve the performance of your entire site.
    6. Tuning Linux and web server software. Optimizations such as tuning buffering, using keepalive connections, and offloading time-sensitive tasks to thread pools can significantly boost performance; thread pools, for example, can speed disk-intensive tasks by as much as an order of magnitude.
