Nginx optimization, including HTTPS, keepalive, etc.



First, Nginx tcp_nopush, tcp_nodelay, and sendfile

1. tcp_nodelay

How can you force a socket to send the data sitting in its buffer? One solution is the TCP stack's tcp_nodelay option, which causes the data in the buffer to be sent out immediately.

Nginx's tcp_nodelay option adds the TCP_NODELAY flag when a new socket is opened. There is, however, a situation where an application emits one packet per operation, and a typical such packet carries one byte of payload plus a 40-byte header, a 4000% overhead that can easily congest the network. To avoid this, the TCP stack waits up to 0.2 seconds for more data: instead of sending a packet per operation, it coalesces the data into one larger packet during that window. This mechanism is the Nagle algorithm.

Nagle later became a standard and was quickly implemented across the Internet. It is now the default behavior, but in some cases it is desirable to turn it off. Suppose an application requests to send a small chunk of data. There are two policies to choose from: send the data immediately, or wait for more data to accumulate and then send a larger packet. Sending immediately greatly benefits interactive and client/server applications: if the request goes out at once, the response time is faster. This is done by setting the socket's TCP_NODELAY option to on, which disables the Nagle algorithm (no need to wait 0.2 s).
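In Nginx this is a single directive, shown here as a minimal sketch (tcp_nodelay is already on by default and applies to keep-alive connections):

    http {
        tcp_nodelay on;    # disable the Nagle algorithm; this is Nginx's default
    }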

2. tcp_nopush

In Nginx, the tcp_nopush configuration is "mutually exclusive" with tcp_nodelay. It configures the packet size for sending data in one go: rather than sending whatever has accumulated after 0.2 seconds, it sends once the buffered data has accumulated to a certain size.

Note: in Nginx, tcp_nopush must be used in conjunction with sendfile.

3. sendfile

Today's popular web servers all offer a sendfile option to improve server performance. What exactly is sendfile, and how does it affect performance? sendfile is a system call available since Linux 2.0+, and a web server can decide through its configuration whether to take advantage of it. First, look at the traditional network transfer process without sendfile:

    read(file, tmp_buf, len);
    write(socket, tmp_buf, len);

HDD >> kernel buffer >> user buffer >> kernel socket buffer >> protocol stack

1) In general, a network application reads data from the hard disk and then writes that data to a socket to complete the network transfer. The two lines above express this in code, but they mask many underlying operations. Here is how those two lines execute underneath:

    1. The system calls read(), producing a context switch from user mode to kernel mode; a DMA copy then reads the file data from the hard disk into a kernel buffer.
    2. The data is copied from the kernel buffer into the user buffer, and then read() returns, producing a context switch from kernel mode back to user mode.
    3. The system calls write(), producing a context switch from user mode to kernel mode, and the data from step 2 is copied from the user buffer into a kernel buffer (the 2nd copy of the data into a kernel buffer); this time, though, it is a different kernel buffer, one associated with the socket.
    4. write() returns, producing a context switch from kernel mode to user mode (the 4th switch), and DMA copies the data from the kernel socket buffer to the protocol stack (the 4th copy).

The four steps above involve four context switches and four copies, and clearly, reducing the number of switches and copies will improve performance. In kernel 2.0+, the sendfile() system call simplifies the steps above: sendfile() reduces both the number of switches and the number of copies.

2) Now look at the network transfer process with sendfile():

    sendfile(socket, file, len);

HDD >> kernel buffer (fast copy to kernel socket buffer) >> protocol stack

    1. The system calls sendfile(), which copies the hard disk data into a kernel buffer via DMA; the data is then copied directly into the other, socket-associated kernel buffer. There is no switching between user mode and kernel mode: the buffer-to-buffer copy completes entirely inside the kernel.
    2. DMA then copies the data directly from the kernel socket buffer to the protocol stack. Again there is no switch, and no copy between user mode and kernel mode, since the data never leaves the kernel.

Fewer steps, fewer switches, fewer copies: performance naturally increases. This is why enabling the sendfile on option in the Nginx configuration file improves web server performance.
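Here is a minimal C sketch of the two paths described above, for illustration only: it assumes sock_fd is an already-connected TCP socket and file_fd an open file descriptor, and the helper function names are ours, not from the original.

    /* Compile on Linux with: cc -c transfer_sketch.c */
    #include <sys/sendfile.h>
    #include <unistd.h>

    /* Traditional path: 4 context switches, 4 copies (2 of them through user space). */
    ssize_t send_with_read_write(int sock_fd, int file_fd, size_t len)
    {
        char tmp_buf[4096];
        size_t total = 0;
        while (total < len) {
            ssize_t n = read(file_fd, tmp_buf, sizeof(tmp_buf)); /* kernel -> user copy */
            if (n <= 0)
                break;
            if (write(sock_fd, tmp_buf, (size_t)n) != n)         /* user -> kernel copy */
                return -1;
            total += (size_t)n;
        }
        return (ssize_t)total;
    }

    /* sendfile path: the data never leaves kernel space. */
    ssize_t send_with_sendfile(int sock_fd, int file_fd, size_t len)
    {
        off_t offset = 0;
        return sendfile(sock_fd, file_fd, &offset, len); /* in-kernel buffer-to-buffer copy */
    }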

In conclusion, all three parameters should be configured to on:

    sendfile    on;
    tcp_nopush  on;
    tcp_nodelay on;

Second, Nginx long connections: keepalive

When using Nginx as a reverse proxy, supporting long connections requires two things:

    • The connection from the client to Nginx is a long connection
    • The connection from Nginx to the server is a long connection

1. Maintaining a long connection with the client:

By default, Nginx automatically enables keep-alive support for connections from the client (provided the client's HTTP requests use keep-alive). General scenarios can use this as-is, but in some more specific scenarios individual parameters need adjusting (keepalive_timeout and keepalive_requests).

    http {
        keepalive_timeout 120s 120s;
        ...
    }

1) keepalive_timeout syntax:

    keepalive_timeout timeout [header_timeout];

    1. First parameter: sets the timeout (default 75s) during which a keep-alive client connection stays open on the server side; a value of 0 disables keep-alive client connections.
    2. Second parameter (optional): sets a "Keep-Alive: timeout=time" value in the response header field; it can usually be left unset.

Note: the keepalive_timeout default of 75s is generally sufficient; for internal server-to-server communication scenarios with larger requests, raise it appropriately to 120s or 300s.

2) keepalive_requests: the keepalive_requests directive sets the maximum number of requests that can be served over one keep-alive connection; when the maximum is reached, the connection is closed. The default is 100.

The real meaning of this parameter is that once a keep-alive connection is established, Nginx sets a counter for it, recording the number of client requests received and processed over that long connection. When the counter reaches the configured maximum, Nginx forcibly closes the long connection, forcing the client to establish a new one.

In most cases, when QPS (requests per second) is not very high, the default of 100 is adequate. But in some higher-QPS scenarios (for example beyond 10000 QPS, even up to 30000 or 50000), the default of 100 is too low. A simple calculation: at QPS = 10000, the client sends 10,000 requests per second (usually over several long connections), and each connection can carry at most 100 requests, so on average 100 long connections per second are closed by Nginx. To sustain the QPS, the client has to re-create roughly 100 new connections every second. You will therefore find a large number of sockets in the TIME_WAIT state (even though keep-alive is in effect between the client and Nginx). So for higher-QPS scenarios, this parameter must be increased, to avoid connections being created and discarded in large numbers and to reduce TIME_WAIT.
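For example, a sketch of such high-QPS tuning (the value 10000 is illustrative, not a prescription from the original):

    http {
        keepalive_timeout  120s 120s;
        keepalive_requests 10000;    # illustrative: sized so connections are not recycled ~100 times per second
    }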

2. Maintaining a long connection with the server:

To keep long connections between Nginx and the backend server (called the upstream in Nginx), typical settings are as follows. (By default, Nginx accesses the backend over short connections (HTTP/1.0): when a request comes in, Nginx opens a new port to establish a connection with the backend, and the backend actively closes the link once it finishes.)

    http {
        upstream backend {
            server 192.168.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
            server 192.168.0.2:8080 weight=1 max_fails=2 fail_timeout=30s;
            keepalive 300;                   # this is important! (the directive needs a connection count; 300 is illustrative)
        }

        server {
            listen 8080 default_server;
            server_name "";

            location / {
                proxy_pass http://backend;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header X-Real-IP $remote_addr;
                add_header Cache-Control no-store;
                add_header Pragma no-cache;

                proxy_http_version 1.1;      # these two are best also set
                proxy_set_header Connection "";
            }
        }
    }

1) Two parameters need to be set in the location block:

    http {
        server {
            location / {
                proxy_http_version 1.1;     # these two are best also set
                proxy_set_header Connection "";
            }
        }
    }

HTTP support for long connections begins with version 1.1, so it is best to set the proxy_http_version directive to "1.1" and to clear the "Connection" header. The meaning of clearing it, as I understand it, is to strip the Connection header that arrived from the client: even if the connection between the client and Nginx is a short one, the connection between Nginx and the upstream can still be a long one. In that case, the "Connection" header from the client request must be cleared.

2) The keepalive setting in upstream: the meaning of keepalive here is not an on/off switch for long connections, nor a timeout, nor the maximum number of connections in the long-connection pool. The official explanation:

    1. The connections parameter sets the maximum number of idle keepalive connections to upstream servers.
    2. When this number is exceeded, the least recently used connections are closed.
    3. It should be particularly noted that the keepalive directive does not limit the total number of connections to upstream servers that an nginx worker process can open.

Let's assume a scenario: an HTTP service acting as the upstream server receives requests with a response time of 100 milliseconds. To achieve 10000 QPS, approximately 1000 HTTP connections need to be established between Nginx and the upstream servers. Nginx maintains a connection pool for this purpose: each request takes a connection from the pool, and when the request ends the connection is put back into the pool with its status changed to idle. Suppose the keepalive parameter for this upstream server is set relatively small, say the common 10.

A. Assuming requests and responses are uniform and smooth, the 1000 connections put back into the pool are immediately taken up by subsequent requests; the pool's idle count stays very small, close to 0, and the connection count does not oscillate repeatedly.

B. In reality, requests and responses cannot be perfectly smooth. Using 10 milliseconds as the unit, look at what happens to the connections (recall the scenario: 1000 connections, 100 ms response time, 10,000 requests per second). Assume the responses stay smooth but the requests do not: the first 10 ms brings only 50 requests, the second 10 ms brings 150:

    1. In the next 10 milliseconds, 100 connections finish their requests and are recycled into the pool, but the requests are uneven: instead of the expected 100 new requests in these 10 ms, only 50 arrive. The pool has recycled 100 connections and allocated 50, so 50 idle connections sit in the pool.
    2. Now look at the keepalive=10 setting, which allows at most 10 idle connections in the pool. Nginx therefore has to close 40 of the 50 idle connections, keeping only 10.
    3. In the following 10 milliseconds, 150 requests arrive while 100 requests finish and release their connections. 150 - 100 = 50, so 50 more connections are needed; the 10 idle connections the pool had retained are used up, and Nginx has to create 40 new connections to meet demand.

C. Similarly, if the responses are uneven, the same fluctuation in the number of connections occurs.

One of the culprits behind this repeated oscillation in connection count is keepalive, the maximum number of idle connections. With all 1000 connections in the pool in frequent use, the probability of briefly having more than 10 idle connections is simply too high. To avoid the oscillation above, you must consider increasing this parameter: in the scenario above, setting keepalive to 100 or 200 buffers uneven requests and responses very effectively.

Summary: the keepalive parameter must be set carefully, especially in high-QPS scenarios. It is recommended to make a rough estimate first: from the QPS and the average response time you can calculate the number of long connections required. For example, the earlier 10000 QPS at a 100 millisecond response time gives roughly 1000 long connections (10000 x 0.1 s). Then set keepalive to 10% to 30% of that number. Lazier readers can simply set keepalive=1000 or so, which is generally fine.
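As a hedged sketch under the article's own numbers (server addresses reused from the earlier example):

    upstream backend {
        server 192.168.0.1:8080;
        server 192.168.0.2:8080;
        keepalive 200;    # 10%-30% of the ~1000 connections estimated from 10000 QPS x 100 ms
    }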

3. Putting it together: large numbers of TIME_WAIT connections

1) Two situations cause a large number of TIME_WAIT states on the Nginx side:

    • keepalive_requests is set relatively low; under high concurrency, once the value is exceeded, Nginx forcibly closes the keep-alive long connection it maintains with the client (actively closing the connection leaves the TIME_WAIT on the Nginx side);
    • keepalive (the maximum number of idle connections) is set relatively low; under high concurrency the connection count oscillates frequently (connections beyond this value are closed), so the keep-alive long connections to the backend servers are repeatedly closed and reopened.

2) What causes a large number of TIME_WAIT states on the backend server: Nginx has not enabled long connections to the backend, i.e. proxy_http_version 1.1; and proxy_set_header Connection ""; are not set. The backend server then closes the connection after every request, and under high concurrency the server side shows a large number of TIME_WAIT states.
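Combining both fixes into one hedged sketch (all values illustrative):

    http {
        keepalive_requests 10000;           # 1) stop recycling client keep-alive connections too often

        upstream backend {
            server 192.168.0.1:8080;
            keepalive 200;                  # 1) idle pool large enough to absorb bursts
        }

        server {
            location / {
                proxy_pass http://backend;
                proxy_http_version 1.1;     # 2) enable long connections to the backend
                proxy_set_header Connection "";
            }
        }
    }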

Third, Nginx HTTPS configuration

1. Configuration

    server {
        listen          80 default_server;
        listen          443 ssl;
        server_name     toutiao.iqiyi.com toutiao.qiyi.domain m.toutiao.iqiyi.com;
        root            /data/none;
        index           index.php index.html index.htm;

        ### ssl settings start
        ssl_protocols               TLSv1 TLSv1.1 TLSv1.2;
        ssl_certificate             /usr/local/nginx/conf/server.pem;
        ssl_certificate_key         /usr/local/nginx/conf/server.key;
        ssl_session_cache           shared:SSL:10m;
        ssl_session_timeout         10m;
        ssl_ciphers                 ALL:!kEDH!ADH:RC4+RSA:+HIGH:+EXP;
        ssl_prefer_server_ciphers   on;
        ### ssl settings end
        ...

2. Performance comparison

Access to Nginx over HTTPS is generally about 30% slower than over HTTP (HTTPS access mainly consumes CPU on the Nginx server). This was verified through the following experiment:

    1. Behind Nginx hang 5 Java servers; each runs a simple Java program that reads a random value from a Redis cache and outputs it to the front end. (The more Java servers hang behind it, the greater the pressure on Nginx.)
    2. Load-test Nginx with 3000 concurrent connections and 30,000 total requests, comparing runs in which all requests return 200.

Experimental results:

A. Server load: with HTTPS access, the server CPU goes up to 20%; with HTTP access, the server CPU stays around 1%. With either kind of access, the Nginx server's load average and memory usage are not high.

B. Nginx throughput (QPS): with HTTPS access, 30,000 requests took 28 s (about 3x the HTTP time); with HTTP access, 30,000 requests took 9 s.

To count the QPS, empty the Nginx log before each run, apply the load, and after it finishes use the following command:

    # cat log.2.3000https | grep '/api/news/v1/info?newsid=' | awk '{print $3}' | uniq | wc -l
    37

Note: do not keep increasing the load; under unbounded load the backend Java services eventually become the bottleneck, slowing the responses returned to Nginx, which in turn reduces the pressure on Nginx itself.

3. Optimization

By default, Nginx uses the DHE algorithms for key exchange, which are very inefficient. You can remove the kEDH algorithms with the following directive:

    ssl_ciphers ALL:!kEDH!ADH:RC4+RSA:+HIGH:+EXP;

