tcp_nodelay (1)


Reposted from: 100continue.iteye.com

Project development background: When Tengine receives POST data from a client and forwards it to the backend application server, buffering mode is enabled by default. If the client sends only a small amount of data, Tengine keeps all of the POST data in memory before forwarding it to the backend; if the client sends a large amount of data (whether it stays in memory or goes to a file is decided by the buffer size set in the configuration), Tengine writes the POST data to a temporary file on disk, and only after all of the POST data has been received is the temporary file read back into memory and forwarded to the backend application server. Under heavy load, whenever the POST data exceeds the buffer size, Tengine therefore performs a large number of I/O operations, which is a performance risk. The "request no buffering" feature was developed to solve this problem. The design idea is as follows: the client_body_postpone_sending directive sets the size of the in-memory buffer for POST data, and the proxy_request_buffering and fastcgi_request_buffering switches decide whether request no buffering is enabled. With request no buffering enabled, as soon as Tengine has received more POST data than the configured buffer size it forwards the data to the backend application server immediately, avoiding the heavy I/O described above.

Performance test findings:

1. Performance bottleneck caused by TCP delay

Problem description: During performance testing we found that, compared with the traditional buffering mode, performance was very poor when the request body was slightly larger than the no-buffer size (for example, a 10 KB request body with an 8 KB buffer): the average user wait time reached about 60 ms and QPS dropped by a factor of 100.

Cause: The problem is TCP delay, that is, the Nagle algorithm. The algorithm exists to protect network performance: an application may hand the TCP stack data of any size, even a single byte at a time, but every TCP segment carries at least 40 bytes of flags and headers, so sending a large number of packets that each contain only a little data severely degrades network throughput. The Nagle algorithm therefore tries to accumulate TCP data before sending a packet, encouraging full-sized segments (at most roughly 1500 bytes on a LAN, a few hundred bytes across the Internet). This causes several HTTP performance problems. First, a small HTTP message may not fill a packet and may be delayed while waiting for additional data that will never arrive. Second, the Nagle algorithm interacts badly with delayed acknowledgements: Nagle holds data back until an acknowledgement arrives, but the acknowledgement itself is delayed by 100-200 ms by the delayed-ACK algorithm.

Initial attempt: Add tcp_nodelay on to the configuration file. Verification showed this to be ineffective, because that directive only takes effect on Tengine's front-end (client-facing) connections, not on the connections to the backend.

The real solution: Add the following code, so that when tcp_nodelay is on in the configuration file, TCP_NODELAY is also set on the backend connection for data transfer.
C code:

    /* Also enable TCP_NODELAY on the upstream (backend) connection */
    if (clcf->tcp_nodelay && c->tcp_nodelay == NGX_TCP_NODELAY_UNSET) {
        ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "upstream tcp_nodelay");

        tcp_nodelay = 1;

        if (setsockopt(c->fd, IPPROTO_TCP, TCP_NODELAY,
                       (const void *) &tcp_nodelay, sizeof(int)) == -1)
        {
            ngx_connection_error(c, ngx_socket_errno,
                                 "setsockopt(TCP_NODELAY) failed");
            ngx_http_upstream_finalize_request(r, u, 0);
            return;
        }

        c->tcp_nodelay = NGX_TCP_NODELAY_SET;
    }
2. Performance problems when no-delay is enabled on both the front end and the back end of Tengine

Problem description: As mentioned above, the Nagle algorithm exists to protect network performance. Once TCP_NODELAY is turned on, that protection is gone: when the request body is slightly larger than the no-buffer size (for example, a 10 KB request body with an 8 KB buffer) and access pressure is high, TCP transmits a large number of small chunks of data, resulting in low network utilization and a very large number of packets.

Solution: Weigh the size range of the data actually uploaded in production, the server hardware capacity and other factors, and choose a reasonable buffer size so that the bulk of upload request sizes do not fall right around the buffer size boundary; also enable monitoring to watch for changes in data traffic. A configuration sketch is shown below.
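As a rough illustration only, here is a minimal configuration sketch for the scenario above. The directive names (tcp_nodelay, client_body_postpone_sending, proxy_request_buffering) are the ones discussed in this article; the location path, upstream name and buffer size are hypothetical placeholders and should be chosen from the measured upload-size distribution.

    # Hypothetical Tengine location for uploads; names and sizes are illustrative only.
    location /upload {
        tcp_nodelay                  on;     # with the patch above, also applied to the backend connection
        client_body_postpone_sending 64k;    # in-memory buffer; pick a size most uploads do not straddle
        proxy_request_buffering      off;    # "request no buffering": forward the body once the buffer fills
        proxy_pass                   http://app_backend;
    }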

(2)

Today, while using Nginx as a web cache, I found that adding one parameter to the http block, tcp_nodelay, noticeably improves the real-time responsiveness of the data. Let's talk about how tcp_nodelay works.

TCP_NODELAY and TCP_CORK both control the "Nagling" of packets; here we mainly discuss TCP_NODELAY. Nagling means using the Nagle algorithm to assemble smaller packets into larger frames. John Nagle is the inventor of the algorithm, which is named after him; he first used it in 1984 to try to solve network congestion problems at Ford Aerospace (see IETF RFC 896 for details). The problem he solved is the so-called silly window syndrome: a typical terminal application sends a packet for every keystroke, so a packet usually carries one byte of payload plus 40 bytes of headers, a 4000% overhead that easily congests the network. The Nagle algorithm later became a standard and was immediately implemented across the Internet. It is now the default behaviour, but in some situations it is desirable to turn it off.
Now suppose an application makes a request to send a small block of data, for example clicking an OK button in a social-networking game. We can either send the data immediately or wait for more data to accumulate and then send it all at once. Interactive and client/server applications benefit greatly from sending immediately: when we send a short request and then wait for a large response, the overhead is low relative to the total amount of data transferred, and the response time is much better if the request goes out right away. This is done by setting the socket's TCP_NODELAY option, which disables the Nagle algorithm; in Nginx, set tcp_nodelay on and place it in the http block.
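As a minimal sketch of the socket-level operation just described (assuming a POSIX system and a TCP socket that has already been created and connected elsewhere; the function name is only illustrative):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable the Nagle algorithm on a connected TCP socket, so that small
     * writes are pushed out immediately instead of being coalesced. */
    static int
    disable_nagle(int sock)
    {
        int flag = 1;

        return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                          (const void *) &flag, sizeof(flag));
    }

This is essentially what tcp_nodelay on asks Nginx to do on its client connections (Nginx applies the option to connections in the keep-alive state).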
The opposite situation is when we want to accumulate as much data as possible before sending it over the network in one go; this favours the throughput of bulk transfers, the typical case being a file server. Using the Nagle algorithm alone causes problems here. If you are sending large amounts of data, you can instead set the TCP_CORK option, which controls Nagling in a way exactly opposite to TCP_NODELAY (TCP_CORK and TCP_NODELAY are mutually exclusive). Let's look at how it works.
Suppose the application uses sendfile() to transfer a large amount of data (in Nginx, sendfile on). The application protocol usually requires some information to be sent first to describe the data; this is in fact the header. Typically the header is small, and with TCP_NODELAY set on the socket the packet containing the header is transmitted immediately. In some cases (depending on internal packet counters) the bulk data then has to wait until that header packet is acknowledged by the peer, so the transfer of the large data is deferred and unnecessary network traffic is exchanged.
If, however, we set TCP_CORK on the socket (which can be likened to inserting a "plug" into the pipe), the packet containing the header is filled up with the bulk data, and all of the data is sent automatically in full-sized packets. When the data transfer is complete, it is best to clear the TCP_CORK option, "pulling the plug" from the connection, so that any remaining partial frames are sent out. This is just as important as "inserting the plug".
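To make the "insert the plug, send, pull the plug" sequence concrete, here is a minimal sketch under the assumption of a Linux system (TCP_CORK and sendfile(2) as used below are Linux interfaces); error handling is trimmed and the function name is only illustrative:

    #include <fcntl.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <string.h>
    #include <sys/sendfile.h>
    #include <sys/socket.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Send a small protocol header followed by a whole file on one TCP socket,
     * letting the kernel pack both into full-sized segments. */
    static int
    send_header_and_file(int sock, const char *header, const char *path)
    {
        int          on = 1, off = 0;
        off_t        offset = 0;
        struct stat  st;

        int fd = open(path, O_RDONLY);
        if (fd == -1) {
            return -1;
        }
        if (fstat(fd, &st) == -1) {
            close(fd);
            return -1;
        }

        /* "Insert the plug": hold partial segments until they fill up. */
        setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));

        write(sock, header, strlen(header));       /* small header          */
        sendfile(sock, fd, &offset, st.st_size);   /* bulk body, zero copy  */

        /* "Pull the plug": flush whatever partial segment is still buffered. */
        setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));

        close(fd);
        return 0;
    }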
All in all, if you know you will send several data sets together (such as the header and body of an HTTP response), we recommend setting the TCP_CORK option so that there is no delay between them. It can greatly benefit the performance of WWW, FTP and file servers, and it simplifies your work. A configuration sketch follows.
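In Nginx the corresponding directive is tcp_nopush (implemented with TCP_CORK on Linux), which only takes effect together with sendfile. A minimal sketch of an http block combining the options discussed in this article (the values are simply the on settings, not tuned recommendations):

    http {
        sendfile    on;    # serve file responses with sendfile()
        tcp_nopush  on;    # cork: fill full-sized segments while the file is sent
        tcp_nodelay on;    # no delay for the remaining small data on keep-alive connections
    }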
