Client, server, and nginx must use keepalive consistently, otherwise it will not take effect

Why do we need keepalive?

Before we talk about keepalive, let's go over some simple TCP basics (if you already know this, skip ahead). The first thing to be clear about is that there is no such thing as a "request" at the TCP layer; it is simply wrong to say that a request is sent at the TCP layer.

TCP is a means of communication, while "request" is a transactional concept. HTTP is a transactional protocol, so saying that you send an HTTP request is perfectly fine. I also often hear interviewers complain that candidates for operations roles are unclear on the basic concept of the TCP three-way handshake: when asked how TCP establishes a connection, a candidate will start with "as the client I send a request to the server, then the server sends a request back to me..."


There is no concept of a request at the TCP layer. HTTP is a transactional protocol, and it is HTTP that has requests; TCP segments merely carry the HTTP requests and responses.

Now we can explain why keepalive exists. After a connection is established, the application or upper-layer protocol may not send any data, or may not send data for a long time. When nothing has passed over the connection for a long time, how do we tell whether the other side is still online, whether it has dropped off, or whether it simply has nothing to send and the connection no longer needs to be kept? This is a scenario the TCP protocol design has to consider.

TCP solves this problem in a clever way: after a period of inactivity, TCP automatically sends a probe to the other side. If the other side responds, it is still online and the connection can be kept. If the other side does not respond even after retries, the connection is considered lost and there is no need to keep it.

How do I turn on keepalive?

Keepalive is not turned on by default, and there is no global switch on Linux that enables TCP keepalive for every connection. An application that needs keepalive must enable it on each TCP socket individually. The Linux kernel has three parameters that affect keepalive behavior:

1. net.ipv4.tcp_keepalive_intvl = 75
2. net.ipv4.tcp_keepalive_probes = 9
3. net.ipv4.tcp_keepalive_time = 7200

tcp_keepalive_time is in seconds and indicates how long the connection must be idle (no data transferred) before the first probe is sent; tcp_keepalive_intvl is also in seconds and is the interval between successive probes; tcp_keepalive_probes is the number of probes to send.

The TCP socket also has three options corresponding to these kernel parameters, which can be set for an individual socket via the setsockopt system call:

TCP_KEEPCNT: overrides tcp_keepalive_probes
TCP_KEEPIDLE: overrides tcp_keepalive_time
TCP_KEEPINTVL: overrides tcp_keepalive_intvl

Take my system's defaults as an example: the kernel default for tcp_keepalive_time is 7200 s. If the application enables keepalive on the socket and sets TCP_KEEPIDLE to 60, then the TCP stack sends the first probe once it finds the connection has been idle for 60 s with no data transferred.
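As a minimal sketch of how an application might do this on Linux (the idle value of 60 matches the example above; the probe interval and count below are illustrative values, not taken from the original post):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Enable TCP keepalive on an already-created socket fd and override
       the kernel defaults for this socket only. Returns 0 on success. */
    int enable_keepalive(int fd)
    {
        int on    = 1;
        int idle  = 60;   /* TCP_KEEPIDLE: idle seconds before the first probe */
        int intvl = 10;   /* TCP_KEEPINTVL: seconds between probes (illustrative) */
        int cnt   = 9;    /* TCP_KEEPCNT: unanswered probes before giving up (illustrative) */

        if (setsockopt(fd, SOL_SOCKET,  SO_KEEPALIVE,  &on,    sizeof(on))    < 0) return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle))  < 0) return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0) return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt))   < 0) return -1;
        return 0;
    }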

Are TCP keepalive and HTTP keep-alive the same thing?

Many people, on first seeing this question, will realize that the "keepalive" they casually talk about is not well defined: they never specify whether they mean the TCP layer or the HTTP layer, and the two must not be lumped together. TCP keepalive and HTTP keep-alive are completely different concepts.

TCP-layer keepalive has been explained above, so what is HTTP-layer keep-alive? Earlier, when discussing how a TCP connection is established, I drew the three-way handshake. After the connection is up, HTTP uses TCP to carry its requests and responses, and a complete HTTP transaction runs on top of it. (Diagram omitted.)

The diagram simplifies the HTTP request and response; in practice a request or response may span multiple TCP segments.

You can see that a complete HTTP transaction has four stages: connection establishment, request sending, response receiving, and connection teardown. In the early days the data transferred over HTTP was mostly text, and a single request might fetch everything that needed to be returned; but rendering a full page today takes many requests (images, JS, CSS, and so on), and if every HTTP request required its own TCP connection to be set up and torn down, that overhead would be completely unnecessary.

With HTTP keep-alive enabled, an existing TCP connection can be reused: after the current request has been answered, the server does not close the TCP connection immediately but waits a while for a possible second request from the browser. Browsers usually send the next request right after the first one returns. If only one connection is used at a time, then the more requests a single TCP connection handles, the more connection setup and teardown cost keep-alive saves.

Of course, browsers usually open several connections to fetch resources from the server, but even so, keep-alive still speeds up resource loading. Since HTTP/1.1, keep-alive is enabled by default, and the Connection option was added to the HTTP header fields: Connection: keep-alive turns it on, and Connection: close turns it off. Note that the HTTP mechanism is spelled keep-alive while the TCP mechanism is spelled keepalive; even the notation differs. So TCP keepalive and HTTP keep-alive are not the same thing.
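Purely as an illustration (the host and content length are placeholders), an HTTP/1.1 exchange that keeps the connection open for the next request might look like this:

    GET /index.html HTTP/1.1
    Host: www.example.com
    Connection: keep-alive

    HTTP/1.1 200 OK
    Content-Length: 1024
    Connection: keep-alive

    (response body; the TCP connection stays open for the next request)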

How is TCP keepalive set in Nginx?

Back to the problem that prompted this post: the client sends a request to the Nginx server, and the server needs a while to compute before replying, longer than the 90 s that LVS keeps a session. Capturing packets on the server with tcpdump and analyzing the result locally in Wireshark showed that the timestamps of the fifth packet and the last packet were about 90 s apart, as shown in the second screenshot (omitted here).

Having confirmed that the LVS session was indeed expiring, I started looking into how to set TCP keepalive in Nginx. The first option I found was keepalive_timeout. A colleague told me it works like this: when the value is 0, keepalive is disabled; when it is a positive integer, the connection is kept open for that many seconds. So I set keepalive_timeout to 75 s, but testing showed it had no effect.

Obviously keepalive_timeout cannot solve the TCP-level keepalive problem. In fact, Nginx has quite a few keepalive-related options; the commonly used ones are described below.


At the TCP level, Nginx has to care about keepalive both with the client and with the upstream; at the HTTP level, Nginx has to care about keep-alive with the client and, if the upstream speaks HTTP, keep-alive with the upstream as well. In short, it is fairly involved; a configuration sketch follows below.
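A minimal configuration sketch of the HTTP-level pieces (the upstream name "backend", the address, and the numbers are placeholders, not values from the original post):

    http {
        keepalive_timeout 65;                 # HTTP keep-alive with the client: keep an idle connection open 65 s

        upstream backend {                    # "backend" is a placeholder name
            server 127.0.0.1:8080;            # placeholder upstream address
            keepalive 16;                     # HTTP keep-alive with the upstream: cache up to 16 idle connections per worker
        }

        server {
            listen 80;

            location / {
                proxy_pass http://backend;
                proxy_http_version 1.1;       # keep-alive to the upstream requires HTTP/1.1
                proxy_set_header Connection "";   # and a cleared Connection header
            }
        }
    }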

So once the TCP-layer keepalive and HTTP keep-alive are clear, you won't configure the wrong keepalive for Nginx. At the time, though, I was not sure whether Nginx even had an option for TCP keepalive, so I opened the Nginx source code and searched for TCP_KEEPIDLE (the relevant code listing is omitted here).

From the context of that code I could see that TCP keepalive is configurable, so I kept looking for which directive configures it, and finally found that the so_keepalive parameter of the listen directive configures keepalive on the TCP socket.
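For reference, so_keepalive accepts on, off, or a keepidle:keepintvl:keepcnt triple, where any field may be left empty to keep the system default; a sketch with placeholder values:

    server {
        # TCP keepalive on accepted connections: first probe after 10 minutes idle,
        # system-default probe interval, drop the connection after 9 failed probes
        listen 80 so_keepalive=10m::9;
    }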

Reference: http://www.bubuko.com/infodetail-260176.html




Having discussed keepalive above, this time let's look at the typical scenario of Nginx reverse-proxying Tomcat and what difference enabling keepalive or not actually makes. The data here can only serve as a qualitative reference; be sure to test against your actual workload.

    • Obviously, keepalive only takes effect when both the client and the server enable it;
    • If keepalive is not used (on either the client or the server side), the server actively closes the TCP connection, and a large number of connections end up in TIME_WAIT;
    • Whether keepalive is actually used is more complicated than it looks and is not decided purely by an HTTP header;
    • Nginx appears to maintain a pool of long-lived connections to the upstream, so you rarely see TIME_WAIT there; the connections stay in the ESTABLISHED state.

Nginx has keepalive configuration in two places:

One is keepalive_timeout under the http block, which sets the timeout for idle connections with the client (the downstream in the figure); the other is the keepalive directive configured inside an upstream block. Note that its unit is not time: it is the maximum number of idle keepalive connections cached to the upstream servers. A short sketch follows.
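A sketch of the two settings side by side (the upstream name, address, and numbers are placeholders):

    http {
        keepalive_timeout 65;            # time: keep an idle client connection open for 65 s

        upstream tomcat {                # "tomcat" is a placeholder name
            server 127.0.0.1:8080;       # placeholder Tomcat address
            keepalive 16;                # count: up to 16 idle connections cached per worker, not a time
        }
    }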

Client, server, and nginx must use keepalive consistently, otherwise it will not take effect.
