For one HTTP request, which side closes the TCP connection first? When does the client close first, and when does the server close first?

Source: Internet
Author: User

We have two internal HTTP services (behind NGINX):

201: this server hosts account.api.91160.com, which is called by front-end pages;

202: this server hosts hdbs.api.91160.com, which is also called by front-end pages.

We recently noticed a large gap in the number of TIME_WAIT sockets between the two servers: server 201 had roughly 20,000+, while server 202 had only about 1,000, a 20x difference. The two servers receive roughly the same volume of requests, all from the internal calls described above, and there is no difference in how the services are run. Why is the connection count so different?

The cause: the two calling modules were written by different teams and use different calling styles. In one, the caller (the client, a PHP program) actively closes the connection; in the other, the callee (the server, 201 or 202) actively closes it. Since TIME_WAIT sockets accumulate on the side that closes first, one server ends up with a high TIME_WAIT count and the other with a low one.
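A quick way to see this imbalance is to count TCP states on each server, e.g. with `ss -tan`. The sketch below (hypothetical addresses, sample output hard-coded for illustration) shows the counting logic in Python; in practice you would pipe real `ss -tan` output through it or use `ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c`.

```python
from collections import Counter

# Sample `ss -tan` output (hypothetical addresses, for illustration only).
sample = """State      Recv-Q Send-Q Local Address:Port  Peer Address:Port
TIME-WAIT  0      0      10.0.0.201:80      10.0.0.50:41234
TIME-WAIT  0      0      10.0.0.201:80      10.0.0.50:41235
ESTAB      0      0      10.0.0.201:80      10.0.0.50:41236
"""

def count_states(ss_output: str) -> Counter:
    """Tally TCP connection states from `ss -tan` output."""
    lines = ss_output.strip().splitlines()[1:]  # skip the header row
    return Counter(line.split()[0] for line in lines)

print(count_states(sample))  # e.g. TIME-WAIT: 2, ESTAB: 1
```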

This raises a detail worth understanding: for one HTTP request, which side closes the TCP connection first? When does the client close first, and when does the server close first?

After some searching, the answer comes down to the differences between HTTP/1.0 and HTTP/1.1 in how they keep connections alive, and to the Connection, Content-Length, and Transfer-Encoding headers.

The following content is reproduced from: http://blog.csdn.net/wangpengqi/article/details/17245349

Nginx supports persistent (keep-alive) connections for both HTTP/1.0 and HTTP/1.1. What is a persistent connection? HTTP runs on top of TCP, so before the client can send a request it must establish a TCP connection with the server, and each TCP connection requires a three-way handshake. If the network between client and server is slow, those round trips cost time, and they also add network traffic; when the connection is torn down there is a four-way close as well, which hurts the user experience.

HTTP is request-response: if we can determine the length of each request body and each response body, we can carry multiple requests over a single connection. That is a persistent connection, but the precondition is that the lengths of the request body and the response body can be determined.

For requests, if the current request carries a body (a POST, for example), Nginx requires the client to specify Content-Length in the request headers to indicate the size of the body; otherwise it returns a 400 error. So the length of the request body is always determined. What about the length of the response body? Let's look at how the HTTP protocol determines it:
1. For HTTP/1.0, if the response headers contain Content-Length, the client knows the body length from it and receives data according to that length; once that many bytes have arrived, the request is complete. If there is no Content-Length header, the client keeps receiving data until the server actively closes the connection, which is the only signal that the body is complete.
2. For HTTP/1.1, if the response header Transfer-Encoding is chunked, the body is streamed in blocks, each block prefixed with its own length, so no total length needs to be specified. If the transfer is not chunked and there is a Content-Length header, the client receives data according to Content-Length. Otherwise (non-chunked and no Content-Length), the client receives data until the server actively closes the connection.
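The chunked framing in rule 2 can be made concrete with a small decoder. This is an illustrative sketch, not a full HTTP parser (it ignores chunk extensions and trailers): each chunk starts with its size in hexadecimal, then CRLF, then the data and a trailing CRLF; a zero-size chunk marks the end of the body.

```python
def decode_chunked(raw: bytes) -> bytes:
    """Decode a Transfer-Encoding: chunked body into its payload."""
    body = b""
    pos = 0
    while True:
        crlf = raw.index(b"\r\n", pos)
        size = int(raw[pos:crlf], 16)    # chunk-size line, hexadecimal
        if size == 0:
            break                        # last chunk: body is complete
        start = crlf + 2                 # data begins after the CRLF
        body += raw[start:start + size]
        pos = start + size + 2           # skip the data and its trailing CRLF
    return body

# "Wiki" (4 bytes) + "pedia" (5 bytes), terminated by a zero-size chunk:
print(decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n").decode())  # Wikipedia
```

Because every chunk announces its own size, the client knows exactly when the body ends without any Content-Length header.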

From the above we can see that, except for HTTP/1.0 without Content-Length and HTTP/1.1 non-chunked without Content-Length, the body length is knowable. In those cases, once the server has finished sending the body, it can consider keeping the connection alive. But using a persistent connection is also conditional: if the Connection header in the client's request is close, the client wants the connection closed; if it is keep-alive, the client wants it kept open; and if the request carries no Connection header, the protocol default applies: close for HTTP/1.0 and keep-alive for HTTP/1.1.

If the result is keep-alive, then after sending the response body Nginx marks the current connection as keep-alive and waits for the client's next request. Of course, Nginx cannot wait forever: if the client never sends more data, the connection would be occupied indefinitely. So when Nginx enters the keep-alive wait it also sets a maximum idle time, configured by the keepalive_timeout directive. If that is set to 0, keep-alive is disabled entirely: regardless of whether the HTTP version is 1.0 or 1.1, and regardless of whether the client's Connection header says close or keep-alive, the connection is forcibly closed.
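The keep-alive decision described above can be sketched as a small helper. The function name and header representation are hypothetical, chosen only to mirror the rules in the text:

```python
def connection_is_persistent(version: str, headers: dict) -> bool:
    """Decide whether a connection should be kept alive, following the
    rules above: an explicit Connection header wins; otherwise the
    protocol default applies (HTTP/1.0 -> close, HTTP/1.1 -> keep-alive).
    `headers` maps lower-cased header names to values."""
    conn = headers.get("connection", "").lower()
    if conn == "close":
        return False
    if conn == "keep-alive":
        return True
    return version == "1.1"  # no header: fall back to the protocol default
```

Note that this models only the header logic; a real server (per the paragraph above) would also force-close when `keepalive_timeout` is 0.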

If the server's final decision is to keep the connection open, the response headers will include a Connection header whose value is "keep-alive"; otherwise its value is "close", and Nginx actively closes the connection after sending the response. Therefore, for an Nginx instance handling a large request volume, turning keep-alive off will produce many sockets in the TIME_WAIT state. Generally speaking, when a client accesses the same server repeatedly, enabling keep-alive pays off handsomely, especially for image servers, since a single web page usually contains many images. Enabling keep-alive also greatly reduces the number of TIME_WAIT sockets.
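For reference, the directive in question might be configured like this (the values are illustrative, not a recommendation):

```nginx
http {
    keepalive_timeout 65s;   # how long an idle keep-alive connection is held open
    # keepalive_timeout 0;   # would disable keep-alive: every response is
    #                        # followed by an active close by nginx, leaving
    #                        # TIME_WAIT sockets on the server side
}
```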

Summary (leaving keep-alive aside):

HTTP/1.0

With Content-Length, the body length is known; the client receives data according to that length, and once done the request is complete. The client actively calls close, starting the four-way teardown.

Without Content-Length, the body length is unknown; the client keeps receiving data until the server actively closes the connection.

HTTP/1.1

With Content-Length, the body length is known, and the client actively closes the connection.

With Transfer-Encoding: chunked, the body is split into blocks, each prefixed with its own length, so no Content-Length is needed, yet the end of the body can still be recognized. The client actively closes the connection.

Without Transfer-Encoding: chunked and without Content-Length, the client receives data until the server actively closes the connection.

In other words: if the client has a way to know the body length, the client closes the connection first; if the length cannot be known, the client keeps receiving data until the server closes the connection first.
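The "server closes first" case can be demonstrated end to end with a pair of sockets. This is a self-contained sketch (addresses, port, and payload are illustrative): the server sends an HTTP/1.0-style response with no Content-Length and then closes, so the client's only way to know the body has ended is reading until EOF.

```python
import socket
import threading

def serve_once(srv: socket.socket) -> None:
    """Accept one connection, send a response WITHOUT Content-Length,
    then actively close: the server disconnects first."""
    conn, _ = srv.accept()
    conn.recv(1024)  # read (and ignore) the request
    conn.sendall(b"HTTP/1.0 200 OK\r\n"
                 b"Connection: close\r\n"
                 b"\r\n"             # note: no Content-Length header
                 b"hello, world")
    conn.close()                     # body ends only at EOF

srv = socket.socket()
srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
data = b""
while True:
    chunk = cli.recv(4096)
    if not chunk:                    # EOF: the server hung up first
        break
    data += chunk
cli.close()

body = data.split(b"\r\n\r\n", 1)[1]
print(body.decode())  # hello, world
```

After this exchange the TIME_WAIT socket is left on the server side, which is exactly the situation the 201/202 servers above ran into.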
