Front-End HTTP Learning: Connection Management

Preface

HTTP connections are the critical channels over which HTTP messages travel. To master HTTP, you need to understand how HTTP connections work and how to use them.

When you want to view a Web page, the browser performs the following steps upon receiving the URL: separate out the server's hostname and port number from the URL, resolve the hostname to an IP address, establish a TCP connection to the Web server, send a request message over that connection, read the response, and finally close the connection.
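
Those steps map almost directly onto the sockets API. Below is a minimal sketch in Python; the hostname example.com, port 80, and path / are placeholder values, not anything taken from the text above.

    import socket

    host, port = "example.com", 80                 # taken from the URL (placeholders)
    ip = socket.gethostbyname(host)                # resolve the hostname to an IP address

    sock = socket.create_connection((ip, port))    # establish the TCP connection
    request = ("GET / HTTP/1.1\r\n"
               f"Host: {host}\r\n"
               "Connection: close\r\n\r\n")
    sock.sendall(request.encode("ascii"))          # send the request message

    response = b""
    while chunk := sock.recv(4096):                # read the response to the end
        response += chunk
    sock.close()                                   # close the connection
    print(response.split(b"\r\n", 1)[0])           # status line, e.g. b'HTTP/1.1 200 OK'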

TCP Concepts

Almost all of the world's HTTP traffic is carried by TCP/IP, a layered, packet-switched protocol suite used by computers and network devices everywhere. A client application can open a TCP/IP connection to a server application that may be running anywhere in the world.

TCP provides a reliable bit-transport pipeline for HTTP. Bytes stuffed into one end of a TCP connection emerge from the other end intact and in their original order. TCP data is sent in small blocks called IP packets (or IP datagrams). HTTPS, the secure version of HTTP, inserts a cryptographic layer (called TLS or SSL) between HTTP and TCP.

When HTTP transmits a message, the contents of the message data flow, in order, as a stream through an open TCP connection. TCP takes the stream of data, chops it up into small chunks called segments, and encapsulates each segment in an IP packet for transmission across the Internet.

A computer may have several TCP connections open at any one time. TCP keeps all of these connections straight through port numbers: a TCP connection is distinguished by the combination of source IP address, source port, destination IP address, and destination port.
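
A short sketch of that bookkeeping, again with example.com as a placeholder: two sockets opened to the same server share the destination address and port, but get different source ports, which is how TCP tells the connections apart.

    import socket

    a = socket.create_connection(("example.com", 80))
    b = socket.create_connection(("example.com", 80))
    print(a.getsockname(), "->", a.getpeername())  # e.g. ('10.0.0.5', 51344) -> ('93.184.216.34', 80)
    print(b.getsockname(), "->", b.getpeername())  # same peer, different source port
    a.close(); b.close()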

TCP Performance

HTTP sits directly on top of TCP, so the performance of HTTP transactions depends heavily on the performance of the underlying TCP channels. Once you understand some of the basic performance characteristics of TCP, you can better appreciate HTTP's connection optimization features and design higher-performance HTTP applications.

HTTP Transaction Delays

The main connection, transmission, and processing delays that an HTTP transaction incurs are described below.

[Note] Transaction processing time may be quite short compared to the time required to set up the TCP connection and transfer the request and response messages. Unless the client or server is overloaded or is processing complex dynamic resources, most HTTP delay is made up of TCP network delay.

There are several major causes of delay in an HTTP transaction:

1. The client first needs to determine the Web server's IP address and port number from the URI. If the hostname in the URI has not been visited recently, it may take tens of seconds for the DNS resolution infrastructure to convert the hostname into an IP address.

[Note] Most HTTP clients keep a small DNS cache holding the IP addresses of recently visited sites. When an IP address is already "cached" locally, the lookup completes immediately. Because most Web browsing goes to a small number of popular sites, hostnames can usually be resolved quickly.

2. Next, the client sends a TCP connection request to the server and waits for the server to send back an acceptance reply. Every new TCP connection incurs connection-setup delay. This is usually no more than a second or two, but when hundreds of HTTP transactions are involved, it adds up quickly.

3. Once the connection is established, the client sends the HTTP request over the newly established TCP pipe. As the data arrives, the Web server reads the request message from the TCP connection and processes the request. It takes time for the request message to travel across the Internet and for the server to process it.

4. The Web server then sends back the HTTP response, which also takes time. The magnitude of these TCP network delays depends on hardware speed, the load on the network and the server, the size of the request and response messages, and the distance between the client and the server. They are also significantly affected by the technical intricacies of the TCP protocol.
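
A rough way to see these components separately is to time each phase yourself. The sketch below (standard library only; example.com and / are placeholders) measures DNS lookup, TCP connect, and transfer time for one transaction. It is illustrative, not a real benchmark.

    import socket, time

    host, path = "example.com", "/"

    t0 = time.perf_counter()
    ip = socket.gethostbyname(host)              # 1. DNS resolution
    t1 = time.perf_counter()
    sock = socket.create_connection((ip, 80))    # 2. TCP connection setup
    t2 = time.perf_counter()
    sock.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                 "Connection: close\r\n\r\n".encode("ascii"))
    while sock.recv(4096):                       # 3 + 4. request out, response back
        pass
    t3 = time.perf_counter()
    sock.close()

    print(f"DNS {t1 - t0:.3f}s  connect {t2 - t1:.3f}s  transfer {t3 - t2:.3f}s")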

TCP Handshake Delays

When a new TCP connection is established, even before any data is sent, the TCP software on either end exchanges a series of IP packets to negotiate the parameters of the connection. If the connection is used only to transfer a small amount of data, this exchange can seriously degrade HTTP performance.

A TCP connection handshake takes the following steps:

1. To request a new TCP connection, the client sends a small TCP packet (usually 40-60 bytes) to the server. The packet has a special SYN flag set, marking it as a connection request.

2. If the server accepts the connection, it computes some connection parameters and sends a TCP packet back to the client, with both the SYN and ACK flags set, indicating that the connection request has been accepted.

3. Finally, the client sends an acknowledgment back to the server, telling it that the connection was established successfully. Modern TCP stacks let the client send data in this acknowledgment packet.

HTTP transactions usually do not exchange much data, and in that case the SYN/SYN+ACK handshake produces a measurable delay. The TCP connection's ACK packet is usually large enough to carry an entire HTTP request message, and many HTTP server response messages fit inside a single IP packet.

The end result is that small HTTP transactions may spend 50% or more of their time on TCP setup. It is therefore worthwhile to take measures that reduce the impact of TCP setup delay.

Delayed Acknowledgments

Because the Internet itself cannot guarantee reliable packet delivery (Internet routers may discard packets arbitrarily when overloaded), TCP implements its own acknowledgment scheme to guarantee successful delivery of data.

Each TCP segment gets a sequence number and a data-integrity checksum. When the receiver gets a segment intact, it sends a small acknowledgment packet back to the sender. If the sender does not receive an acknowledgment within a specified window of time, it concludes that the packet was destroyed or corrupted and resends the data. Because acknowledgments are small, TCP allows them to "piggyback" on outgoing data packets heading in the same direction: by combining returned acknowledgments with outgoing data packets, TCP makes more efficient use of the network. To increase the chance that an acknowledgment can ride along with a data packet, many TCP stacks implement a "delayed acknowledgment" algorithm.

The delayed-acknowledgment algorithm holds outgoing acknowledgments in a buffer for a certain window of time (usually 100-200 ms), looking for an outgoing data packet on which to piggyback them. If no outgoing data packet arrives in that period, the acknowledgment is sent in its own packet.

However, HTTP's bimodal request-response behavior reduces the chances of piggybacking: when you want a packet heading in the reverse direction, there often isn't one. In practice, the delayed-acknowledgment algorithm frequently introduces significant delays. Depending on the operating system you are using, you may be able to adjust or disable the delayed-acknowledgment algorithm.
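
On Linux, one such knob is the TCP_QUICKACK socket option, which asks the kernel to acknowledge immediately rather than delaying. This is a hedged, platform-specific sketch (example.com is a placeholder): the option does not exist on all systems, and the kernel may clear it again after some reads, so real code re-arms it as needed.

    import socket

    sock = socket.create_connection(("example.com", 80))
    if hasattr(socket, "TCP_QUICKACK"):            # present on Linux only
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)  # skip delayed ACKs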

TCP Slow Start

The performance of TCP data transfer also depends on the age of the TCP connection. TCP connections "tune" themselves over time, initially limiting the maximum speed of the connection and increasing the speed over time as data is transmitted successfully. This tuning is called TCP slow start, and it is used to prevent sudden overloading and congestion of the Internet.

TCP slow start limits the number of packets a TCP endpoint can have in flight at any one time. Put simply, each time a packet is received successfully, the sender gets permission to send two more. If an HTTP transaction has a large amount of data to send, it cannot send all the packets at once: it must send one packet and wait for an acknowledgment; then it can send two packets, each of which must be acknowledged, after which it can send four packets, and so on. This is called "opening the congestion window".

Because of this congestion-control feature, new connections are slower than "tuned" connections that have already exchanged a modest amount of data. Because tuned connections are faster, HTTP includes facilities for reusing existing connections, such as the HTTP "persistent connections" described later.
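
The doubling behavior is easy to see with a toy model. The sketch below assumes an idealized window that doubles every round trip and a response of 64 segments (an invented figure); real congestion control is far more elaborate.

    cwnd = 1                       # congestion window, in segments
    sent, round_trips = 0, 0
    while sent < 64:               # 64-segment response (assumed, for illustration)
        sent += cwnd               # send a window's worth of segments
        round_trips += 1           # ...then wait one round trip for the ACKs
        cwnd *= 2                  # each acknowledged segment grants two more sends
    print(round_trips, "round trips")   # 7 on a cold connection, versus 1 if cwnd were large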

Nagle algorithm

TCP has a data-stream interface that lets applications stream data of any size into the TCP stack, even a single byte at a time. But because every TCP segment carries at least 40 bytes of flags and headers, network performance degrades severely if TCP sends large numbers of packets containing small amounts of data.

[Note] Sending a storm of single-byte packets is called "send-side silly window syndrome". This behavior is inefficient, antisocial, and can be disruptive to other Internet traffic.

The Nagle algorithm (named for its creator, John Nagle) attempts to bundle up a large amount of TCP data before sending a packet, to improve network efficiency. The algorithm encourages the sending of full-size segments (the maximum-size packet is around 1,500 bytes on a LAN, or a few hundred bytes across the Internet). It allows a non-full-size packet to be sent only if all other packets have been acknowledged; if other packets are still in flight, the partial data is buffered. The buffered data is sent only when pending packets are acknowledged or when the buffer accumulates enough data to send a full-size packet.

The Nagle algorithm causes several HTTP performance problems. First, small HTTP messages may not fill a packet, so they may be delayed waiting for additional data that will never arrive. Second, the Nagle algorithm interacts poorly with delayed acknowledgments: it holds up the sending of data until an acknowledgment arrives, but the acknowledgment itself will be delayed 100-200 ms by the delayed-acknowledgment algorithm.

HTTP applications often disable the Nagle algorithm by setting the TCP_NODELAY parameter on their stacks, to improve performance. If you do this, make sure you write large chunks of data to TCP so you don't create a flurry of small packets yourself.
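
In the sockets API this is the TCP_NODELAY option. A minimal sketch (example.com is a placeholder; note the single large sendall(), per the advice above):

    import socket

    sock = socket.create_connection(("example.com", 80))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # disable the Nagle algorithm
    request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    sock.sendall(request)   # one big write, not many tiny ones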

TIME_WAIT Accumulation and Port Exhaustion

TIME_WAIT port exhaustion is a serious performance problem that affects performance benchmarks but is relatively rare in the real world. Most people running performance benchmarks eventually run into it and see unexpectedly poor performance, so it deserves special attention.

When a TCP endpoint closes a TCP connection, it keeps a small control block in memory recording the IP addresses and port numbers of the recently closed connection. This information is kept only for a short time, usually around twice the estimated maximum segment lifetime (called 2MSL, often two minutes), to make sure a new connection with the same addresses and port numbers is not created during that time. In effect, this algorithm prevents two connections with identical IP addresses and port numbers from being created, closed, and re-created within two minutes.

Today's higher-speed routers make it extremely unlikely that duplicate packets will still be around minutes after a connection closes. Some operating systems set 2MSL to a smaller value, but be cautious about going below it: packets really do get duplicated, and TCP data can be corrupted if a duplicate packet from a previous connection is inserted into a new TCP stream with the same connection values.

The 2MSL connection-close delay is not usually a problem, but in a performance-benchmark environment it can be. When you run a benchmark, only one or a few computers generate the traffic hitting the system, which limits the number of distinct client IP addresses connecting to the server. Furthermore, the server typically listens on HTTP's default TCP port, 80.

These conditions limit the combinations of connection values available while TIME_WAIT prevents port numbers from being reused. Each time the client connects to the server, it gets a new source port so the connection will be unique, but because the number of available source ports is limited (say, 60,000) and no connection can be reused for 2MSL seconds (say, 120 seconds), the connection rate is limited to 60,000/120 = 500 transactions per second. If you keep the server's connection rate no higher than about 500 transactions per second, you can be sure you will not run into TIME_WAIT port exhaustion. To fix this problem, you can add more client load-generating machines, or make sure the clients and servers rotate through several virtual IP addresses to add more connection combinations.
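
The arithmetic, using the example figures above:

    ephemeral_ports = 60_000   # usable source ports on one client IP (example figure)
    two_msl = 120              # seconds a closed connection stays in TIME_WAIT (example figure)
    print(ephemeral_ports / two_msl,
          "new connections per second, per client/server IP pair")   # 500.0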

Even if you do not suffer port exhaustion, be careful about having large numbers of connections open, or large numbers of control blocks allocated for connections in TIME_WAIT. Some operating systems slow down dramatically when there are numerous open connections or control blocks.

Serial connections

HTTP permits a chain of HTTP intermediaries (proxies, caches, etc.) between the client and the ultimate origin server. HTTP messages are forwarded hop by hop from the client, through the intermediary devices, to the origin server (or in the reverse direction).

In some cases, two adjacent HTTP applications want to apply a set of options to the connection they share. The HTTP Connection header field carries a comma-separated list of connection tokens that specify options for this connection that should not propagate to other connections. For example, Connection: close indicates a connection that must be closed after the next message is sent.

The Connection header can carry three different types of tokens: 1. HTTP header field names, listing headers that are relevant only to this connection; 2. arbitrary token values, describing nonstandard options for this connection; 3. the value close, indicating that the persistent connection should be closed once the current transaction completes.

If a connection token contains the name of an HTTP header field, that field carries connection-specific information and must not be forwarded. All header fields listed in the Connection header must be deleted before the message is forwarded. Because the Connection header prevents unintentional forwarding of these local headers, placing a header name in the Connection header is known as "protecting the header".

When an HTTP application receives a message with a Connection header, the receiver parses and applies all the options requested by the sender. It then deletes the Connection header, and all of the headers listed in it, before forwarding the message to the next hop. In addition, there are a few hop-by-hop headers that might not be listed as Connection header values but still must not be forwarded by proxies. These include Proxy-Authenticate, Proxy-Connection, Transfer-Encoding, and Upgrade.
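
A hedged sketch of that forwarding rule, operating on a plain dictionary of headers (all header names here are illustrative; a real proxy would work on parsed messages and handle repeated headers):

    HOP_BY_HOP = {"connection", "keep-alive", "proxy-authenticate",
                  "proxy-connection", "transfer-encoding", "upgrade"}

    def strip_hop_by_hop(headers):
        """Drop the Connection header, everything it names, and known hop-by-hop headers."""
        named = {t.strip().lower()
                 for t in headers.get("Connection", "").split(",") if t.strip()}
        drop = HOP_BY_HOP | named
        return {k: v for k, v in headers.items() if k.lower() not in drop}

    msg = {"Host": "example.com", "Connection": "keep-alive, x-local-opt",
           "Keep-Alive": "timeout=120", "X-Local-Opt": "1"}
    print(strip_hop_by_hop(msg))      # only {'Host': 'example.com'} survives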

If connections are managed only in the simplest way, TCP performance delays can compound. For example, suppose you have a Web page with three embedded images. The browser needs to issue four HTTP transactions to display this page: one for the top-level HTML page and three for the embedded images. If each transaction requires a new connection (serially), the connection delays and slow-start delays add up.

Besides the actual delays introduced by serial loading, there is a psychological cost: while one image is loading, nothing else on the page is happening, which feels very slow. Users prefer multiple images to load at the same time.

Another disadvantage of serial loading is that some browsers do not know an object's size until the object is loaded, and they may need that size information to decide where to place the object on the screen, so nothing can be displayed until enough objects have loaded. In this situation the browser may be making fine progress loading objects serially, but the user sees a blank white screen, unaware of any progress.

Several existing and emerging techniques improve HTTP connection performance: 1. parallel connections, which issue concurrent HTTP requests over multiple TCP connections; 2. persistent connections, which reuse TCP connections to eliminate connect/close delays; 3. pipelined connections, which issue concurrent HTTP requests over a shared TCP connection; 4. multiplexed connections, which interleave chunks of requests and responses (still experimental). The following sections describe each in turn.

Parallel connections

A browser could request the original HTML page, then the first embedded object, then the second, and so on, processing each embedded object serially in this naive way. But that is simply too slow.

HTTP allows clients to open multiple connections and perform multiple HTTP transactions in parallel. In this scheme, the embedded images are loaded in parallel, with each transaction getting its own TCP connection.

A composite page consisting of embedded objects may load faster over parallel connections if they overcome the dead time and bandwidth limits of a single connection. The delays can overlap, and if a single connection does not saturate the client's network bandwidth, the unused bandwidth can be allocated to loading other objects.

In this arrangement, the enclosing HTML page is loaded first, and the remaining three transactions are then processed in parallel, each with its own connection. The images load in parallel, and the connection delays overlap.

[Note] Because of software overhead, there is always a small delay between each connection request, but the connection requests and transfer times largely overlap.

Even though parallel connections may be faster, they are not always faster. When the client's network bandwidth is scarce (for example, a browser connected to the Internet over a 28.8 kbps modem), most of the time may be spent simply transferring data. In this situation, a single HTTP transaction to a fast server could easily consume all of the available modem bandwidth. If multiple objects are loaded in parallel, each object competes for this limited bandwidth, so each loads proportionally slower, yielding little or no performance improvement.

[Note] In fact, multiple connections incur some extra overhead, and the total time to load an entire page over parallel connections may well be longer than a serial download.

Also, opening large numbers of connections consumes a lot of memory and can cause performance problems of its own. Complex Web pages may have dozens or hundreds of embedded objects. Clients might be able to open hundreds of connections, but a Web server is typically handling requests from many other users at the same time, so few Web servers want that to happen. If a hundred users each simultaneously open a hundred connections, the server is responsible for handling 10,000 connections. This can cause serious server slowdown. The same is true for high-load proxies.

In practice, browsers do use parallel connections, but they limit the total number of parallel connections to a small value (often four). Servers are free to close excessive connections from a particular client.
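
The sketch below mimics that browser behavior with a thread pool capped at four workers, so at most four connections are in flight at once. The URLs are placeholders.

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    urls = [f"http://example.com/img{i}.png" for i in range(1, 4)]   # placeholder objects

    def fetch(url):
        with urlopen(url) as resp:          # each worker drives its own connection
            return resp.read()

    with ThreadPoolExecutor(max_workers=4) as pool:   # browser-style parallelism cap
        bodies = list(pool.map(fetch, urls))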

Parallel connections do not always make pages load faster. But even when they do not speed up the transfer, parallel connections often make the page feel faster, because the user can watch progress as multiple component objects appear on screen at the same time. If there is activity all over the screen, people perceive the Web page as loading faster, even if a stopwatch shows the whole page actually took longer to download.

Persistent connections

Web clients often open connections to the same site. For example, most of the embedded images on a Web page usually come from the same Web site, and a significant fraction of hyperlinks to other objects usually point to the same site. Thus, an application that initiates an HTTP request to a server is likely to make more requests to that server in the near future (to fetch the embedded images, for instance). This property is called site locality.

For this reason, HTTP/1.1 (and enhanced versions of HTTP/1.0) allow HTTP devices to keep TCP connections open after transactions complete, so the existing connections can be reused for future HTTP requests. TCP connections that stay open after the transaction ends are called persistent connections. Nonpersistent connections are closed after each transaction; persistent connections remain open across multiple transactions, until either the client or the server decides to close them.

By reusing an idle persistent connection that is already open to the target server, you can avoid the slow connection-establishment phase. In addition, the already-open connection avoids the slow-start congestion-adaptation phase, allowing faster data transfer.
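
The standard library's http.client illustrates the idea: one HTTPConnection object holds one TCP connection, and successive transactions reuse it, skipping both the handshake and the cold slow-start phase. Host and paths are placeholders; note each response body must be drained before the connection can be reused.

    from http.client import HTTPConnection

    conn = HTTPConnection("example.com", 80)   # one TCP connection...
    for path in ("/", "/logo.png"):            # ...reused across transactions
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()                            # drain the body before the next request
        print(path, resp.status)
    conn.close()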

Parallel connections can speed up the transfer of composite pages, but they have disadvantages: 1. each transaction opens/closes a new connection, costing time and bandwidth; 2. each new connection has reduced performance because of TCP slow start; 3. there is a practical limit on the number of parallel connections that can be opened.

Persistent connections offer some advantages over parallel connections. They reduce the delay and overhead of connection setup, keep the connections in a tuned state, and reduce the potential number of open connections. However, persistent connections must be managed with care, or you will accumulate a large number of idle connections, consuming local resources and resources on remote clients and servers.

Persistent connections can be most effective when used in conjunction with parallel connections. Today, many Web applications open a small number of parallel connections, each of which is persistent. There are two types of persistent connections: the older HTTP/1.0+ "keep-alive" connections and the modern HTTP/1.1 "persistent" connections.

Keep-alive

Beginning around 1996, many HTTP/1.0 browsers and servers were extended to support an early, experimental type of persistent connection called a keep-alive connection. These early persistent connections suffered from some interoperability design problems that were rectified in later revisions of HTTP/1.1, but many clients and servers still use these earlier keep-alive connections.

Comparing the timeline for four HTTP transactions over serial connections against the timeline for the same transactions over a single persistent connection shows how the time is reduced by eliminating the per-transaction connect and close overhead.

Keep-alive is deprecated and is no longer documented in the current HTTP/1.1 specification, but keep-alive handshaking is still in fairly widespread use by browsers and servers.

An HTTP/1.0 keep-alive client can keep a connection open by including a Connection: keep-alive request header. If the server is willing to keep the connection open for the next request, it responds with the same header. If there is no Connection: keep-alive header in the response, the client assumes the server does not support keep-alive and that the connection will be closed when the response message is sent back.

Note that the keep-alive header only requests that the connection be kept alive. Even after a keep-alive request is made, the client and server are not required to agree to a keep-alive session. They can close idle keep-alive connections at any time, and they are free to limit the number of transactions processed on a keep-alive connection.
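
A hedged sketch of that handshake over a raw socket (raw because http.client speaks HTTP/1.1; example.com is a placeholder). It sends the Connection: keep-alive request header, then checks whether the server echoed it back.

    import socket

    sock = socket.create_connection(("example.com", 80))
    sock.sendall(b"GET / HTTP/1.0\r\n"
                 b"Host: example.com\r\n"
                 b"Connection: keep-alive\r\n\r\n")
    data = b""
    while b"\r\n\r\n" not in data:        # read up to the end of the response headers
        chunk = sock.recv(4096)
        if not chunk:                     # connection closed early
            break
        data += chunk
    headers = data.split(b"\r\n\r\n")[0].decode("latin-1").lower()
    if "connection: keep-alive" in headers:
        print("server agreed to keep the connection open")
    else:
        print("server will close the connection after this response")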

Keep-alive settings

You can adjust the behavior of keep-alive with comma-separated options specified in the Keep-Alive general header:

The timeout parameter is sent in a Keep-Alive response header. It estimates how long the server is likely to keep the connection alive.

The max parameter is sent in a Keep-Alive response header. It estimates for how many more transactions the server is likely to keep the connection alive.

The Keep-Alive header also supports arbitrary unprocessed attributes, primarily for diagnostic and debugging purposes. The syntax is name [= value].

The Keep-Alive header is completely optional, but it is permitted only when Connection: keep-alive also is present.

Here is an example of a Keep-Alive response header, indicating that the server will keep the connection open for at most five more transactions, or until it has sat idle for two minutes:

    Connection: keep-alive
    Keep-Alive: max=5, timeout=120

There are some restrictions and clarifications regarding the use of keep-alive connections:

Keep-alive does not happen by default in HTTP/1.0. The client must send a Connection: keep-alive request header to activate keep-alive connections.

The Connection: keep-alive header must be sent with every message that wants to continue the persistence. If the client does not send a Connection: keep-alive header, the server will close the connection after that request.

By checking for a Connection: keep-alive response header, the client can tell whether the server will close the connection after issuing the response.

The connection can be kept open only if the length of the message's entity body can be determined without sensing a connection close. This means the entity body must have a correct Content-Length, have a multipart media type, or be encoded with the chunked transfer encoding. Sending the wrong Content-Length back on a keep-alive channel is bad, because the other end of the transaction cannot accurately detect where one message ends and the next begins.

Proxies and gateways must enforce the rules of the Connection header: the proxy or gateway must remove any header fields named in the Connection header, along with the Connection header itself, before forwarding or caching the message.

Strictly speaking, you should not establish keep-alive connections with a proxy server that isn't guaranteed to support the Connection header, to prevent the dumb-proxy problem described below. This is not always possible in practice.

Technically, any Connection header fields (including Connection: keep-alive) received from an HTTP/1.0 device should be ignored, because they may have been forwarded mistakenly by an older proxy server. In practice, some clients and servers bend this rule, although they risk hanging on older proxies.

The client must be prepared to retry requests if the connection closes before it receives the entire response, unless repeating the request could have side effects.

Dumb Proxies

A Web client's Connection: keep-alive header is intended to affect only the single TCP link leaving the client, which is why it is called a "connection" header. If the client is talking to a Web server, the client can send a Connection: keep-alive header to tell the server it wants to keep the connection alive. If the server supports keep-alive, it echoes back a Connection: keep-alive header; otherwise, it does not.

The problem comes with proxies, particularly proxies that do not understand the Connection header and do not know that they need to remove the header before forwarding it down the chain. Many older or simple proxies are blind relays, which simply forward bytes from one connection to another without specially processing the Connection header.

Imagine a Web client talking to a Web server through a dumb proxy that is acting as a blind relay. Here is what happens in this situation:

The Web client sends a message to the proxy that includes the Connection: keep-alive header, requesting a keep-alive connection if possible. The client then waits for a response to learn whether the other side has honored its request for a keep-alive channel.

The dumb proxy receives the HTTP request, but it does not understand the Connection header (it just treats it as an extension header). The proxy has no idea what keep-alive means, so it simply passes the message verbatim down the forwarding chain to the server. But the Connection header is a hop-by-hop header: it applies to only a single transport link and should not travel down the chain. This is where things start to go very wrong.

The relayed HTTP request arrives at the Web server. When the Web server receives the proxied Connection: keep-alive header, it mistakenly concludes that the proxy (which, to the server, looks like any other client) wants a keep-alive conversation. The Web server agrees, echoes back a Connection: keep-alive response header, and from then on follows the rules of keep-alive. So at this point the Web server thinks it is in a keep-alive conversation with the proxy, while the proxy knows nothing about keep-alive.

The dumb proxy relays the Web server's response message back to the client, passing along the Connection: keep-alive header from the Web server. When the client sees this header, it assumes the proxy has agreed to a keep-alive conversation. So now both the client and the server believe they are speaking keep-alive, while the proxy between them knows nothing about it.

Because the proxy knows nothing about keep-alive, it relays all the data it receives back to the client and then waits for the origin server to close the connection. But the origin server believes the proxy has explicitly asked it to keep the connection open, so it will not close the connection. The proxy therefore hangs, waiting for a close that will never come.

When the client receives the response message, it moves right along to its next request, sending another request to the proxy over the keep-alive connection. The proxy, not expecting another request on the same connection, ignores the request, and the browser just spins, making no progress.

This miscommunication leaves the browser hanging until the client or server times out the connection and closes it.

To avoid this kind of proxy miscommunication, modern proxies must never forward the Connection header or any headers whose names appear inside the Connection header values. So if a proxy receives a Connection: keep-alive header, it should forward neither the Connection header nor any header named Keep-Alive.

In addition, there are a few hop-by-hop headers that might not be listed as values of a Connection header but must not be proxied or served as a cached response either. These include Proxy-Authenticate, Proxy-Connection, Transfer-Encoding, and Upgrade.

Proxy-Connection

Netscape's browser and proxy implementors proposed a clever workaround to the blind-relay problem that does not require all Web applications to support a higher version of HTTP. The workaround introduces a new header called Proxy-Connection, which solves the problem of a single blind relay interposed directly after the client, but not all other situations. Modern browsers implement Proxy-Connection when a proxy is explicitly configured, and many proxies understand it.

The problem is that dumb proxies blindly forward hop-by-hop headers such as Connection: keep-alive, causing trouble. A hop-by-hop header relates to only a single, particular connection and must not be forwarded. Problems occur when a downstream server misinterprets a forwarded header as a request from the proxy itself and uses it to control its own connection.

In Netscape's workaround, the browser sends the nonstandard Proxy-Connection extension header to the proxy, instead of the officially supported Connection header. If the proxy is a blind relay, it relays the meaningless Proxy-Connection header to the Web server, which harmlessly ignores the header, causing no problem. But if the proxy is a smart proxy (capable of understanding the persistent-connection handshake), it replaces the meaningless Proxy-Connection header with a Connection header and sends that to the server, producing the desired effect.

This scheme works when there is only one proxy between the client and the server. But if there is a smart proxy on either side of a dumb proxy, the problem rears its ugly head again.

Furthermore, it is now common for "invisible" proxies to appear in networks, as firewalls, intercepting caches, or reverse-proxy accelerators. Because these devices are invisible to the browser, the browser will not send them Proxy-Connection headers. It is critical that transparent Web applications implement persistent connections correctly.

HTTP/1.1 persistent connections

HTTP/1.1 phased out support for keep-alive connections, replacing them with an improved design called persistent connections. Persistent connections serve the same purpose as keep-alive connections, but with better mechanisms.

Unlike HTTP/1.0+ keep-alive connections, HTTP/1.1 persistent connections are active by default: unless indicated otherwise, HTTP/1.1 assumes all connections are persistent. To close a connection after the transaction ends, an HTTP/1.1 application must explicitly add a Connection: close header to the message. This is an important difference from earlier versions of the HTTP protocol, where keep-alive connections were either optional or not supported at all.

An HTTP/1.1 client assumes an HTTP/1.1 connection will remain open after a response, unless the response contains a Connection: close header. However, clients and servers can still close idle connections at any time. Not sending Connection: close does not mean the server commits to keeping the connection open forever.

There are several restrictions and clarifications regarding the use of persistent connections:

After sending a Connection: close request header, the client cannot send any more requests on that connection.

If a client does not want to send another request on the connection, it should send a Connection: close request header in the final request, as shown in the sketch after this list.

The connection can persist only if all messages on it have a correct, self-defined message length, that is, the entity body's length must be consistent with the Content-Length, or the body must be encoded with the chunked transfer encoding.

HTTP/1.1 proxies must manage persistent connections separately with clients and servers; each persistent connection applies to a single hop.

Because older proxies forward Connection headers, HTTP/1.1 proxy servers should not establish persistent connections with HTTP/1.0 clients unless they know something about the capabilities of the client. In practice, this is difficult, and many vendors violate this principle.

Regardless of the values of Connection headers, HTTP/1.1 devices may close the connection at any time, though servers should try not to close the connection in the middle of transmitting a message and should always respond to at least one request before closing.

HTTP/1.1 applications must be able to recover from asynchronous closes. The client should retry a request if the connection closes before it receives the entire response, unless repeating the request would have side effects that could accumulate.

A single user client should maintain at most two persistent connections to any server or proxy, to prevent the server from being overloaded. Because a proxy may need more connections to a server to support concurrent user traffic, if there are N users trying to access the server, the proxy should maintain at most roughly 2N connections to any server or parent proxy.
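
A sketch of two of the rules above, using http.client with a placeholder host and paths: the connection is reused across requests, and the last request announces the close with Connection: close.

    from http.client import HTTPConnection

    conn = HTTPConnection("example.com", 80)       # persistent by default in HTTP/1.1
    conn.request("GET", "/first")
    conn.getresponse().read()                      # connection stays open for reuse
    conn.request("GET", "/last",
                 headers={"Connection": "close"})  # mark this as the final request
    conn.getresponse().read()                      # server closes after this response
    conn.close()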

Pipelined connections

HTTP/1.1 permits optional request pipelining over persistent connections. This is a further performance optimization over keep-alive connections. Multiple requests can be enqueued before the responses arrive. While the first request is streaming across the network to the server, the second and third requests can get underway. Under high-latency network conditions, this can reduce network round trips and improve performance.

There are several restrictions on pipelined connections:

HTTP clients should not pipeline until they can confirm that the connection is persistent.

HTTP responses must be returned in the same order as the requests. HTTP messages carry no sequence-number labels, so if responses were received out of order, there would be no way to match them up with their requests.

HTTP clients must be prepared for the connection to close at any time and be prepared to re-send any pipelined requests that did not complete. If a client opens a persistent connection and immediately issues 10 requests, the server may close the connection after processing, say, only 5 of them. The remaining 5 requests will fail, and the client must be able to cope with these premature closes and re-issue the requests.

HTTP clients should not pipeline requests that have side effects (such as POSTs). In general, when something goes wrong, pipelining prevents the client from knowing which of a series of pipelined requests the server actually executed. Because non-idempotent requests cannot safely be retried (idempotent means that issuing the request once or many times returns the same result), there is a risk that some methods will never be executed when an error occurs. A sketch follows.
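
Below is a hedged sketch of pipelining two idempotent GETs over one raw socket: both requests are written before any response is read, and the responses come back in request order. Host and paths are placeholders, and many modern servers and intermediaries refuse pipelining, so treat this purely as an illustration.

    import socket

    sock = socket.create_connection(("example.com", 80))
    sock.sendall(b"GET /a HTTP/1.1\r\nHost: example.com\r\n\r\n"
                 b"GET /b HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    data = b""
    while chunk := sock.recv(4096):    # both responses arrive back to back, in order
        data += chunk
    sock.close()
    print(data.count(b"HTTP/1.1 "))    # expect 2 status lines (bodies may add false hits)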

Closing connections

Any HTTP client, server, or proxy can close a TCP transport connection at any time. Connections are normally closed at the end of a message, but during error conditions, the connection may be closed in the middle of a header line or in other strange places.

This situation is common with pipelined persistent connections. HTTP applications are free to close persistent connections after any period of time. For example, after a persistent connection has been idle for a while, the server may decide to shut it down.

However, the server can never know for sure that the client on the other end has no data to send at the instant it closes an "idle" connection. If that happens, the client discovers a connection error in the middle of writing its request message.

Content-Length

Each HTTP response should have an accurate Content-Length header describing the size of the response body. Some older HTTP servers omit the Content-Length header or include an erroneous length, depending on a server connection close to signify the actual end of the data.

When a client or proxy receives an HTTP response terminating in a connection close, and the actual length of the transferred entity does not match the Content-Length (or there is no Content-Length), the receiver should question the correctness of the length.

If the receiver is a caching proxy, it should not cache the response, to reduce the risk of mixing potentially erroneous messages into future responses. The proxy should forward the questionable message intact, rather than attempting to "correct" the Content-Length, in order to maintain semantic transparency.
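
The receiver-side check amounts to comparing the advertised and actual lengths. A small sketch (the helper name is invented for illustration):

    def length_is_trustworthy(headers, body):
        """True only if Content-Length is present and matches the bytes received."""
        declared = headers.get("Content-Length")
        if declared is None:
            return False           # length was only implied by the connection closing
        return int(declared) == len(body)

    print(length_is_trustworthy({"Content-Length": "5"}, b"hello"))    # True
    print(length_is_trustworthy({"Content-Length": "99"}, b"hello"))   # False: question it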

Idempotency

Connections can close at any time, even in non-error conditions. HTTP applications have to be prepared to handle unexpected closes properly. If a transport connection closes while the client is performing a transaction, the client should reopen the connection and retry, unless the transaction has side effects.

The situation is worse for pipelined connections. A client can enqueue a large number of requests, but the origin server can close the connection, leaving numerous requests unprocessed and in need of being rescheduled.

Side effects are an important issue. If the connection closes after some request data was sent but before the response is returned, the client cannot be sure exactly how many transactions were actually invoked on the server. Some transactions, such as GETting a static HTML page, can be repeated again and again without changing anything. Others, such as POSTing an order to an online bookstore, must not be repeated, or you risk placing multiple orders.

A transaction is idempotent if it yields the same result regardless of whether it is executed once or many times. Implementers can assume that the GET, HEAD, PUT, DELETE, TRACE, and OPTIONS methods share this property. Clients should not pipeline non-idempotent requests (such as POSTs); otherwise, premature termination of the transport connection could lead to indeterminate consequences. If you want to send a non-idempotent request, you should wait for the response status of the previous request.

Although user agents may offer a way for a human operator to choose to retry the request, they must not automatically retry a non-idempotent method or sequence. For example, most browsers pop up a dialog box when reloading a cached POST response, asking whether the user wants to post the transaction again.
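
A retry policy that follows these rules can be as simple as a method whitelist. A sketch (the helper name is invented for illustration):

    IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "TRACE", "OPTIONS"}

    def may_auto_retry(method):
        """Only idempotent methods may be retried without asking the user."""
        return method.upper() in IDEMPOTENT_METHODS

    print(may_auto_retry("GET"))    # True  - safe to resend after an unexpected close
    print(may_auto_retry("POST"))   # False - prompt the user, as browsers do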

Graceful connection close

TCP connections are bidirectional. Each side of a TCP connection has an input queue and an output queue for data being read or written. Data placed in the output queue of one side will eventually show up in the input queue of the other side.

1. Full and half closes

An application can close either or both of the TCP input and output channels. A close() sockets call closes both the input and output channels of a TCP connection. This is called a "full close". You can instead use the shutdown() sockets call to close the input or output channel individually. This is called a "half close".

2. TCP close and reset errors

Simple HTTP applications can use only full closes. But when an application starts talking to many other types of HTTP clients, servers, and proxies, and starts using pipelined persistent connections, it becomes important to use half closes to prevent peers from receiving unexpected write errors.

In general, closing the output channel of your connection is always safe. The peer on the other side of the connection is notified, after it reads all the data from its buffer, that the stream has ended, so it knows you have closed the connection.

Closing the input channel of your connection is riskier, unless you know the other side is not planning to send any more data. If the other side sends data to your closed input channel, the operating system will issue a TCP "connection reset by peer" message back to the machine on the other end. Most operating systems treat this as a serious error and delete any buffered data the other side has not yet read. This is very bad for pipelined connections.

For example, say you have sent 10 pipelined requests on a persistent connection, and the responses have already arrived and are sitting in your operating system's buffer, but your application has not yet read them. Now say you send an 11th request, but the server decides you have used this connection long enough and closes it. Your 11th request arrives at a closed connection, and a reset is sent back; the reset empties your input buffer.

When you finally go to read the data, you will get a connection-reset-by-peer error, and the buffered, unread response data will be lost, even though most of it successfully arrived at your machine.

3. Graceful close

The HTTP specification counsels that when a client or server wants to close a connection unexpectedly, it should "issue a graceful close on the transport connection", but it does not describe how to do that.

In general, an application implementing a graceful close should first close its output channel and then wait for the peer on the other side of the connection to close its output channel. When both sides have told each other they will not be sending any more data, the connection can be closed fully, with no risk of a reset.

Unfortunately, there is no guarantee that the peer implements or checks for half closes. For this reason, an application wanting to close its connection gracefully should half-close its output channel and then periodically check the status of its input channel (looking for data, or for the end of the stream). If the input channel is not closed by the peer within some timeout period, the application may force the connection closed to save resources.
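
Putting the pieces together, a graceful close looks roughly like the sketch below: half-close the output with shutdown(), drain the input until end-of-stream or a timeout, then fully close. The variable sock is assumed to be an already-connected socket.

    import socket

    def graceful_close(sock, timeout=5.0):
        sock.shutdown(socket.SHUT_WR)      # half close: "I will send no more data"
        sock.settimeout(timeout)
        try:
            while sock.recv(4096):         # drain until the peer half-closes too
                pass                       # recv() returning b'' means end of stream
        except socket.timeout:
            pass                           # peer never closed; give up after the timeout
        sock.close()                       # now a full close carries no reset risk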
