HTTP long connections and short connections

How HTTP implements long connections

HTTP is stateless.
That is, the browser and the server establish a connection for each HTTP operation, and the connection is closed when that task ends. If the browser accesses an HTML page (or other kind of Web page) that contains other Web resources, such as JavaScript files, images, and CSS files, a separate HTTP session is established each time the browser encounters such a resource.

The biggest difference between HTTP/1.1 and HTTP/1.0 is the addition of persistent connection support (later HTTP/1.0 implementations also recognize keep-alive), but the protocol remains stateless and the connection cannot be relied upon.

If the browser or the server adds this line to its headers:

Connection: keep-alive

The TCP connection then remains open after the response is sent, so the browser can continue to send requests over the same connection. Keeping the connection open saves the time it takes to establish a new connection for each request, and saves bandwidth.

This is how long connections between the client and the server are implemented.

If the Web server sees the value "keep-alive" here, or sees that the request uses HTTP 1.1 (which defaults to persistent connections), it can take advantage of a persistent connection. When a page contains multiple elements (such as applets or images), this significantly reduces download time. To make this work, the Web server needs to send a Content-Length header (the length of the message body) back to the client; the simplest way to do that is to write the content to a ByteArrayOutputStream first and compute its size before the content is actually written out.
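
A minimal servlet sketch of that buffering technique (assuming the standard javax.servlet API; the page content is just a placeholder):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class PageServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse res)
                throws IOException {
            // Buffer the whole page first so its size is known.
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            Writer writer = new OutputStreamWriter(buffer, "UTF-8");
            writer.write("<html><body>Hello, keep-alive</body></html>");
            writer.flush();

            // Content-Length tells the client where the body ends,
            // so the same connection can be reused for the next request.
            res.setContentType("text/html; charset=UTF-8");
            res.setContentLength(buffer.size());
            buffer.writeTo(res.getOutputStream());
        }
    }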

Whichever of the client browser (e.g. Internet Explorer) or the Web server has the lower keep-alive timeout becomes the limiting factor. For example, if the client's timeout is two minutes and the Web server's timeout is one minute, the effective maximum is one minute. Either the client or the server can be the limiting factor.

Add Connection: keep-alive to the header
Adding this header to both the HTTP request and the response maintains a long connection.
Applications that encapsulate their message data in the HTTP body can then reuse the connection, which is simple and convenient to use.

HTTP keep-alive seems to be massively misunderstood. Here's a short description of how it works, under both 1.0 and 1.1.

HTTP/1.0

Under HTTP 1.0, there is no official specification for how keep-alive operates. It is, in essence, tacked on to an existing protocol. If the browser supports keep-alive, it adds an additional header to the request:

Connection: keep-alive

Then, when the server receives this request and generates a response, it also adds a header to the response:

Connection: keep-alive

Following this, the connection is not dropped, but is instead kept open. When the client sends another request, it uses the same connection. This continues until either the client or the server decides that the conversation is over, and one of them drops the connection.
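
As a concrete sketch of that exchange, the following uses a raw java.net.Socket against a placeholder host (example.com) and sends two HTTP/1.0 requests over the same connection; whether the second one succeeds depends on the server actually honouring keep-alive and sending a Content-Length:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.net.Socket;

    public class KeepAliveDemo {

        // Send one HTTP/1.0 request and consume the whole response, relying on
        // Content-Length to know where the body ends (required for keep-alive).
        static void request(Writer out, BufferedReader in, String host) throws Exception {
            out.write("GET / HTTP/1.0\r\nHost: " + host + "\r\nConnection: keep-alive\r\n\r\n");
            out.flush();

            int contentLength = 0;
            String line;
            while ((line = in.readLine()) != null && !line.isEmpty()) {
                System.out.println(line);                  // status line and headers
                if (line.toLowerCase().startsWith("content-length:")) {
                    contentLength = Integer.parseInt(line.substring(15).trim());
                }
            }
            for (int i = 0; i < contentLength; i++) {
                in.read();                                 // skip the body (treated as ASCII, fine for a sketch)
            }
        }

        public static void main(String[] args) throws Exception {
            String host = "example.com";                   // placeholder server
            try (Socket socket = new Socket(host, 80)) {
                Writer out = new OutputStreamWriter(socket.getOutputStream(), "US-ASCII");
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream(), "US-ASCII"));
                request(out, in, host);   // first request
                request(out, in, host);   // second request reuses the same TCP connection
            }
        }
    }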

HTTP/1.1

Under HTTP 1.1, the official keep-alive mechanism is different. All connections are kept alive, unless stated otherwise with the following header:

Connection: close

Because of this, the Connection: keep-alive header no longer has any meaning.

Additionally, an optional Keep-Alive: header is described, but it is so underspecified as to be meaningless. Avoid it.

Not reliable

HTTP is a stateless protocol: every request is independent of every other. Keep-alive doesn't change that. Additionally, there is no guarantee that the client or the server will keep the connection open. Even in 1.1, all that is promised is that you will probably get notice that the connection is being closed. So keep-alive is not something you should write your application to rely on.
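
One practical consequence is that client code should treat a dropped connection as routine. A minimal defensive sketch, wrapping whatever performs a single request (the Callable here is just a stand-in):

    import java.io.IOException;
    import java.util.concurrent.Callable;

    public class RetryOnStaleConnection {
        // If a reused (kept-alive) connection turns out to be dead, the request
        // fails with an IOException; retrying once then runs on a fresh connection.
        static <T> T withRetry(Callable<T> request) throws Exception {
            try {
                return request.call();
            } catch (IOException staleConnection) {
                return request.call();
            }
        }
    }

Commons HttpClient ships a similar built-in mechanism (DefaultHttpMethodRetryHandler) for the same reason.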

Keep-alive and POST

The HTTP 1.1 spec states that following the body of a POST there are to be no additional characters. It also states that certain browsers may not follow this spec, putting a CRLF after the body of the POST. Mmm-hmm. As near as I can tell, most browsers follow a POSTed body with a CRLF. There are two ways of dealing with this: disallow keep-alive in the context of a POST request, or ignore the CRLF on a line by itself. Most servers deal with it the latter way, but there's no way to know how a server will handle it without testing.

Java Applications

The client can use Apache Commons HttpClient to execute the request.
Use method.setRequestHeader("Connection", "keep-alive") or method.setRequestHeader("Connection", "close") to control whether the connection is maintained.
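
A short sketch with the Commons HttpClient 3.x API, using http://example.com/ as a placeholder URL:

    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.methods.GetMethod;

    public class KeepAliveClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = new HttpClient();
            GetMethod method = new GetMethod("http://example.com/");

            // Ask the server to keep the connection open; use "close"
            // to force a short connection instead.
            method.setRequestHeader("Connection", "keep-alive");

            int status = client.executeMethod(method);
            System.out.println(status + ": " + method.getResponseBodyAsString());

            // Return the connection to the connection manager so it can be reused.
            method.releaseConnection();
        }
    }

With a MultiThreadedHttpConnectionManager the same pattern can reuse pooled connections across threads.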

Commonly used servers such as Apache, Resin, and Tomcat all have configuration options for whether keep-alive is supported.

In Tomcat, this can be set with maxKeepAliveRequests:

The maximum number of HTTP requests which can be pipelined until the connection is closed by the server. Setting this attribute to 1 will disable HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting this to -1 will allow an unlimited amount of pipelined or keep-alive HTTP requests. If not specified, this attribute is set to 100.
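
For example, on the HTTP connector in conf/server.xml (the other attribute values here are only illustrative):

    <!-- conf/server.xml: each kept-alive connection serves at most 100 requests -->
    <Connector port="8080" protocol="HTTP/1.1"
               maxKeepAliveRequests="100"
               keepAliveTimeout="15000"
               connectionTimeout="20000" />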

Explanation 1

A long connection means that once a socket connection is established, it is kept open whether or not it is currently in use; this is less secure.
A so-called short connection means the socket connection is closed immediately after the data has been sent and received; banks generally use short connections.

Explanation 2

A long connection is one that is kept open in TCP-based communication, regardless of whether data is currently being sent or received.
A short connection is established only when there is data to transmit; once the client-server communication has transferred the data, the connection is closed.

Explanation 3

The concepts of long connection and short connection seem to come up mainly in the China Mobile CMPP protocol, and rarely elsewhere.
Communication modes
There are two connection modes between network elements: long connections and short connections. A long connection is one on which multiple packets can be sent in succession over a single TCP connection; if no packets are sent while the TCP connection is held open, both sides must send detection (heartbeat) packets to keep the connection alive. A short connection means a TCP connection is established only when the two parties have data to exchange and is torn down once the data has been sent, so each TCP connection carries only a single pair of CMPP messages.
At present, long connections are required between ISMGs, and the long-connection mode is also recommended between an SP and an ISMG.

Explanation 4

Short connection: HTTP, for example: connect, request, close; the processing time is short, and if the server does not receive another request within a certain period, it closes the connection.
Long connection: some services need to stay connected to the server for a long time, such as CMPP; the application generally has to maintain the connection itself, as sketched below.
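
A minimal sketch of that kind of connection maintenance (the probe byte and the interval are illustrative, not part of any particular protocol):

    import java.io.OutputStream;
    import java.net.Socket;

    public class HeartbeatKeeper {
        // Periodically write a small probe so both sides know the long
        // connection is still alive; a write failure means it has dropped.
        static Thread startHeartbeat(Socket socket, long intervalMillis) {
            Thread t = new Thread(() -> {
                try {
                    OutputStream out = socket.getOutputStream();
                    while (!socket.isClosed()) {
                        out.write(0);            // illustrative probe byte
                        out.flush();
                        Thread.sleep(intervalMillis);
                    }
                } catch (Exception dropped) {
                    // Connection is gone; the application should reconnect here.
                }
            });
            t.setDaemon(true);
            t.start();
            return t;
        }
    }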

I've been looking at "server push technology" lately. In a B/S architecture, some kind of magic lets the client get the latest information from the server (such as stock prices) without polling, which can save a lot of bandwidth.

Traditional polling puts a lot of pressure on the server and wastes a great deal of bandwidth. Using Ajax polling instead reduces the bandwidth load (because the server no longer returns a full page), but the pressure on the server is not significantly reduced. Push technology can improve this situation. However, because of the characteristics of HTTP connections (they must be initiated by the client), push is difficult to implement, and the common practice is to push by extending the lifetime of the HTTP connection.

It is then natural to discuss how to extend the lifetime of the HTTP connection. The simplest approach is the endless-loop method (servlet code fragment):

    // Fragment of a servlet that never returns (inside a class extending HttpServlet)
    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        PrintWriter out = res.getWriter();
        // ... output the normal page ...
        out.flush();
        while (true) {
            out.print("output the updated content");   // push new data to the client
            out.flush();
            try {
                Thread.sleep(3000);                    // wait before the next push
            } catch (InterruptedException e) {
                break;
            }
        }
    }

You can further improve performance by using the Observer pattern.

The disadvantage of this approach is that after the client requests this servlet, the Web server dedicates a thread to executing the servlet's code, and since the servlet never finishes, the thread is never freed. One thread per client: when the number of clients grows, the server bears a heavy burden.

Fundamentally changing this is more complex. The current trend is to start inside the Web server, rewriting the request/response implementation with NIO (JDK 1.4) and then using a thread pool to improve the server's resource utilization. Servers that currently support this official java.nio technology include GlassFish and Jetty (the latter I have only heard of, never used). There are also frameworks/tools that can help you implement push, such as Pushlets.

I haven't studied these in depth yet. Over the next couple of days I'm going to look into GlassFish's support for Comet (Comet: the name someone gave to this server push technique), hehe.

