HTTP long connections (the biggest difference between HTTP/1.1 and HTTP/1.0 is the addition of persistent connection support, Connection: keep-alive)


HTTP long connections

HTTP is stateless
That is, for each HTTP operation the browser and the server establish a connection and then tear it down once the task is finished. If the client browser accesses an HTML page or another type of Web page that references other Web resources, such as JavaScript files, image files, and CSS files, the browser creates a separate HTTP session each time it encounters such a resource.

The biggest difference between HTTP/1.1 and HTTP/1.0 is that HTTP/1.1 adds support for persistent connections (later HTTP/1.0 implementations can also request keep-alive explicitly), but the protocol remains stateless, so the persistent connection cannot be relied upon.

If the browser or server adds this line to its headers:

Connection: keep-alive

then the TCP connection is kept open after the request and response are sent, and the browser can continue to send requests over the same connection. Keeping the connection open saves the time it would take to establish a new connection for each request, and it also saves bandwidth.

Long connections require support on both the client side and the server side.

If the Web server sees the value "keep-alive" here, or sees that the request uses HTTP/1.1 (where persistent connections are the default), it can take advantage of the persistent connection. When a page contains multiple elements (such as applets or images), this significantly reduces download time. To do this, the Web server needs to send a Content-Length header (the length of the message body) back to the client in the HTTP response; the simplest implementation is to write the content to a ByteArrayOutputStream first and calculate its size before it is actually written out.
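As a rough sketch of the buffering approach just described (this is not from the original article; the servlet class name and page content are illustrative), the handler builds the whole body in a ByteArrayOutputStream, so the exact Content-Length is known before anything is written to the connection:

    // Minimal sketch of "buffer first, then set Content-Length" in a standard servlet.
    // Error handling is omitted for brevity.
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class BufferedPageServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
            // Build the whole body in memory first.
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            Writer writer = new OutputStreamWriter(buffer, "UTF-8");
            writer.write("<html><body>Hello, keep-alive world</body></html>");
            writer.flush();

            byte[] body = buffer.toByteArray();

            // The exact length is now known, so the server can keep the connection open.
            res.setContentType("text/html; charset=UTF-8");
            res.setContentLength(body.length);
            res.getOutputStream().write(body);
        }
    }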

Whichever of the client browser (Internet Explorer, for example) or the Web server has the lower keep-alive timeout is the limiting factor. For example, if the client's timeout is two minutes and the Web server's timeout is one minute, the effective maximum timeout is one minute; either the client or the server can be the limiting factor.

Add Connection: keep-alive to the header.
Adding this to both the HTTP request and the HTTP response keeps the connection open as a long connection.
Re-encapsulating application messages in the HTTP message body then becomes very convenient for the application.

HTTP keep-alive seems to be massively misunderstood. Here's a short description of how it works, under both 1.0 and 1.1.

HTTP/1.0

Under HTTP 1.0, there is no official specification for how keep-alive operates. It is, in essence, tacked on to an existing protocol. If the browser supports keep-alive, it adds an additional header to the request:

Connection: keep-alive

Then if the server receives this request and generates a response, it also adds a header to the response:

Connection: keep-alive

Following this, the connection is not dropped and is instead kept open. When the client sends another request, it uses the same connection. This continues until either the client or the server decides that the conversation is over, and one of them drops the connection.
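To make the exchange concrete, here is a hypothetical client sketch (not part of the original text) that sends two HTTP/1.0 requests with Connection: keep-alive over a single socket. The host name is a placeholder, and a real server is free to close the connection after the first response anyway:

    // Sketch: reuse one TCP connection for two HTTP/1.0 requests via Connection: keep-alive.
    // "example.com" is a placeholder host.
    import java.io.DataInputStream;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class KeepAliveDemo {

        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("example.com", 80)) {
                OutputStream out = socket.getOutputStream();
                DataInputStream in = new DataInputStream(socket.getInputStream());

                sendRequest(out);      // first request
                readResponse(in);      // server answers and (hopefully) keeps the socket open

                sendRequest(out);      // second request on the very same connection
                readResponse(in);
            }
        }

        private static void sendRequest(OutputStream out) throws Exception {
            String request = "GET / HTTP/1.0\r\n"
                    + "Host: example.com\r\n"
                    + "Connection: keep-alive\r\n"
                    + "\r\n";
            out.write(request.getBytes(StandardCharsets.US_ASCII));
            out.flush();
        }

        private static void readResponse(DataInputStream in) throws Exception {
            int contentLength = 0;
            String line;
            // Read the status line and headers; keep-alive only works here because
            // Content-Length tells us where this response ends on the shared connection.
            while ((line = readLine(in)) != null && !line.isEmpty()) {
                System.out.println(line);
                if (line.toLowerCase().startsWith("content-length:")) {
                    contentLength = Integer.parseInt(line.substring(15).trim());
                }
            }
            byte[] body = new byte[contentLength];
            in.readFully(body);    // consume exactly the body, leaving the stream at the next response
            System.out.println(new String(body, StandardCharsets.UTF_8));
        }

        // Minimal CRLF line reader (DataInputStream.readLine() is deprecated).
        private static String readLine(DataInputStream in) throws Exception {
            StringBuilder sb = new StringBuilder();
            int c;
            while ((c = in.read()) != -1 && c != '\n') {
                if (c != '\r') {
                    sb.append((char) c);
                }
            }
            return sb.length() == 0 && c == -1 ? null : sb.toString();
        }
    }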

HTTP/1.1

Under HTTP 1.1, the official keep-alive method is different. All connections are kept alive unless stated otherwise with the following header:

Connection: close

The Connection: keep-alive header no longer has any meaning because of this.

Additionally, an optional Keep-Alive: header was described, but it was so underspecified as to be meaningless. Avoid it.

Not reliable

HTTP is a stateless protocol; this means that every request is independent of every other request, and keep-alive doesn't change that. Additionally, there is no guarantee that the client or the server will keep the connection open. Even in 1.1, all that is promised is that you will probably get a notice that the connection is being closed. So keep-alive is something you should not write your application to rely upon.

KeepAlive and POST

The HTTP 1.1 spec states that no additional characters are to follow the body of a POST. It also states that "certain" browsers may not follow the spec and may put a CRLF after the body of the POST. Mmm-hmm. As near as I can tell, most browsers follow a POSTed body with a CRLF. There are two ways of dealing with this: disallow keep-alive in the context of a POST request, or ignore a CRLF on a line by itself. Most servers deal with it the latter way, but there is no way to know how a given server handles it without testing.
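As one concrete way to implement the "ignore a CRLF on a line by itself" option, a hand-rolled server loop might call a small helper like this after consuming the POST body and before parsing the next request line (this helper is hypothetical, not something the article or any particular server provides):

    // Sketch: tolerate the stray CRLF some browsers send after a POST body, so that
    // the next keep-alive request on the same connection still parses cleanly.
    // Assumes "in" is positioned just past the request body (per Content-Length).
    import java.io.BufferedInputStream;
    import java.io.IOException;

    public final class StrayCrlfSkipper {
        static void skipStrayCrlf(BufferedInputStream in) throws IOException {
            in.mark(2);                    // remember where we are so we can back off
            int first = in.read();
            if (first == '\r') {
                if (in.read() != '\n') {
                    in.reset();            // lone CR: not a stray terminator, put the bytes back
                }
            } else if (first != '\n') {
                in.reset();                // ordinary data (or EOF): leave it for the request parser
            }
        }
    }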

Java applications

The client can use Apache's Commons HttpClient to execute a method.
Use method.setRequestHeader("Connection", "keep-alive") or method.setRequestHeader("Connection", "close") to control whether the connection is maintained.
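A minimal sketch using the Commons HttpClient 3.x calls mentioned above; the URL is a placeholder:

    // Sketch: controlling the Connection header with Apache Commons HttpClient 3.x.
    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.methods.GetMethod;

    public class KeepAliveClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = new HttpClient();
            GetMethod method = new GetMethod("http://example.com/");

            // Ask the server to keep the connection open ("close" would do the opposite).
            method.setRequestHeader("Connection", "keep-alive");

            try {
                int status = client.executeMethod(method);
                System.out.println("Status: " + status);
                System.out.println(method.getResponseBodyAsString());
            } finally {
                // Return the connection to HttpClient's connection manager for reuse.
                method.releaseConnection();
            }
        }
    }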

Common servers such as Apache, Resin, and Tomcat all have configuration options for whether keep-alive is supported.

In Tomcat this can be configured with maxKeepAliveRequests:

The maximum number of HTTP requests which can be pipelined until the connection is closed by the server. Setting this attribute to 1 will disable HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting this to -1 will allow an unlimited number of pipelined or keep-alive HTTP requests. If not specified, this attribute is set to 100.
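For example, a Tomcat HTTP connector in server.xml might be configured like this (the values shown are illustrative, not taken from the article):

    <!-- Illustrative server.xml snippet: limit each kept-alive connection to 100 requests
         and close idle keep-alive connections after 15 seconds. -->
    <Connector port="8080" protocol="HTTP/1.1"
               maxKeepAliveRequests="100"
               keepAliveTimeout="15000"
               connectionTimeout="20000" />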



Explanation 1

A so-called long connection means that a socket connection, once established, is kept open regardless of whether it is currently in use; this is somewhat less secure.
A so-called short connection means that the socket connection is closed as soon as the data has been sent and received; banks generally use short connections.

Explanation 2

A long connection is a TCP connection that is kept open throughout the communication, regardless of whether data is currently being sent or received.
A short connection is established only when there is data to transfer, and is closed as soon as the client-server exchange is complete.

Explanation 3

The concepts of long connections and short connections seem to come up mainly in China Mobile's CMPP protocol; I have not seen them mentioned elsewhere.
Communication mode
There are two types of connections between the various network elements: long connections and short connections. A long connection means that multiple packets can be sent in succession over a single TCP connection; while the connection is held open, if no packets are being sent, both sides need to send detection (heartbeat) packets to maintain it. A short connection means that a TCP connection is established when the two sides need to exchange data and is torn down once the data has been sent, so each TCP connection carries only a single pair of CMPP messages.
At present, long-connection communication is required between ISMGs, and long connections are recommended between an SP and an ISMG.
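As a generic illustration of the "detection packet" idea (this sketch is not CMPP-specific and not from the original text; the host, port, and heartbeat byte are placeholders), a long connection is typically kept alive by a small heartbeat loop:

    // Sketch: keeping a long-lived TCP connection alive with periodic heartbeat bytes.
    // Real protocols (such as CMPP) define their own "active test" message format.
    import java.io.OutputStream;
    import java.net.Socket;

    public class HeartbeatKeeper {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("example.com", 9000)) {
                OutputStream out = socket.getOutputStream();
                while (true) {
                    out.write(0x00);        // placeholder "detection packet"
                    out.flush();
                    Thread.sleep(30_000);   // send a heartbeat every 30 seconds when idle
                }
            }
        }
    }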

Explanation 4

Short connection: HTTP, for example, simply connects, sends the request, and closes; the whole process is short, and if the server does not receive another request within a certain period it closes the connection.
Long connection: Some services need to stay connected to the server for a long time, such as CMPP, which usually needs to remain online.



Recently I have been looking at "server push technology". In a B/S architecture, a few tricks let the client receive the latest information from the server (such as stock prices) without polling, which can save a lot of bandwidth. Traditional polling puts heavy pressure on the server and wastes an enormous amount of bandwidth. Using Ajax polling instead reduces the bandwidth load (because the server no longer returns a full page), but it does not noticeably reduce the pressure on the server. Push technology can improve this situation, but the nature of the HTTP connection (short-lived and always initiated by the client) makes push difficult to implement; the common practice is to implement push by prolonging the lifetime of the HTTP connection. The next question is how to prolong the life of the HTTP connection, and the simplest approach is naturally the infinite-loop method.

Servlet code snippet (the class name and the 1000 ms sleep interval are illustrative):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class PushServlet extends HttpServlet {
        public void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
            PrintWriter out = res.getWriter();
            // ... output the normal page ...
            out.flush();
            while (true) {
                out.print("output updated content");
                out.flush();
                try {
                    Thread.sleep(1000); // interval between pushes; 1000 ms assumed here
                } catch (InterruptedException e) {
                    break;
                }
            }
        }
    }

If you use the observer pattern, you can improve performance further.

However, the disadvantage of this approach is that after the client requests the servlet, the Web server opens a thread to execute the servlet's code, and because the servlet never ends, the thread is never freed. So each client costs one thread, and as the number of clients grows the server still carries a heavy burden.

Fundamentally changing this is more complex. The current trend is to start inside the Web server: rewrite the request/response implementation with NIO (the java.nio package introduced in JDK 1.4) and use thread pools to improve the server's resource utilization. Servers that currently support this technique, which is not part of the official Java standard, include GlassFish and Jetty (the latter I have only heard of, not used).

There are also frameworks/tools that can help you implement push functionality, such as Pushlets, but I have not studied them in depth.

Over the last couple of days I have been learning about Comet support (Comet is the name someone gave to server push technology) in GlassFish.
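The article predates it, but the thread-per-client problem described above is what the later Servlet 3.0 asynchronous API addresses in a standard way. The sketch below uses that API rather than the GlassFish- or Jetty-specific Comet interfaces the text refers to, so treat it as an illustration of the idea, not the article's method:

    // Sketch: detaching a push-style response from the container's request thread
    // using the Servlet 3.0 asynchronous API. Event delivery is simulated.
    import java.io.PrintWriter;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/push", asyncSupported = true)
    public class AsyncPushServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse res) {
            // The request thread returns to the container's pool immediately.
            AsyncContext ctx = req.startAsync();
            ctx.setTimeout(60_000);

            // In a real application this would be driven by an event source
            // (e.g. a stock-price update), not a background thread per request.
            new Thread(() -> {
                try {
                    PrintWriter out = ctx.getResponse().getWriter();
                    out.println("updated content");
                    out.flush();
                } catch (Exception ignored) {
                    // errors are swallowed in this sketch
                } finally {
                    ctx.complete();   // finish the response and release resources
                }
            }).start();
        }
    }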

Http://www.cnblogs.com/lidabo/p/4585900.html

