HTTP Protocol Development History

I. HTTP Version 0.9

Version 0.9 was released in 1991, with only a single command: GET.

For example: GET /index.html. The browser can only receive a string of HTML in return, and the server closes the TCP connection as soon as the response has been sent.
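
As an illustration only, here is a minimal sketch of such an exchange over a raw socket; the host name is a placeholder, and most modern servers no longer accept version-less (0.9-style) request lines, so this is best tried against a local test server.

    # Minimal sketch of an HTTP/0.9-style exchange (hypothetical host; many
    # modern servers reject request lines that carry no HTTP version token).
    import socket

    HOST = "example.test"   # placeholder host name
    PORT = 80

    with socket.create_connection((HOST, PORT)) as sock:
        # HTTP/0.9: a bare request line, no headers, no version.
        sock.sendall(b"GET /index.html\r\n")
        # The server replies with the raw HTML and then closes the connection.
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    print(b"".join(chunks).decode("utf-8", errors="replace"))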

Disadvantage: the data format is too limited, and each TCP connection can carry only one request. Because establishing a TCP connection requires a three-way handshake and closing it requires a four-way teardown, performance is poor.

II. HTTP Version 1.0

Version 1.0 was released in May 1996 and added the following:

1. Content in any format can be sent, such as video and audio files.

2. New POST and HEAD commands were added.

3. Every request and response must include header fields.

Disadvantage: each TCP connection can still carry only one request. Because establishing a TCP connection requires a three-way handshake and closing it requires a four-way teardown, performance is poor.

To work around this, many browsers added a non-standard header field, Connection, for example: Connection: keep-alive, asking the server to keep the TCP connection open for further requests.
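
Below is a sketch of what such a request looks like on the wire, again over a raw socket; the host name is a placeholder, and whether the connection actually stays open depends on the server honouring the non-standard field.

    # Sketch of an HTTP/1.0 request carrying header fields, including the
    # non-standard Connection: keep-alive (placeholder host; keeping the
    # connection open is up to the server).
    import socket

    request = (
        b"GET /index.html HTTP/1.0\r\n"
        b"User-Agent: demo-client/0.1\r\n"
        b"Accept: text/html\r\n"
        b"Connection: keep-alive\r\n"
        b"\r\n"                      # blank line ends the header block
    )

    with socket.create_connection(("example.test", 80)) as sock:
        sock.sendall(request)
        reply = sock.recv(4096)
    # The response also starts with a status line and headers, e.g.
    # b"HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n..."
    print(reply.decode("latin-1")[:200])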

III. HTTP Version 1.1

Version 1.1 was released in January 1997 and is still the most widely used version today. It added the following:

1. Persistent connections. The TCP connection is no longer closed by default and can be reused by multiple requests, without having to declare Connection: keep-alive. The canonical way to finish is for the client to send Connection: close on its last request, asking the server to close the connection afterwards.
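
For instance, with the standard library's http.client two requests can share one TCP connection, and the last one can ask the server to close it; the host name and paths below are placeholders.

    # Two requests over one persistent HTTP/1.1 connection using the standard
    # library (placeholder host). http.client speaks HTTP/1.1, so the
    # connection stays open by default with no explicit keep-alive header.
    import http.client

    conn = http.client.HTTPConnection("example.test", 80, timeout=10)

    conn.request("GET", "/index.html")
    first = conn.getresponse()
    first.read()                      # body must be consumed before reuse

    # Last request on this connection: ask the server to close it afterwards.
    conn.request("GET", "/style.css", headers={"Connection": "close"})
    second = conn.getresponse()
    second.read()

    conn.close()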

2. Pipelining. Within the same TCP connection, the client can send multiple requests without waiting for each response. This further improves the efficiency of the HTTP protocol.

For example, a client needs two resources, A and B. Previously, on the same TCP connection it had to send request A, wait for the server's response, and only then send request B. With pipelining, the browser can send requests A and B one right after the other, but the server still answers in order: it responds to A first and, only after finishing, responds to B.
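
A sketch of pipelining over a raw socket is shown below: both requests are written back-to-back before any response is read. This is illustrative only; the host is a placeholder, and many servers and essentially all modern browsers keep pipelining disabled.

    # Illustrative pipelining sketch (placeholder host): write request A and
    # request B back-to-back, then read the responses, which the server must
    # return in the same order.
    import socket

    request_a = b"GET /a.html HTTP/1.1\r\nHost: example.test\r\n\r\n"
    request_b = b"GET /b.html HTTP/1.1\r\nHost: example.test\r\nConnection: close\r\n\r\n"

    with socket.create_connection(("example.test", 80)) as sock:
        sock.sendall(request_a + request_b)   # both requests sent before any response
        raw = b""
        while True:
            data = sock.recv(4096)
            if not data:
                break
            raw += data
    # raw now holds the response to A followed by the response to B.
    print(raw[:200])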

3. The Content-Length field. In version 1.0 this field was not required.

Since one TCP connection can now carry multiple responses, there must be a way to tell where one response ends and the next begins. That is the job of the Content-Length field: it declares the length of the current response body.
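
A small sketch of how a client uses the field, assuming the header block has already been parsed into a dict:

    # Sketch: read exactly one response body from a binary stream, using
    # Content-Length to know where this response ends and the next begins.
    def read_body(stream, headers):
        """`stream` is a binary file-like object positioned just after the
        header block; `headers` is a dict of already-parsed header fields."""
        length = int(headers["Content-Length"])
        body = b""
        while len(body) < length:
            chunk = stream.read(length - len(body))
            if not chunk:
                raise ConnectionError("connection closed before full body arrived")
            body += chunk
        return body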

4. Chunked transfer encoding

Using the Content-Length field presupposes that the server knows the full length of the response before it starts sending it.

For time-consuming dynamic operations this would mean the server has to wait until everything is finished before it can send any data, which is clearly inefficient. A better approach is to send each piece of data as soon as it is produced, replacing "buffer mode" with "stream mode".

Therefore, version 1.1 allows the Content-Length field to be omitted in favour of "chunked transfer encoding". Whenever the header of a request or response contains a Transfer-Encoding field, its body consists of an undetermined number of data chunks.

For example, the server adds Transfer-Encoding: chunked to the response header.
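
The body of a chunked response is a series of chunks, each prefixed by its size in hexadecimal, ending with a zero-size chunk. Here is a minimal decoder sketch (it ignores trailer headers and chunk extensions):

    # Minimal chunked-body decoder sketch: each chunk is "<hex size>\r\n<data>\r\n",
    # and a chunk of size 0 marks the end (trailer headers are ignored here).
    def read_chunked_body(stream):
        """`stream` is a binary file-like object positioned just after the
        header block of a response with Transfer-Encoding: chunked."""
        body = b""
        while True:
            size_line = stream.readline().strip()          # e.g. b"1a3" or b"0"
            size = int(size_line.split(b";")[0], 16)       # drop any chunk extension
            if size == 0:
                break
            body += stream.read(size)
            stream.readline()                              # consume the trailing CRLF
        return body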

5. A Host field was added to the request header to specify the domain name of the server being addressed.
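
This is what makes name-based virtual hosting possible: several sites can share one IP address, and only the Host field tells the server which one is meant. A minimal illustration with hypothetical site names:

    # Two HTTP/1.1 requests to the same server IP; only the Host header
    # (hypothetical site names) tells the server which virtual host is meant.
    request_site_a = (
        b"GET /index.html HTTP/1.1\r\n"
        b"Host: www.site-a.test\r\n"
        b"\r\n"
    )
    request_site_b = (
        b"GET /index.html HTTP/1.1\r\n"
        b"Host: www.site-b.test\r\n"
        b"\r\n"
    )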

6. Several additional commands were added: PUT, PATCH, OPTIONS, and DELETE.
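
A sketch of these methods using the standard library's http.client, against a placeholder API host and hypothetical paths (the server must of course implement them):

    # Sketch of the additional HTTP/1.1 methods (placeholder host and paths).
    import http.client
    import json

    conn = http.client.HTTPConnection("api.example.test", 80)

    # PUT: replace the resource at /articles/1 with this representation.
    conn.request("PUT", "/articles/1",
                 body=json.dumps({"title": "HTTP history"}),
                 headers={"Content-Type": "application/json"})
    conn.getresponse().read()

    # PATCH: apply a partial modification to the resource.
    conn.request("PATCH", "/articles/1",
                 body=json.dumps({"title": "HTTP/1.1 history"}),
                 headers={"Content-Type": "application/json"})
    conn.getresponse().read()

    # DELETE: remove the resource.
    conn.request("DELETE", "/articles/1")
    conn.getresponse().read()

    # OPTIONS: ask which methods the resource supports (see the Allow header).
    conn.request("OPTIONS", "/articles/1")
    print(conn.getresponse().getheader("Allow"))

    conn.close()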

Disadvantage: although version 1.1 allows a TCP connection to be reused, all communication within that connection is still sequential. The server only moves on to the next response after it has finished the current one. If one response is particularly slow, many requests queue up behind it (head-of-line blocking).

IV. HTTP Version 2

HTTP/2 was released in 2015. It is not called HTTP/2.0 because the standards committee does not intend to publish minor versions any more; the next new version will be HTTP/3. HTTP/2 introduces the following features:

1. A fully binary protocol. In HTTP/1.1 the header section is always text (ASCII encoded), while the body may be text or binary. HTTP/2 is a completely binary protocol: both headers and bodies are binary, and both are carried in units collectively called "frames", i.e. header frames and data frames.
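
Every HTTP/2 frame begins with a fixed 9-byte header: a 24-bit payload length, an 8-bit frame type, an 8-bit flags byte, and a 31-bit stream identifier (the top bit is reserved). A small sketch of parsing that header with the standard library:

    # Parse the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1):
    # 24-bit length, 8-bit type, 8-bit flags, 1 reserved bit + 31-bit stream ID.
    import struct

    FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x3: "RST_STREAM",
                   0x4: "SETTINGS", 0x5: "PUSH_PROMISE", 0x7: "GOAWAY"}

    def parse_frame_header(header: bytes):
        length_hi, length_lo, frame_type, flags, stream_id = struct.unpack(
            "!BHBBI", header[:9])
        length = (length_hi << 16) | length_lo
        stream_id &= 0x7FFFFFFF                 # clear the reserved bit
        return length, FRAME_TYPES.get(frame_type, hex(frame_type)), flags, stream_id

    # Example: a HEADERS frame with a 13-byte payload on stream 1.
    example = bytes([0x00, 0x00, 0x0D, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01])
    print(parse_frame_header(example))   # (13, 'HEADERS', 4, 1)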

2. Multiplexing. HTTP/2 multiplexes the TCP connection: both the client and the server can send multiple requests or responses concurrently, and they do not have to be matched up strictly in order, which avoids head-of-line blocking.

For example, the server receives requests A and B on one TCP connection. It starts responding to A, finds that processing A is very time-consuming, sends the part of A that is already done, then responds to B in full, and finally sends the remainder of A. Communication is thus bidirectional and interleaved in real time.

3. Streams. Because HTTP/2 frames are not sent in strict sequence, consecutive frames within the same connection may belong to different responses. Each frame must therefore be marked to indicate which response it belongs to.

HTTP/2 calls all the frames of a single request or response a stream. Each stream has a unique number, and every frame carries the ID of the stream it belongs to. Streams initiated by the client always have odd IDs; streams initiated by the server have even IDs.

While a stream is still in flight, either the client or the server can send a signal (an RST_STREAM frame) to cancel it. In version 1.1 the only way to cancel a request was to close the TCP connection; HTTP/2 can cancel a single request while keeping the TCP connection open for other requests. The client can also assign priorities to streams: the higher the priority, the sooner the server responds.
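
As an illustration of how small such a cancellation is on the wire: an RST_STREAM frame is just the 9-byte frame header (type 0x3) plus a 4-byte error code, where CANCEL is 0x8. A sketch of building one:

    # Build a RST_STREAM frame (type 0x3) that cancels a given stream with the
    # CANCEL error code (0x8), per RFC 7540; the payload is one 32-bit error code.
    import struct

    def build_rst_stream(stream_id: int, error_code: int = 0x8) -> bytes:
        length = 4                                   # payload: 4-byte error code
        header = struct.pack("!BHBBI",
                             (length >> 16) & 0xFF,  # length, high 8 bits
                             length & 0xFFFF,        # length, low 16 bits
                             0x3,                    # frame type RST_STREAM
                             0x0,                    # no flags defined for this type
                             stream_id & 0x7FFFFFFF) # 31-bit stream identifier
        return header + struct.pack("!I", error_code)

    # Cancel client-initiated stream 5 (client streams have odd IDs).
    print(build_rst_stream(5).hex())   # -> 00000403000000000500000008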

4. Header compression. HTTP is stateless, so every request must carry all of its information. As a result, many header fields, such as Cookie and User-Agent, are repeated with exactly the same content on every request, which wastes a lot of bandwidth and slows things down.

HTTP/2 addresses this with a header compression mechanism (HPACK). On the one hand, header data is compressed before it is sent; on the other hand, the client and the server each maintain a header table in which previously sent fields are stored under an index number, so a repeated field can be transmitted as just its index instead of the full name and value, which improves speed.
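
The real mechanism, HPACK, also uses a predefined static table and Huffman-coded literals; the sketch below is a deliberately simplified toy that only illustrates the indexing idea and is not the actual HPACK format.

    # Toy illustration of the indexing idea behind header compression: both
    # endpoints keep a table of previously seen header fields, and a repeated
    # field is sent as a small index instead of the full name and value.
    # (Real HPACK keeps the decoder's table in sync and adds Huffman coding.)
    class ToyHeaderCodec:
        def __init__(self):
            self.table = []                       # index -> (name, value)

        def encode(self, headers):
            out = []
            for field in headers:
                if field in self.table:
                    out.append(("index", self.table.index(field)))
                else:
                    self.table.append(field)
                    out.append(("literal", field))
            return out

    codec = ToyHeaderCodec()
    first = codec.encode([("cookie", "session=abc"), ("user-agent", "demo/1.0")])
    second = codec.encode([("cookie", "session=abc"), ("user-agent", "demo/1.0")])
    print(first)    # full literals on the first request
    print(second)   # only small index references on the second request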

5. Server push. HTTP/2 allows the server to proactively send resources to the client before they are requested; this is called server push.

A common scenario is a client requesting a web page that references many static resources. Normally the client has to receive the page, parse the HTML source, discover the static resources, and then request them. The server, however, can anticipate that a client requesting the page will very likely request those static resources as well, and so it proactively sends them to the client along with the page.
