HTTP has gone through several versions, each with its own characteristics. This article gives an overview and summary of the main features of each version, which I hope will be helpful.
HTTP1.0
The early HTTP1.0 was a stateless, connectionless application-layer protocol.
In HTTP1.0, the browser and server keep only short-lived connections: every request requires the browser to establish a new TCP connection with the server, and the server closes the TCP connection as soon as it finishes processing (connectionless). The server does not track clients or remember past requests (stateless). The stateless nature can be worked around with cookie/session mechanisms for authentication and state tracking, but the following two problems are more troublesome.
First, the connectionless design means connections cannot be reused, which is the biggest performance flaw. Every request needs a new TCP connection, and TCP setup and teardown are relatively costly, so network utilization is very low.
Second is head-of-line blocking. In HTTP1.0, the next request cannot be sent until the response to the previous request has arrived; if that response is delayed, all subsequent requests are blocked behind it.
HTTP1.1 was introduced to solve these problems.
HTTP1.1
HTTP1.1 not only inherits the simplicity of HTTP1.0 but also overcomes many of its performance problems.
The first improvement is persistent connections. HTTP1.1 adds the Connection header field: setting Connection: keep-alive keeps the HTTP connection open, so the client and server no longer have to establish and release a TCP connection for every single request, which improves network utilization. If the client wants to close the HTTP connection, it can send Connection: close in the request header to tell the server.
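To make connection reuse concrete, here is a minimal sketch using only Python's standard library: it spins up a throwaway local server and issues two requests over the same persistent connection. The handler class and paths are invented for the demonstration.

```python
# Sketch: two requests over one persistent (keep-alive) connection,
# using Python's standard library against a throwaway local server.
import http.client
import http.server
import threading

class EchoHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 keeps the connection open by default
    def do_GET(self):
        body = self.path.encode()   # echo the request path back as the body
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/html")             # first request
first = conn.getresponse().read()
conn.request("GET", "/css")              # second request on the SAME TCP connection
second = conn.getresponse().read()
conn.close()
server.shutdown()
print(first, second)
```

Under HTTP1.0 semantics, the second `conn.request` would have needed a brand-new TCP handshake.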
Second, HTTP1.1 supports request pipelining. Pipelining is made possible by HTTP1.1's persistent connections: the client can send several requests back-to-back on one connection without waiting for each response. For example, if a response body is an html page containing many img tags, keep-alive plays a big role by letting the client issue all those requests in quick succession. (Browsers open connections per domain name: a desktop browser typically keeps 6~8 concurrent connections to a single domain, while mobile browsers are generally limited to 4~6. This is also why many large sites load static resources from several different CDN domain names.)
Note that the server must return responses in the same order in which the client's requests arrived, so that the client can match each response to its request. In other words, HTTP pipelining moves the FIFO queue from the client (request queue) to the server (response queue).
For example, if the client pipelines two requests, for html and then css, the server must send the html response before the css response even if the css resource is ready first.
Moreover, pipelining only allows the client to send one batch of requests to a server at a time; before initiating another batch to the same server, it must wait for the previous batch to complete.
Clearly, HTTP1.1 does not solve head-of-line blocking thoroughly. Pipelining itself also has a variety of problems, so many browsers either do not support it at all or disable it by default, enabling it only under very strict conditions.
In addition, HTTP1.1 adds cache handling (strong caching and negotiated caching), support for resumable transfers (range requests), and the Host header field (allowing a single server to host multiple websites).
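As a sketch of what the last two features look like on the wire, here is the request text only; the host name and file are placeholders:

```python
# Two HTTP/1.1 features on the wire: the Host header (one server,
# multiple websites) and the Range header (resumable transfer).
# Host name and path are placeholders for illustration.
request = (
    "GET /video.mp4 HTTP/1.1\r\n"
    "Host: static.example.com\r\n"   # which site on this server is meant
    "Range: bytes=1024-\r\n"         # resume the download from byte 1024
    "Connection: keep-alive\r\n"
    "\r\n"
)
print(request)
```

A server that honors the Range header replies with status 206 (Partial Content) rather than 200.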
HTTP2.0
The main new features of HTTP2.0 are broadly as follows.
Binary framing
HTTP2.0 adds a binary framing layer between the application layer and the transport layer, breaking through the performance limits of HTTP1.1 and improving transmission performance.
Note that although the HTTP2.0 framing format is completely different from the HTTP1.x protocol, HTTP2.0 does not actually change the semantics of HTTP1.x. Simply put, HTTP2.0 merely re-encapsulates the original HTTP1.x header and body in a layer of frames.
Multiplexing (Connection Sharing)
A few concepts first:
Stream: a bidirectional byte stream over an established connection.
Message: the complete series of frames that make up a logical message, such as a request or a response.
Frame: the smallest unit of HTTP2.0 communication; each frame carries a frame header that at least identifies the stream it belongs to (stream id).
All HTTP2.0 communication therefore happens on a single connection that can carry any number of bidirectional streams. Each stream carries a message, and a message consists of one or more frames. Frames can be sent out of order and then reassembled using the stream id in each frame header.
For example, each request is a stream; the stream carries a message, the message is split into multiple frames, and each frame header records the stream id identifying the stream it belongs to. Frames belonging to different streams can be freely interleaved on the connection, and the receiver uses the stream id to reassign frames to their respective requests.
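The interleave-and-reassemble idea can be sketched with a toy model (this is not the real HTTP/2 wire format; the frame contents and stream ids are invented):

```python
# Toy model of HTTP2.0 multiplexing: frames from different streams
# arrive interleaved on one connection and are reassembled by the
# stream id carried in each frame header.
from collections import defaultdict

# (stream_id, payload chunk) in arrival order -- streams 1 and 3 interleaved
frames = [
    (1, b"<htm"), (3, b"body{"), (1, b"l>...</html>"), (3, b"}"),
]

streams = defaultdict(bytes)
for stream_id, chunk in frames:
    streams[stream_id] += chunk   # append each frame to its own stream

print(streams[1])  # b'<html>...</html>'
print(streams[3])  # b'body{}'
```

Because reassembly keys on the stream id, a slow stream no longer blocks frames of other streams, which is how HTTP2.0 removes HTTP1.1's head-of-line blocking at the application layer.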
One risk of multiplexing (connection sharing) is that critical requests could be blocked behind less important ones. HTTP2.0 therefore allows each stream to carry a priority and dependencies: high-priority streams are processed and returned to the client by the server first, and a stream can also depend on other streams.
Header compression
In HTTP1.x, header metadata is sent in plain text, typically adding 500~800 bytes of overhead per request. Cookies are a good example: by default, the browser attaches the cookie header to every request it sends to a server. (Because the cookie is large and repeated with every request, it is generally used only for state tracking and identity authentication rather than for storing data.)
HTTP2.0 uses an encoder to reduce the size of the headers that need to be transferred, and both sides of the connection cache a header fields table. This both avoids retransmitting duplicate headers and shrinks what must be sent. An efficient compression algorithm (HPACK) can compress headers substantially, reducing the number of packets sent and thereby reducing latency.
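The cached-table idea can be illustrated with a toy encoder (this is only a sketch of the indexing concept; real HPACK also has a static table and Huffman-codes literals, and the header values here are invented):

```python
# Toy model of HTTP2.0 header compression: both sides build the same
# table of header fields already seen, so a repeated header is sent
# as a small integer index instead of the full text.
table = []  # dynamic table, built identically by encoder and decoder

def encode(headers):
    out = []
    for field in headers:
        if field in table:
            out.append(table.index(field))  # repeat: send just an index
        else:
            table.append(field)
            out.append(field)               # first time: send the literal
    return out

req1 = encode([("cookie", "id=42"), (":method", "GET")])
req2 = encode([("cookie", "id=42"), (":method", "GET")])
print(req1)  # full literals on the first request
print(req2)  # [0, 1] -- the repeat request costs two small indexes
```

This is exactly why the repeated cookie header, so costly under HTTP1.x, becomes cheap under HTTP2.0.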
Server push
In addition to responding to the initial request, the server can proactively push extra resources to the client, without the client explicitly requesting them.
Summary
HTTP1.0: short-lived connections, connectionless and stateless; no connection reuse, head-of-line blocking.
HTTP1.1: persistent connections (keep-alive), request pipelining, caching, resumable transfers, the Host field.
HTTP2.0: binary framing, multiplexing over a single connection, header compression, server push.