HTTP 2.0 Principles: A Detailed Analysis

Source: Internet
Author: User
Tags: http2

HTTP 2.0 is the next-generation Internet communication protocol, based on SPDY ("an experimental protocol for a faster web", from the Chromium Projects). HTTP/2 aims to reduce protocol overhead by compressing HTTP header fields, to add request prioritization and server-side push, and to cut latency by multiplexing requests and responses.
This article walks through the principles of HTTP 2.0 and studies the details of its communication. Most of the material is drawn from the Web Performance Authority Guide.

1. Binary framing layer
   1.1 Frame
   1.2 Message
   1.3 Stream
2. Multiplexing over a shared connection
3. Request priority
4. Server-side push
5. Header compression
6. A complete HTTP 2.0 communication process
   6.1 ALPN-based negotiation
   6.2 HTTP-based negotiation
   6.3 Full communication process
7. HTTP 2.0 performance bottlenecks
References

1. Binary framing layer

The binary framing layer is at the core of HTTP 2.0's performance gains.
HTTP 1.x communicates in plain text at the application layer, while HTTP 2.0 splits everything it transmits into smaller messages and frames and encodes them in binary. Both the client and the server therefore need a new binary encoding and decoding mechanism.
As the following illustration shows, HTTP 2.0 does not change the semantics of HTTP 1.x; it only changes how they are transported, using binary frames at the application layer.

HTTP 2.0 therefore introduces new communication units:

1.1 Frame

The frame is the smallest unit of HTTP 2.0 communication; it includes the frame header, stream identifier, priority value, and frame payload.

Frame types include:
- DATA: carries the HTTP message body;
- HEADERS: carries header fields;
- SETTINGS: negotiates configuration between the client and server ends, e.g. the initial bidirectional flow-control window size;
- WINDOW_UPDATE: adjusts flow control for an individual stream or for the whole connection;
- PRIORITY: specifies or re-assigns the priority of a referenced resource;
- RST_STREAM: notifies the peer of a stream's abnormal termination;
- PUSH_PROMISE: announces a server-side push;
- PING: measures round-trip time and acts as a liveness check;
- GOAWAY: tells the peer to stop creating streams on the current connection.

The flags field defines type-specific message flags; for example, a DATA frame can set END_STREAM to indicate that the message is complete. The stream identifier marks which stream the frame belongs to. The priority value is used in HEADERS frames to represent request priority. R is a reserved bit.
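The fixed frame-header layout described above (a 24-bit length, 8-bit type, 8-bit flags, and a reserved bit plus 31-bit stream identifier) can be sketched in a few lines of Python. This is a minimal illustration, not a full frame codec:

```python
# Minimal sketch of decoding the 9-byte HTTP/2 frame header (RFC 7540, section 4.1).
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x2: "PRIORITY", 0x3: "RST_STREAM",
               0x4: "SETTINGS", 0x5: "PUSH_PROMISE", 0x6: "PING", 0x7: "GOAWAY",
               0x8: "WINDOW_UPDATE", 0x9: "CONTINUATION"}

def parse_frame_header(data: bytes):
    """Decode the fixed 9-byte header that precedes every frame payload."""
    length = int.from_bytes(data[0:3], "big")                  # 24-bit payload length
    frame_type, flags = data[3], data[4]                       # 8-bit type, 8-bit flags
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF  # mask off the R bit
    return length, FRAME_TYPES.get(frame_type, "UNKNOWN"), flags, stream_id

# A DATA frame header: 4-byte payload, END_STREAM flag (0x1), stream 1
header = b"\x00\x00\x04" + b"\x00" + b"\x01" + b"\x00\x00\x00\x01"
print(parse_frame_header(header))  # (4, 'DATA', 1, 1)
```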
The following is a DATA frame captured with Wireshark:
1.2 Message

A message is a logical HTTP message (a request or a response). A series of frames composes a complete message; for example, a HEADERS frame plus a series of DATA frames composes a request message.

1.3 Stream

A stream is a virtual channel within a connection that can carry bidirectional message transmissions. Each stream has a unique integer identifier. To prevent conflicting stream IDs, client-initiated streams have odd IDs and server-initiated streams have even IDs.
All HTTP 2.0 communication is done over a single TCP connection that can carry any number of bidirectional streams. Each stream carries messages, each message consists of one or more frames, and frames can be sent out of order and then reassembled at the other end according to the stream identifier in each frame header.
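The reassembly idea above can be shown with a toy simulation: frames from different streams arrive interleaved on one connection, and the receiver regroups them by stream ID. The payloads here are made up for illustration:

```python
from collections import defaultdict

# (stream_id, payload) tuples simulating an interleaved arrival order on
# one connection; odd IDs because these streams are client-initiated.
arrived = [(1, b"<htm"), (3, b"func"), (1, b"l>"),
           (5, b"body{"), (3, b"tion(){}"), (5, b"}")]

streams = defaultdict(bytes)
for stream_id, chunk in arrived:
    streams[stream_id] += chunk   # reassemble per stream, in arrival order

print(streams[1])  # b'<html>'
print(streams[3])  # b'function(){}'
```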

The binary framing layer leaves HTTP semantics (headers, methods, and so on) untouched, so at the application layer it looks no different from HTTP 1.x. At the same time, all communication with a host can be done over a single TCP connection.

2. Multiplexing over a shared connection

On top of the binary framing layer, HTTP 2.0 can send requests and responses concurrently over a shared TCP connection. HTTP messages are decomposed into independent frames without breaking the semantics of the message itself, interleaved on the wire, and finally reassembled at the other end using the stream ID and frame header.
Let's compare HTTP 1.x and HTTP 2.0 over a single TCP connection, ignoring the HTTP 1.x pipelining mechanism. The client sends three image requests to the server: /image1.jpg, /image2.jpg, /image3.jpg.
With HTTP 1.x the requests are serial: the request for image2 cannot be sent until image1 has returned, and image3 must again wait for image2.

HTTP 2.0 establishes one TCP connection and transmits the three data streams in parallel: the client sends the server a series of frames on streams 1 through 3, while the server is already returning frames for stream 1.

The performance contrast is plain to see. HTTP 2.0 resolves HTTP 1.x's head-of-line blocking problem at the HTTP layer (TCP-layer blocking remains unresolved), and there is no need for pipelining or multiple TCP connections to achieve parallel requests and responses. Reducing the number of TCP connections also greatly improves server performance.

3. Request priority

A stream can carry a 31-bit priority value: 0 represents the highest priority, and 2^31 - 1 the lowest.

The client assigns priorities explicitly, and the server can order its responses accordingly, for example giving the client's resources the priority .css > .js > .jpg (see High Performance Web Sites). Returning responses by priority makes more efficient use of the underlying connection and improves the user experience.
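As a side note, the finalized RFC 7540 expresses priority as a stream dependency plus a weight rather than the single 31-bit value of earlier drafts. A minimal sketch of building the 5-byte PRIORITY frame payload (the helper name is my own):

```python
def priority_payload(dep_stream_id: int, weight: int, exclusive: bool = False) -> bytes:
    """Build the 5-byte PRIORITY frame payload (RFC 7540, section 6.3).
    `weight` is the effective weight 1..256; the wire encodes weight - 1."""
    dep = dep_stream_id & 0x7FFFFFFF
    if exclusive:
        dep |= 0x80000000          # the E bit occupies the top (reserved) bit
    return dep.to_bytes(4, "big") + bytes([weight - 1])

# Depend on stream 0 (the root) with weight 256, the largest share
print(priority_payload(0, 256).hex())  # '00000000ff'
```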
However, do not rely too heavily on request priority; note the following issues: the server may not support request priority at all, and priority can itself cause head-of-line blocking, for example when a slow, high-priority response blocks other resource interactions.

4. Server-side push

HTTP 2.0 adds server-side push: the server can return multiple responses for a single client request, pushing additional resources to the client in advance. As shown in the following illustration, the client requests /page.html on stream 1; while returning the stream 1 response, the server pushes stream 2 (/script.js) and stream 4 (/style.css).

A PUSH_PROMISE frame is the server's signal that it intends to push a resource to the client. If the client does not want server push, it can set the push setting to 0 in a SETTINGS frame to disable the feature. A PUSH_PROMISE frame contains only the header block of the resource to be pushed. If the client has no objection to the PUSH_PROMISE frame, the server follows it with DATA frames to start delivering the resource. If the client already has the resource cached and does not need it pushed again, it can reject the PUSH_PROMISE frame. PUSH_PROMISE must follow the request-response principle: resources can only be pushed in response to a request.
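Disabling push boils down to one SETTINGS entry on the wire; each entry is a 16-bit identifier and a 32-bit value. A sketch of building a complete SETTINGS frame that turns push off (the helper is illustrative, not a real library API):

```python
import struct

SETTINGS_ENABLE_PUSH = 0x2  # RFC 7540, section 6.5.2

def settings_frame(settings: dict) -> bytes:
    """Build a SETTINGS frame: 9-byte header plus 6 bytes per entry."""
    payload = b"".join(struct.pack(">HI", ident, value)
                       for ident, value in settings.items())
    # header: 24-bit length, type 0x4 (SETTINGS), flags 0, stream 0
    header = len(payload).to_bytes(3, "big") + bytes([0x4, 0x0]) + (0).to_bytes(4, "big")
    return header + payload

# A client that never wants PUSH_PROMISE frames sends ENABLE_PUSH = 0
print(settings_frame({SETTINGS_ENABLE_PUSH: 0}).hex())
# '000006040000000000000200000000'
```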
Currently, Apache's mod_http2 can enable server push with `H2Push on`, while Nginx's ngx_http_v2_module does not yet support server push.

Apache mod_headers example:

<Location /index.html>
    Header add Link "</css/site.css>;rel=preload"
    Header add Link "</images/logo.jpg>;rel=preload"
</Location>
5. Header compression

In HTTP 1.x, every communication (request/response) carries header information describing the resource's properties. HTTP 2.0 instead uses a "header table" on both the client and the server to track and store previously sent key-value pairs. The header table persists for the lifetime of the connection, and new key-value pairs are appended to it as they appear, so identical headers need not be carried on every communication.

In addition, HTTP 2.0 applies header compression using the HPACK algorithm, making headers more compact and faster to transmit, which is especially beneficial in mobile network environments.
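The core idea of the header table can be shown with a toy model: once both ends have seen a header field, a repeat can be sent as a small index instead of the full name and value. This is only a sketch of the indexing idea; real HPACK adds a static table, size limits, and Huffman coding:

```python
# Toy model of HPACK-style header indexing (not the real wire format).
class ToyHeaderTable:
    def __init__(self):
        self.table = []           # (name, value) pairs in insertion order

    def encode(self, name, value):
        if (name, value) in self.table:
            # already in the shared table: send just an index
            return ("index", self.table.index((name, value)))
        # first occurrence: send the literal and remember it
        self.table.append((name, value))
        return ("literal", name, value)

enc = ToyHeaderTable()
print(enc.encode(":authority", "example.com"))  # ('literal', ':authority', 'example.com')
print(enc.encode(":authority", "example.com"))  # ('index', 0)
```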
It is important to note that HTTP 2.0 compresses headers, while the gzip we commonly use compresses the message body. The two do not conflict, and together they achieve a better overall compression effect.

6. A complete HTTP 2.0 communication process

Consider a question: how does the client know whether the server supports HTTP 2.0, i.e. whether it can encode and decode the binary framing layer? There must be a protocol negotiation step before the two ends can communicate over HTTP 2.0.

6.1 ALPN-based negotiation

Browsers that support HTTP 2.0 can negotiate the protocol at the TLS session level, determining whether to use HTTP 2.0 before any application data flows. The mechanism relies on the extension fields introduced in TLS 1.2, among which the ALPN extension (Application-Layer Protocol Negotiation, formerly NPN) carries the client-server protocol negotiation.
A server using ALPN listens on port 443, offers HTTP 1.1 by default, and allows negotiation of other protocols such as SPDY and HTTP 2.0.
For example, the client indicates that it supports HTTP 2.0 in the ClientHello phase of the TLS handshake:

When the server receives it, it responds in its ServerHello that it, too, supports HTTP 2.0, and both sides begin HTTP 2.0 communication.
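Python's standard ssl module exposes exactly this handshake step, so you can probe which protocol a server negotiates. A sketch (the host is a placeholder; any HTTPS endpoint can be substituted):

```python
import socket
import ssl

# Advertise our supported protocols in preference order; the server picks one
# during the TLS handshake via the ALPN extension.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

def negotiated_protocol(host: str, port: int = 443):
    """Return the ALPN protocol the server selected: 'h2', 'http/1.1', or None."""
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()

# Network-dependent, so left commented out:
# print(negotiated_protocol("example.org"))
```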
6.2 HTTP-based negotiation process

Is HTTP 2.0, then, the exclusive privilege of HTTPS (TLS 1.2)?
Of course not: the client can also open HTTP 2.0 communication over plain HTTP. But because HTTP 1.x and HTTP 2.0 share the same port (80) and the client has no other information about whether the server supports HTTP 2.0, it can only use the HTTP Upgrade mechanism (implemented by OkHttp, nghttp2, and others, or coded by hand) to negotiate the appropriate protocol:

HTTP Upgrade request:

GET / HTTP/1.1
Host: nghttp2.org
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c                        /* request an upgrade to HTTP 2.0 */
HTTP2-Settings: AAMAAABkAAQAAP__    /* client SETTINGS payload, base64url */
User-Agent: nghttp2/1.9.0-dev

HTTP Upgrade response:

HTTP/1.1 101 Switching Protocols    /* server agrees to upgrade */
Connection: Upgrade
Upgrade: h2c

The upgrade succeeds, negotiation is complete, and both ends switch to HTTP 2.0 frames.
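The HTTP2-Settings value in the upgrade request above is the client's SETTINGS payload, base64url-encoded without padding. A sketch of producing such a value (the settings here, MAX_CONCURRENT_STREAMS = 100 and INITIAL_WINDOW_SIZE = 65535, are chosen for illustration):

```python
import base64
import struct

def http2_settings_header(settings: dict) -> str:
    """Encode a SETTINGS payload for the HTTP2-Settings header
    (base64url without padding, per RFC 7540, section 3.2.1)."""
    payload = b"".join(struct.pack(">HI", ident, value)   # 16-bit id, 32-bit value
                       for ident, value in settings.items())
    return base64.urlsafe_b64encode(payload).rstrip(b"=").decode()

# MAX_CONCURRENT_STREAMS (0x3) = 100, INITIAL_WINDOW_SIZE (0x4) = 65535
print(http2_settings_header({0x3: 100, 0x4: 65535}))  # AAMAAABkAAQAAP__
```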
6.3 Full Communication process

TCP connections are established:

TLS handshake and HTTP 2.0 communication process:

In addition, chrome://net-internals/#http2 can capture HTTP 2.0 communication processes in Chrome:

42072: HTTP2_SESSION textlink.simba.taobao.com:443 (PROXY 10.19.110.55:8080)
Start time: 2017-04-05 11:39:11.459

t=370225 [st=0] +HTTP2_SESSION  [dt=32475+]
                  --> host = "textlink.simba.taobao.com:443"
                  --> proxy = "PROXY 10.19.110.55:8080"
t=370225 [st=0] HTTP2_SESSION_INITIALIZED
                  --> protocol = "h2"
                  --> source_dependency = 42027 (PROXY_CLIENT_SOCKET_WRAPPER)
t=370225 [st=0] HTTP2_SESSION_SEND_SETTINGS
                  --> settings = ["[id:3 flags:0 value:1000]",
                                  "[id:4 flags:0 value:6291456]",
                                  "[id:1 flags:0 value:65536]"]
t=370225 [st=0] HTTP2_STREAM_UPDATE_RECV_WINDOW
                  --> delta = 15663105  --> window_size = 15728640
t=370225 [st=0] HTTP2_SESSION_SENT_WINDOW_UPDATE_FRAME
                  --> delta = 15663105  --> stream_id = 0
t=370225 [st=0] HTTP2_SESSION_SEND_HEADERS
                  --> exclusive = true  --> fin = true  --> has_priority = true
                  --> :method: GET  :authority: textlink.simba.taobao.com  :scheme: https
                  --> :path: /?name=tbhs&cna=IAJ9EOY3FNGCAxbq5kj9yush&nn=&count=13&pid=430266_1006&_ksts=1491363551394_94&callback=jsonp95
                  --> user-agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
                  --> referer: https://www.taobao.com/  accept-encoding: gzip, deflate, sdch, br
                  --> accept-language: zh-CN,zh;q=0.8  cookie: [382 bytes were stripped]
                  --> parent_stream_id = 0  --> stream_id = 1  --> weight = 147
t=370256 [st=] HTTP2_SESSION_RECV_SETTINGS
                  --> host = "textlink.simba.taobao.com:443"
t=370256 [st=] HTTP2_SESSION_RECV_SETTING --> flags = 0  --> id = 3  --> value = 128
t=370256 [st=] HTTP2_SESSION_UPDATE_STREAMS_SEND_WINDOW_SIZE
                  --> delta_window_size = 2147418112
t=370256 [st=] HTTP2_SESSION_RECV_SETTING --> flags = 0  --> id = 4  --> value = 2147483647
t=370256 [st=] HTTP2_SESSION_RECV_SETTING --> flags = 0  --> id = 5  --> value = 16777215
t=370256 [st=] HTTP2_SESSION_RECEIVED_WINDOW_UPDATE_FRAME
                  --> delta = 2147418112  --> stream_id = 0
t=370256 [st=] HTTP2_SESSION_UPDATE_SEND_WINDOW
                  --> delta = 2147418112  --> window_size = 2147483647
t=370261 [st=] HTTP2_SESSION_RECV_HEADERS
                  --> fin = false
                  --> :status: 200  date: Wed, Apr 2017 03:39:11 GMT
                  --> content-type: text/html; charset=iso-8859-1
                  --> vary: accept-encoding  server: Tengine
                  --> expires: Wed, Apr 2017 03:39:11 GMT  cache-control: max-age=0
                  --> strict-transport-security: max-age=0  timing-allow-origin: *
                  --> content-encoding: gzip
                  --> stream_id = 1
t=370261 [st=] HTTP2_SESSION_RECV_DATA
                  --> fin = false  --> size =  --> stream_id = 1
t=370261 [st=] HTTP2_SESSION_UPDATE_RECV_WINDOW
                  --> delta = -58  --> window_size = 15728582
t=370261 [st=] HTTP2_SESSION_RECV_DATA
                  --> fin = true  --> size = 0  --> stream_id = 1
t=370295 [st=] HTTP2_STREAM_UPDATE_RECV_WINDOW
                  --> delta =  --> window_size = 15728640
t=402700 [st=32475]
7. HTTP 2.0 performance bottlenecks

Does enabling HTTP 2.0 inevitably improve performance? Nothing is absolute, although overall performance will certainly improve.
I think HTTP 2.0 brings a new performance bottleneck of its own. Because all the pressure now concentrates on a single underlying TCP connection, TCP may well become the next bottleneck: TCP has its own head-of-line blocking problem, where a single lost TCP packet blocks the entire connection and every message on it is affected, and there is no escaping it. Under HTTP 2.0, server-side TCP tuning therefore becomes critical; we may have the opportunity to follow up on it later.

References

- "Web Performance Authority Guide"
- "Use nghttp2 to Debug HTTP/2 Flows": https://imququ.com/post/intro-to-nghttp2.html
