Exploring HTTP/2, Part 1: Concepts


Copyright notice: This is an original article by Zhang Haojan; please credit the source when reprinting.
Original article link: https://www.qcloud.com/community/article/87

Source: Tengyun, https://www.qcloud.com/community

I. The Current Situation

What is the bottleneck of network optimization today? You might say bandwidth. Perhaps before 2014 the key to performance was bandwidth, but today and in the future the bottleneck is no longer bandwidth but latency;

As you can see, as bandwidth grows from 1 Mbps to 3 Mbps, page load time (PLT) improves greatly, but further increases in bandwidth bring only small, non-linear gains; reducing latency (here, the sum of multiple RTTs), by contrast, improves page load time linearly;

1. HTTP/1.1

Establishing a TCP connection requires a three-way handshake, and opening many TCP connections also consumes server resources. In HTTP/1.1 (without keep-alive), each request/response pair occupies its own TCP connection, and when several resources are transferred over one connection there is head-of-line blocking, so the network cannot be used effectively;

2. Security

For most people, whether on a computer or a phone, this situation is all too common: ill-intentioned carriers or Wi-Fi providers hijack our traffic and modify the page content, which has caused us a great deal of trouble;

II. HTTP/2.0

Now HTTP/2.0 has arrived. In fact, HTTP/2.0 defines both a cleartext version (h2c) and an over-TLS version (h2); because the browsers that currently support HTTP/2.0 implement only the over-TLS version, the HTTP/2.0 discussed in this article is the HTTPS flavor;

1. Cleartext version (h2c):
    • The client sends a request to the server (the scheme is http at this point) carrying the following headers:
      Upgrade: h2c
      HTTP2-Settings

    • The server replies with either:
      101 Switching Protocols, i.e. it agrees to switch protocols, together with
      Connection: Upgrade
      Upgrade: h2c
      or it ignores the upgrade and simply answers over HTTP/1.1 (e.g. 200/404); a sample exchange is sketched below.
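
      For reference, an upgrade exchange of this kind looks roughly like the following (the host, path and SETTINGS payload are placeholders, following RFC 7540 section 3.2):

        GET / HTTP/1.1
        Host: example.com
        Connection: Upgrade, HTTP2-Settings
        Upgrade: h2c
        HTTP2-Settings: <base64url-encoded SETTINGS payload>

        HTTP/1.1 101 Switching Protocols
        Connection: Upgrade
        Upgrade: h2c

        (the server then starts speaking HTTP/2 on the same connection)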

2. HTTP/2.0 over TLS (h2):
    • The client connects to the server with:
      TLS + ALPN (Application-Layer Protocol Negotiation) / NPN

    • The server completes the TLS handshake and returns the HTTP protocol it supports:
      A. TLS handshake (see the detailed handshake process diagram)

      B. ALPN negotiation process
      Referring to the TLS handshake diagram, the additional steps for ALPN negotiation are:
      The client adds a ProtocolNameList field, listing the HTTP protocols it supports, to the ClientHello message;
      The server inspects the ProtocolNameList and returns a ProtocolName field in the ServerHello message indicating the protocol it has selected;
      Once ALPN is implemented, there is no longer any need to send a separate request carrying Upgrade: h2c; a minimal client-side sketch follows.
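
      A minimal sketch of the client side, using Python's standard ssl module (the host name here is just an example):

        import socket, ssl

        ctx = ssl.create_default_context()
        ctx.set_alpn_protocols(["h2", "http/1.1"])   # protocols offered in the ClientHello

        with socket.create_connection(("www.qq.com", 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname="www.qq.com") as tls:
                # The server announces its choice in the ServerHello;
                # "h2" means HTTP/2 over TLS was negotiated.
                print(tls.selected_alpn_protocol())
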
      C. False Start
      Typically, combining ALPN with TLS False Start lets the client send encrypted application data before the TLS handshake fully completes, reducing the handshake from two RTTs to one; it requires support for ALPN (NPN is rarely used now) as well as forward secrecy;
      D. HSTS
      HTTP Strict Transport Security (HSTS) is a security feature that tells the browser to access the current resource only over HTTPS and forbids plain HTTP.
      If a user types the domain name www.qq.com, the browser first requests http://www.qq.com. That request travels as unencrypted plaintext and is easy prey for a man-in-the-middle attack, letting a malicious intermediary on the network read the user's information directly. With HSTS, after the first request the server tells the client to go straight to https:// next time, instead of asking the server to redirect it to HTTPS; an example response header is shown below.
      In addition, when HSTS is enabled, if certificate validation fails (for example, under a man-in-the-middle attack), the browser refuses to open the site at all.
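
      For illustration, a server enables HSTS by sending a response header such as the one below (the max-age value and the includeSubDomains directive are example choices):

        Strict-Transport-Security: max-age=31536000; includeSubDomains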

3. Terminology

Stream: a bidirectional channel within a connection; it carries one or more messages and has an identifier and a priority;
Message: a complete request or response, made up of one or more frames;
Frame: the smallest unit of communication; frames of different types can be interleaved and are reassembled into messages via the stream ID carried in each frame;

4. Key concepts

A. Binary framing

An HTTP/2 binary frame header is 9 bytes long:
Length: 24 bits, so a frame could in theory carry up to 2^24 - 1 bytes of payload; in practice payloads larger than 2^14 (16384) bytes cannot be sent unless the SETTINGS_MAX_FRAME_SIZE setting is raised;
Type: 8 bits, determines the type of the frame:

    • DATA: data frame
    • HEADERS: header frame
    • PRIORITY: sets the priority of a stream
    • RST_STREAM: terminates a stream
    • SETTINGS: sets connection parameters
    • PUSH_PROMISE: the frame with which the server pushes a simulated request
    • PING: used to measure round-trip time and to check whether the peer is still alive
    • GOAWAY: tells the peer to stop creating streams on the current connection
    • WINDOW_UPDATE: flow control

Flags: 8 bits, flags specific to the frame type;
Reserved: 1 bit, normally 0;
Stream ID: 31 bits, the stream identifier, which allows roughly 2^31 (2147483648) identifiers per connection; what happens when they run out?
If the client can no longer create a new stream ID, it can simply open a new TCP connection, on which the stream IDs start over;
If the server can no longer create a new stream ID, it sends a GOAWAY frame to the client; the client may then no longer create streams on that connection and has to open a new TCP connection. A sketch of parsing the 9-byte header follows.
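
As a minimal sketch (standard-library Python only; the function name and the sample bytes are made up for illustration), the fixed 9-byte frame header can be unpacked like this:

    import struct

    def parse_frame_header(header: bytes):
        """Split the fixed 9-byte HTTP/2 frame header into its fields."""
        if len(header) < 9:
            raise ValueError("an HTTP/2 frame header is 9 bytes")
        hi, lo, frame_type, flags, raw_id = struct.unpack(">BHBBI", header[:9])
        length = (hi << 16) | lo            # 24-bit payload length
        reserved = raw_id >> 31             # 1 reserved bit, normally 0
        stream_id = raw_id & 0x7FFFFFFF     # 31-bit stream identifier
        return length, frame_type, flags, reserved, stream_id

    # An empty SETTINGS frame on stream 0: length=0, type=4 (SETTINGS), flags=0, stream_id=0
    print(parse_frame_header(bytes.fromhex("000000040000000000")))
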
5. New Features

A. Multiplexing

In HTTP/2.0, data is cut into small frames at the sending end so that the connection can be used efficiently.
In the HTTP/1.1 era, when keep-alive was not enabled, every request consumed its own TCP connection. HTTP/2 splits request and response messages into independent frames, sends them interleaved, and reassembles them at the receiving end. What does this buy us?
Interleaved requests and responses no longer block one another:

    • The keep-alive of the HTTP/1.1 era also reuses the same TCP connection, but because requests and responses are handled strictly in order, later resources are blocked behind earlier ones (no new request is sent until the previous response has arrived), as shown at the leftmost and rightmost of the comparison; even compared with HTTP pipelining, the improvement is huge:

      Unnecessary delay is removed and the network is used more fully (multiplexing, combined with resource prioritization/dependencies, lets the resources the page depends on most heavily be transmitted first);

B. Header compression
HTTP/2.0 uses HPACK to compress headers (a toy illustration of the indexing idea follows this list):

    • Values are Huffman-encoded;
    • Previously sent header fields are indexed; when a later header has the same name and value as one sent earlier, the sender simply references the earlier index instead of repeating the value;
    • Cookies: in HTTP/2.0, cookies also become indexed key-value pairs rather than one long string;
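
To make the indexing idea concrete, here is a toy sketch (this only illustrates the concept; it is not the real HPACK table layout or wire encoding, and the class name is made up):

    class ToyHeaderTable:
        """Toy dynamic table: repeated headers are sent as an index, not a literal."""

        def __init__(self):
            self.table = []                              # (name, value) pairs already sent

        def encode(self, name, value):
            entry = (name.lower(), value)
            if entry in self.table:
                return ("index", self.table.index(entry) + 1)   # reference the earlier entry
            self.table.insert(0, entry)                  # new entries go to the front
            return ("literal", name, value)              # first occurrence is sent literally

    enc = ToyHeaderTable()
    print(enc.encode("cookie", "sid=abc123"))            # literal on first use
    print(enc.encode("cookie", "sid=abc123"))            # index on every repeat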

You can also read a colleague's popular-science article on HTTP/2.0 header compression, which includes data on how effective the compression is;

Here we also need to explain the pseudo-header fields:

Request:

    • :authority
    • :method
    • :path
    • :scheme

Response:

    • :status

All pseudo-header fields appear before the regular header fields, for example:
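
For example, a simple GET of https://www.qq.com/ carries these pseudo-header fields ahead of its ordinary headers:

    :method: GET
    :scheme: https
    :authority: www.qq.com
    :path: /

and the matching response starts with:

    :status: 200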

C. Resource prioritization/dependencies
Resource priorities and dependencies are expressed through stream weights and stream dependencies;

As can be seen, there is a column called Priority; the initial priority is assigned according to content type, e.g. HTML is Highest, CSS is High, and JS is Medium;
A stream's weight can be set between 1 and 256;
Stream dependencies make the relationships between resources explicit;
Note that weights and dependencies have to be understood correctly: they are only recommendations for how bandwidth and server/client processing resources should be allocated, and they do not guarantee any particular transmission order. Let's look at an HTTP/2.0 dependency-and-weight diagram:

By default, every stream in HTTP/2.0 depends on a virtual root stream (which does not actually exist). Weights are compared only among sibling streams; streams that are not siblings are not part of the same calculation. A worked example follows.
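
As a worked example (the stream names and weights here are hypothetical): if streams A and B share the same parent and have weights 12 and 4, A should receive roughly 12 / (12 + 4) = 3/4 of the available resources and B the remaining 1/4; a stream that depends on A is only given resources once A no longer needs them.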

D. Flow control
This is similar to TCP flow control, except that HTTP/2.0 flow control can apply to individual streams and frames, while TCP operates at the level of the whole connection. Note: at present, flow control applies only to DATA frames! The specification does not mandate a particular flow-control algorithm, but the mechanism works roughly as follows (a small accounting sketch follows this list):

    • Both endpoints advertise a flow-control window for sending and receiving;
    • Every DATA frame the sender transmits shrinks its window by the size of the frame; if the window is smaller than the frame, the frame must be split; if the window reaches 0, no DATA frame can be sent at all. The initial default window is 65535 bytes (in theory a window can be set as large as 2^31 - 1, i.e. 2147483647 bytes);
    • The receiver can send WINDOW_UPDATE frames to the sender, and the sender enlarges its window by the increment specified in the frame.
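
A minimal sketch of the sender-side bookkeeping for a single stream (the class and method names are made up for illustration and do not come from any real HTTP/2 library):

    DEFAULT_WINDOW = 65535               # initial flow-control window in bytes

    class SendWindow:
        def __init__(self, size=DEFAULT_WINDOW):
            self.size = size

        def send(self, frame_len):
            """Return how many bytes of a DATA frame may be sent right now."""
            allowed = min(frame_len, self.size)   # split the frame if the window is smaller
            self.size -= allowed
            return allowed

        def window_update(self, increment):
            """Apply a WINDOW_UPDATE frame received from the peer."""
            self.size += increment

    w = SendWindow()
    print(w.send(70000))    # only 65535 bytes allowed; the rest has to wait
    print(w.send(10000))    # window is 0, nothing may be sent
    w.window_update(16384)
    print(w.send(10000))    # the full 10000 bytes fit after the update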

E. Server push
Pushed resources must also obey the same-origin policy, which is judged via :authority;

As shown in the demo, the server is configured to push /examples/dashboard/d3.js when the requested path is /examples/dashboard. Now let's look at the packet capture:

Description:
When the client sends the request (to the path configured for push), the server sends back a PUSH_PROMISE frame and two HEADERS frames. From the stream identifiers, the first HEADERS frame carries stream ID 1, i.e. it reuses the requesting stream for its response (this is the response header of the HTML file); the second HEADERS frame is the response header of the pushed file.
By definition, streams initiated by the client have odd identifiers and streams initiated by the server have even identifiers, which is reflected in the capture;
So how is the ordering of stream 1 and stream 2 guaranteed? The specification contains this sentence:

Pushed streams initially depend on their associated stream.

That is, the resources the server pushes depend on the stream of the request that triggered the push; by the semantics of stream dependencies, the stream being depended on is delivered first, and only then the stream that depends on it;

What are the benefits of server push?
Pushed resources can be cached by the client;
Pushed resources can be reused across different pages;
Pushed resources are multiplexed just like everything else;
Pushed resources can be rejected by the client (after receiving a PUSH_PROMISE, the client may send RST_STREAM to refuse the push and tell the server to stop sending, although by that point some of the content may already have been transmitted);
Combined with flow control, server push enables some surprising tricks; I will leave that as a teaser and explain it in the next article :)

References
HTTP/2 feature popular-science articles;
"High Performance Browser Networking" -- Ilya Grigorik;
"Web Performance Authoritative Guide" -- translated by Lisongfeng;
The HTTP/2 topic on Jerry Qu's blog;
HTTP/2.0: RFC 7540;
The HTTP/2.0 protocol -- Baidu FEX.

This article comes from the public account: Little Time Teahouse (Tech Teahouse).

