HTTP/2 notes: streams and multiplexing



Zero. Objective

This section explains what a stream is in the HTTP/2 protocol and how it is used; in effect, it explains how HTTP/2 achieves multiplexing.

One. The relationship between streams and multiplexing

1. The concept of a stream

A stream is an independent, bidirectional sequence of frames exchanged between the client and the server within an HTTP/2 connection. Logically it can be regarded as one complete unit of interaction, expressing a full request-response exchange for a resource; when the business unit handled within the stream is finished, the stream's life cycle ends.

Features are as follows:

    • A single HTTP/2 connection can contain multiple concurrently open streams, with either endpoint interleaving frames from several streams

    • Streams can be established and used unilaterally, or shared, by either the client or the server

    • A stream can be closed by either endpoint

    • Frames are sent and received in order within a stream

    • A stream is identified by an integer in the range 1 to 2^31-1, assigned by the endpoint that creates the stream

    • Streams are logically parallel to one another and exist independently

2. Multiplexing

The concept of a stream was introduced to enable multiplexing: transmitting data for multiple business units simultaneously over a single connection. The logical view looks like this:

[Figure: one HTTP/2 connection carrying multiple streams — http://www.blogjava.net/images/blogjava_net/yongboy/Windows-Live-Writer/5264ea818301_D4ed/one_http2_connection_thumb_1.png]

The actual transfer might be this:

[Figure: actual frame interleaving on the wire — http://www.blogjava.net/images/blogjava_net/yongboy/Windows-Live-Writer/5264ea818301_D4ed/http2_multiplexing_real_thumb.png]

On the wire you only see frames; the stream itself is not directly visible.

A slightly more concrete analogy makes it easier to understand:

    1. Each frame can be thought of as a student and each stream as a group (the stream identifier is an attribute carried by every frame); the students in a class (a connection) are divided into several groups, and each group is assigned a different task.

    2. In HTTP/1.*, one request-response establishes a connection and closes it when finished; each group task needs its own class, so multiple group tasks need multiple classes, a 1:1 ratio.

    3. With HTTP/1.1 pipelining, the group tasks are queued and processed serially by a single thread; a later group has to wait for the earlier groups to finish before it gets a chance to run. Once one task takes too long, all the tasks behind it can only block, which is the blocking people usually talk about.

    4. In HTTP/2, multiple group tasks can run concurrently within the one class (strictly speaking concurrently, not in parallel). If one group's task is time-consuming, it does not affect the normal progress of the other groups (see the client sketch below).

    5. Maintaining the resources of one class is more economical than maintaining several classes, which is why multiplexing exists.

Combing through it this simply already makes things a bit clearer.
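
As a minimal sketch of what multiplexing means from the client side (not from the original post; the URL and the request count are placeholder assumptions), the following Go program fires several requests through one shared client. Against a server that speaks HTTP/2, they reuse a single TCP connection and travel as separate streams:

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        // One shared client: against an HTTP/2-capable server these requests
        // reuse a single TCP connection and travel as separate streams.
        client := &http.Client{}
        var wg sync.WaitGroup

        for i := 0; i < 5; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                // https://example.org/ is a placeholder endpoint.
                resp, err := client.Get("https://example.org/")
                if err != nil {
                    fmt.Println("request", n, "failed:", err)
                    return
                }
                defer resp.Body.Close()
                // resp.Proto reports "HTTP/2.0" when streams were actually used.
                fmt.Println("request", n, "->", resp.Proto, resp.Status)
            }(i)
        }
        wg.Wait()
    }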

3. Composition of the stream

The composition of a stream is shaped by the following factors:

    1. A stream's priority attribute suggests how the endpoints (client and server) should allocate resources according to the priority value: higher-priority streams should be processed first, lower-priority ones can queue a little; this mechanism helps guarantee that important data is handled first.

    2. The number of concurrent streams (streams existing at the same time) is recommended to be no less than 100 in the initial configuration.

    3. Flow control coordinates the use of network bandwidth; the receiving end advertises the rules and the sender must obey them.

    4. A stream has a complete life cycle, passing through different states from creation to final closure.

The overall composition of a stream is shown below:

[Figure: overall composition of a stream — http://www.blogjava.net/images/blogjava_net/yongboy/Windows-Live-Writer/5264ea818301_D4ed/http2_stream_thumb.png]

With the relationship between streams and multiplexing clear, let's go a bit deeper and look at the details of streams.

Two. Properties of the stream

1. Stream states / life cycle

[Figure: HTTP/2 stream state diagram — http://www.blogjava.net/images/blogjava_net/yongboy/Windows-Live-Writer/5264ea818301_D4ed/http2%20stream%20status_thumb.png]

Sending and receiving frames, together with the END_STREAM flag they carry, changes a stream's state. Because each end creates streams independently, without negotiation, the two ends can momentarily disagree about a stream's state; the unpleasant consequence is that the "closed" state is only loosely enforced after a RST_STREAM frame is sent, since frames already in flight take a little time to arrive.

The states, and the frames allowed in each:

    1. Idle, the initial state of every stream

    • Sending or receiving a HEADERS frame moves the stream to the "open" state

    • Sending a PUSH_PROMISE frame (which can only be sent on an existing stream) reserves an idle stream for pushing, putting it in the "reserved (local)" state

    • Receiving a PUSH_PROMISE frame on an existing stream reserves an idle stream locally, putting it in the "reserved (remote)" state

    • A HEADERS frame, possibly followed by zero or more CONTINUATION frames, that carries the END_STREAM flag moves the stream straight into a "half closed" state

    • Only HEADERS and PRIORITY frames may be received; anything else is a connection error of type PROTOCOL_ERROR

    2. Reserved, a stream reserved for a push that will be used later

    • Reserved (local): the server has sent a PUSH_PROMISE frame, reserving the stream locally as a push stream; it may only send HEADERS, RST_STREAM and PRIORITY frames, and may only receive RST_STREAM, PRIORITY and WINDOW_UPDATE frames

    • Reserved (remote): the client has received a PUSH_PROMISE frame, reserving the stream locally to receive the push; it may only send WINDOW_UPDATE, RST_STREAM and PRIORITY frames, and may only receive RST_STREAM, PRIORITY and HEADERS frames

    • If these conditions are not met, a connection error of type PROTOCOL_ERROR must be reported

    3. Open, in which both endpoints can send frames; senders must obey the flow-control advertisements of the peer

    • Either end can send a frame carrying the END_STREAM flag, moving the stream into a "half closed" state

    • Either end can send a RST_STREAM frame, moving the stream into the "closed" state

    4. Half closed

    • Half closed (local): the end that sent a frame carrying the END_STREAM flag sees the stream as half closed locally. It cannot send frames other than WINDOW_UPDATE, PRIORITY and RST_STREAM, but it can receive any type of frame; flow-control credit must still be provided via WINDOW_UPDATE to keep receiving flow-controlled frames, although WINDOW_UPDATE frames that arrive shortly after the END_STREAM frame was sent may be ignored. PRIORITY frames received in this state are used to reprioritize streams that depend on this one, a small but fiddly detail. Once a frame carrying the END_STREAM flag is received, the stream enters the "closed" state.

    • Half closed (remote): the end that received a frame carrying the END_STREAM flag sees the stream as half closed remotely. The receive flow-control window no longer needs to be maintained. It may only receive RST_STREAM, PRIORITY and WINDOW_UPDATE frames; anything else must be treated as a stream error of type STREAM_CLOSED. It can send frames of any type, subject to the stream's advertised flow-control limit. Once a frame carrying the END_STREAM flag is sent, the stream enters the "closed" state.

    • In either case, once a RST_STREAM frame is sent or received, the stream enters the "closed" state.

    5. Closed, the final state of a stream

    • Only PRIORITY frames may be sent, to reprioritize streams that depend on the closed stream

    • After receiving a RST_STREAM frame, an endpoint may only receive PRIORITY frames on the stream; anything else must be treated as a stream error of type STREAM_CLOSED

    • After sending a DATA or HEADERS frame carrying the END_STREAM flag, WINDOW_UPDATE or RST_STREAM frames can still be received for a short period; frames arriving a significant time later may be treated as an error

    • The endpoint must ignore WINDOW_UPDATE or RST_STREAM frames received in this state

    • After an endpoint sends a RST_STREAM frame, it must ignore any frames it then receives on the stream

    • DATA frames received after sending RST_STREAM still count toward the connection flow-control window: even though they can be ignored, they were sent before the sender saw the RST_STREAM, so the sender counts them against its flow-control window

    • An endpoint can receive a PUSH_PROMISE frame after it has sent RST_STREAM; even though the associated stream has been reset, the PUSH_PROMISE still puts the promised stream into a "reserved" state, so a RST_STREAM frame is needed to close an unwanted promised stream

Requirements are as follows:

    1. A frame that is not permitted in a given state must be handled as a connection error of type PROTOCOL_ERROR

    2. A PRIORITY frame can be sent or received in any stream state

    3. Frames of unknown types can be ignored
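
As a rough illustration only (a toy distilled from the list above, not the full RFC 7540 state machine; the type and function names are invented for this sketch), a simplified transition function in Go might look like this:

    package main

    import "fmt"

    // StreamState is a simplified model of the HTTP/2 stream life cycle.
    type StreamState int

    const (
        Idle StreamState = iota
        Open
        HalfClosedLocal
        HalfClosedRemote
        Closed
    )

    // transition applies a reduced rule set: HEADERS opens a stream,
    // END_STREAM half-closes the sending side, RST_STREAM closes immediately.
    func transition(s StreamState, frame string, sent bool, endStream bool) StreamState {
        switch {
        case frame == "RST_STREAM":
            return Closed
        case s == Idle && frame == "HEADERS":
            s = Open
        }
        if endStream {
            switch {
            case s == Open && sent:
                return HalfClosedLocal
            case s == Open && !sent:
                return HalfClosedRemote
            case (s == HalfClosedLocal && !sent) || (s == HalfClosedRemote && sent):
                return Closed
            }
        }
        return s
    }

    func main() {
        s := Idle
        s = transition(s, "HEADERS", true, true) // GET request: HEADERS + END_STREAM
        fmt.Println(s == HalfClosedLocal)        // true
        s = transition(s, "DATA", false, true)   // response body ends
        fmt.Println(s == Closed)                 // true
    }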

2. Stream identifiers
    1. A 31-bit unsigned integer, in the range 1 to 2^31-1

    2. Streams created by the client use odd identifiers; streams created by the server use even identifiers

    3. 0x0 identifies the connection control stream and cannot be used to create a new stream

    4. When upgrading from HTTP/1.1 to HTTP/2 via a 101 protocol switch, stream 0x1 represents the upgraded request and is placed in the "half closed (local)" state; after the upgrade, 0x1 cannot be used to create a new stream

    5. The identifier of a new stream must be greater than the identifiers of all streams already opened or reserved

    6. The first use of a new stream identifier implicitly closes any lower-numbered streams that are still in the "idle" state

    7. A used stream identifier cannot be reused

    8. If an endpoint exhausts its stream identifiers:

    • a client needs to close the connection and open a new one in order to create new streams

    • a server needs to send a GOAWAY frame to notify the client, forcing it to open a new connection
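
A minimal sketch of how an endpoint might hand out identifiers following these rules (the idAllocator type and its methods are invented for illustration, not part of any library):

    package main

    import (
        "errors"
        "fmt"
    )

    const maxStreamID = 1<<31 - 1 // stream identifiers are 31-bit unsigned integers

    // idAllocator hands out stream identifiers for one endpoint:
    // odd values for a client, even values for a server.
    type idAllocator struct {
        next uint32
    }

    func newIDAllocator(isClient bool) *idAllocator {
        if isClient {
            return &idAllocator{next: 1} // clients start at 1 (0x1 is consumed by an h2c upgrade, if one happened)
        }
        return &idAllocator{next: 2} // servers use even identifiers, e.g. for pushed streams
    }

    // alloc returns the next identifier: strictly increasing, never reused.
    func (a *idAllocator) alloc() (uint32, error) {
        if a.next > maxStreamID {
            // Identifiers exhausted: a client opens a new connection,
            // a server sends GOAWAY so the client opens a new one.
            return 0, errors.New("stream identifiers exhausted")
        }
        id := a.next
        a.next += 2
        return id, nil
    }

    func main() {
        c := newIDAllocator(true)
        for i := 0; i < 3; i++ {
            id, _ := c.alloc()
            fmt.Println("client stream", id) // 1, 3, 5
        }
    }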

3. Number of concurrent streams
    1. Either end can send a SETTINGS frame carrying the SETTINGS_MAX_CONCURRENT_STREAMS parameter to limit the number of streams the peer may have open concurrently

    2. After receiving it, the peer must comply with that maximum concurrency limit

    3. Streams in the "open" or "half closed" states count toward the limit

    4. Streams in a "reserved" state do not count toward the limit

    5. If an endpoint receives a HEADERS frame that would push the number of created streams past the limit, it must respond with a PROTOCOL_ERROR or REFUSED_STREAM error; which one depends on whether the endpoint wants to allow the peer to retry the request automatically

    6. If an endpoint wants to lower the limit set via SETTINGS_MAX_CONCURRENT_STREAMS below the number of streams currently open, it can either close the streams that exceed the new limit or let them run to completion (a small framing sketch follows)
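
The framing for this negotiation can be written directly with the Framer from golang.org/x/net/http2. The sketch below is hedged: it shows only the two frames in isolation (the connection preface, settings acknowledgement and HEADERS parsing are omitted), and the limit of 100 and stream identifier 203 are hypothetical values:

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/net/http2"
    )

    func main() {
        // In a real server these frames go to the TCP connection;
        // a buffer stands in here so the sketch is self-contained.
        var conn bytes.Buffer
        framer := http2.NewFramer(&conn, &conn)

        // Advertise how many streams the peer may keep open at once.
        _ = framer.WriteSettings(http2.Setting{
            ID:  http2.SettingMaxConcurrentStreams,
            Val: 100, // hypothetical limit
        })

        // If a HEADERS frame would push the peer past the limit, refuse that
        // stream; REFUSED_STREAM signals that the request can safely be retried.
        overLimitStreamID := uint32(203) // hypothetical stream identifier
        _ = framer.WriteRSTStream(overLimitStreamID, http2.ErrCodeRefusedStream)

        fmt.Printf("wrote %d bytes of frames\n", conn.Len())
    }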

4. Priority of the stream

Stream priority lets an endpoint tell its peer that it would like more resources devoted to a particular stream; it is only a suggestion, and there is no guarantee the peer will follow this non-binding request. The default weight is 16. When resources are limited, priority helps guarantee that essential data still gets transmitted.

The priority can be set or changed in two ways (see the framing sketch below):

    1. An endpoint can include priority information in the HEADERS frame that opens a new stream

    2. A stream's priority can be changed later with a standalone PRIORITY frame
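
A hedged sketch of the second case using the golang.org/x/net/http2 Framer (stream 5, its dependency on stream 3, and the weight are values invented for illustration):

    package main

    import (
        "bytes"

        "golang.org/x/net/http2"
    )

    func main() {
        var conn bytes.Buffer // stands in for the real connection in this sketch
        framer := http2.NewFramer(&conn, &conn)

        // Re-prioritize an already-open stream with a standalone PRIORITY frame.
        _ = framer.WritePriority(5, http2.PriorityParam{
            StreamDep: 3,     // parent stream this one depends on
            Weight:    15,    // zero-indexed on the wire; 15 encodes the default effective weight of 16
            Exclusive: false, // not inserting an exclusive dependency
        })
    }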

5. Stream dependencies
  1. Streams can depend on one another. By default every stream depends on stream 0x0, and a pushed stream depends on the stream that carried its PUSH_PROMISE.

  2. Each dependency carries a weight in the range 1 to 256; child nodes that depend on the same parent should be allocated resources in proportion to their weights.

  3. A stream that is newly made dependent on a parent is added alongside the parent's existing children; the order among siblings is not fixed. For example, stream D is made dependent on stream A:

        A                 A
       / \      ==>      /|\
      B   C             B D C
  4. Once the exclusive flag is set, the new stream is inserted as the sole dependency of its parent: the parent's existing children become children of the newly inserted stream instead. For example, stream D depends on stream A with the exclusive flag set:

        A                 A
       / \      ==>       |
      B   C               D
                         / \
                        B   C
  5. Dependencies form a tree: a lower (dependent) stream should only be allocated resources when the streams above it are closed or unable to make progress

  6. A stream cannot depend on itself; that is a stream error of type PROTOCOL_ERROR

  7. Re-parenting a stream in the dependency tree, especially adding an exclusive dependency, causes the priorities to be reordered. In the example below, stream A (whose subtree contains B, C, D, E and F) is made dependent on its own descendant D: D is first moved up to A's former parent, then A and its subtree are placed under D; if the dependency is exclusive, D's existing child F also moves under A:

        x                x                x                 x
        |               / \               |                 |
        A              D   A              D                 D
       / \            /   / \            / \                |
      B   C    ==>   F   B   C   ==>    F   A       OR      A
         / \                 |             / \             /|\
        D   E                E            B   C           B C F
        |                                     |             |
        F                                     E             E
                   (intermediate)    (non-exclusive)    (exclusive)
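
A toy model of the insertion rules in plain Go (the node type and addDependency function are invented for illustration; it covers only adding a dependency under a parent, not the full re-parenting shown above):

    package main

    import "fmt"

    // node is one stream in a toy dependency tree.
    type node struct {
        id       uint32
        weight   int // effective weight, 1 to 256
        children []*node
    }

    // addDependency makes child depend on parent. With exclusive set, the
    // parent's existing children are re-parented under the new child first.
    func addDependency(parent, child *node, exclusive bool) {
        if exclusive {
            child.children = append(child.children, parent.children...)
            parent.children = nil
        }
        parent.children = append(parent.children, child)
    }

    func dump(n *node, indent string) {
        fmt.Printf("%sstream %d (weight %d)\n", indent, n.id, n.weight)
        for _, c := range n.children {
            dump(c, indent+"  ")
        }
    }

    func main() {
        // A has children B and C; D is then added with the exclusive flag,
        // so B and C end up below D.
        a := &node{id: 1, weight: 16}
        addDependency(a, &node{id: 3, weight: 16}, false) // B
        addDependency(a, &node{id: 5, weight: 16}, false) // C
        addDependency(a, &node{id: 7, weight: 16}, true)  // D, exclusive
        dump(a, "")
    }
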
6. Stream priority state management
    1. When any node is removed from the dependency tree, the priority ordering has to be rebuilt and resources reallocated

    2. Endpoints are advised to retain priority information for some time after a stream closes, to reduce potential misallocation

    3. A stream in the "idle" state can be given the default priority (weight 16), can become the parent of other streams, and can later be assigned a new priority value

    4. The priority state an endpoint holds is not bounded by SETTINGS_MAX_CONCURRENT_STREAMS, but an endpoint may cap how much priority state it maintains, for instance at no more than the number of streams defined by SETTINGS_MAX_CONCURRENT_STREAMS

    5. Under high load, priority state can be discarded to reduce resource consumption

    6. If an endpoint retains enough state, it can still act on a PRIORITY frame received for a closed stream and rebuild the priority ordering of that stream's children

7. Flow control

Multiplexing introduces contention for connection resources, and flow control keeps streams from seriously affecting one another. Flow control is implemented with WINDOW_UPDATE frames and applies both to a single stream and to the connection as a whole. Its main principles are:

    1. It is hop-by-hop, and directional

    2. It cannot be disabled

    3. The initial window size is 65,535 bytes, for each individual stream and for the connection as a whole

    4. It is driven by WINDOW_UPDATE frames: the receiving end advertises how many additional bytes it is prepared to receive on a stream or on the connection

    5. The receiver is fully in control: it advertises the window values for each stream and for the connection, and the sender must follow them

    6. Currently only DATA frames are subject to flow control, and only their payload counts against the window; when the window is exhausted, a DATA frame may still be sent if its payload is empty (see the sketch below)
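
A hedged sketch of the receiver's side of this exchange with the golang.org/x/net/http2 Framer (stream 7 and the 16 KiB figure are hypothetical; reading and consuming the DATA frames is omitted):

    package main

    import (
        "bytes"

        "golang.org/x/net/http2"
    )

    func main() {
        var conn bytes.Buffer // stands in for the real connection in this sketch
        framer := http2.NewFramer(&conn, &conn)

        // After consuming, say, 16 KiB of DATA payload from stream 7, the
        // receiver hands the credit back at both levels so the sender can continue.
        consumed := uint32(16 * 1024)
        _ = framer.WriteWindowUpdate(7, consumed) // stream-level window
        _ = framer.WriteWindowUpdate(0, consumed) // connection-level window (stream 0)
    }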

Things to note:

    1. Flow control addresses head-of-line blocking within the connection and, when resources are constrained, protects some operations so they can proceed smoothly: on a single connection one stream may be blocked or slow to process without affecting the data transfer of the other streams

    2. Although flow control can limit the memory consumed by a peer, it may be impossible to make full use of the network without knowing the bandwidth-delay product of the path

    3. The flow control mechanism is complex, with many details to consider, and it is genuinely difficult to implement well

Three. Summary

The stream concepts and properties defined in the HTTP/2 specification are complex. Under large request volumes and massive concurrency, this series of new features, connection-level flow control + per-stream flow control + stream states + stream priority attributes + priority state + the stream dependency tree model, may lead to the following:

    1. Per-connection memory usage on both the server and the client is high, and the cost of maintaining a long-lived connection is several times what it used to be

    2. Flow control is complicated to implement; done badly, one end can exhaust its window and have to wait for the peer to advertise a new flow-control window value. If hot data is waiting to be sent, that wait is a real cost and effectively adds extra interaction steps

    3. Stream dependencies, priority reordering and the like quietly increase the complexity of the program; handled badly, they can trigger latent bugs

    4. For performance and memory reasons, many well-known applications do not necessarily implement all of the features; some of the advanced stream features are somewhat idealistic. The current implementation list at https://github.com/http2/http2-spec/wiki/Implementations gives a sense of this

    5. In practice, non-browser environments such as HTTP APIs really only need some of the key features, which is a sensible choice

    6. All of this state has to be maintained, and that matters whether you scale horizontally or vertically; statelessness is what scales best


