HTTP protocol: differences between versions (1.0, 1.1, 2.0)

Source: Internet
Author: User
Tags: http2

HTTP 1.0

    • Short-lived connections
      Each request establishes a new TCP connection, which is closed as soon as the response is delivered. This causes two problems: the connection cannot be reused, and head-of-line blocking.
      Because connections cannot be reused, every request pays the cost of a TCP three-way handshake and slow start. The handshake cost is most noticeable in high-latency scenarios, and slow start has the largest impact on file transfers. Head-of-line blocking leaves bandwidth underutilized and blocks subsequent requests behind the current one.
HTTP 1.1

Created to address the pain points of HTTP 1.0.
    • Persistent connections
      Multiple HTTP requests can reuse a single TCP connection (keep-alive is the default in 1.1). HTTP pipelining additionally allows several requests to be sent before the first response arrives, with the server answering them in FIFO order.
    • Connection header
      This header tells the peer how to manage the underlying TCP connection: Connection: close requests a short-lived connection, while Connection: keep-alive requests a persistent one.
    • Identity verification
    • State management
    • Caching and other mechanisms tied to request and response headers
    • Host header
      Required in HTTP 1.1, allowing multiple virtual hosts to share a single IP address.
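To make the Connection header concrete, here is a minimal sketch that builds raw HTTP/1.1 request messages; `build_request`, the paths, and the host name are all hypothetical, chosen only to show how keep-alive and close are expressed on the wire.

```python
def build_request(method, path, host, keep_alive=True):
    """Build a raw HTTP/1.1 request; the Connection header controls reuse."""
    connection = "keep-alive" if keep_alive else "close"
    return (
        f"{method} {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"            # Host is mandatory in HTTP/1.1
        f"Connection: {connection}\r\n"
        "\r\n"
    )

# Two requests that could be sent back-to-back on one TCP connection;
# the final request asks the server to close the connection afterwards.
first = build_request("GET", "/index.html", "example.com")
last = build_request("GET", "/logo.png", "example.com", keep_alive=False)
```

In a pipelined exchange both messages could be written before the first response arrives, with the server replying in the same order.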

HTTP 2.0

    • Multiplexing
      Multiplexing allows multiple request-response exchanges to be in flight simultaneously over a single HTTP/2 connection. Under HTTP/1.1, browsers limit the number of concurrent requests to the same domain, and requests beyond that limit are queued. This is one reason some sites shard static resources across multiple CDN domains (for example, http://twimg.com): it works around the browser's per-domain limit in disguise. HTTP/2 instead reduces the basic unit of communication to a frame, where each frame belongs to a message in a logical stream, and exchanges frames on a single TCP connection in parallel, in both directions. Multi-stream parallelism therefore no longer depends on opening multiple TCP connections.
    • Binary framing
      HTTP/2 adds a binary framing layer between the application layer (HTTP/2) and the transport layer (TCP). Without altering HTTP/1.x semantics, methods, status codes, URIs, or header fields, it lifts the performance limits of HTTP 1.1, improving transfer performance and enabling low latency and high throughput. In the framing layer, HTTP/2 splits all transmitted information into smaller messages and frames and encodes them in binary: the HTTP/1.x-style headers are carried in HEADERS frames, and the corresponding request or response body is carried in DATA frames.

HTTP/2 communication is done on a single connection that can carry any number of bidirectional streams. Historically, the key to HTTP performance optimization has not been higher bandwidth but lower latency. A TCP connection tunes itself over time: it initially limits the connection's throughput and increases it as data is delivered successfully. This tuning is known as TCP slow start, and it makes short-lived, bursty HTTP connections very inefficient. By letting all data streams share the same connection, HTTP/2 uses TCP more efficiently, so that high bandwidth can truly translate into HTTP performance gains.

This single-connection, multi-resource approach reduces connection pressure on the server, consumes less memory, and yields higher connection throughput. Fewer TCP connections also improve network congestion behavior, and fewer slow-start phases make congestion and packet-loss recovery faster.
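As a rough illustration of the framing layer described above, the sketch below packs and unpacks the 9-byte HTTP/2 frame header defined in RFC 7540 (24-bit payload length, 8-bit type, 8-bit flags, 31-bit stream identifier); the field values used are illustrative only.

```python
import struct

# Two of the standard HTTP/2 frame type codes (RFC 7540 §6).
DATA, HEADERS = 0x0, 0x1

def pack_frame_header(length, ftype, flags, stream_id):
    """Pack the 9-byte HTTP/2 frame header in network byte order."""
    return struct.pack("!BHBBL",
                       (length >> 16) & 0xFF, length & 0xFFFF,  # 24-bit length
                       ftype, flags,
                       stream_id & 0x7FFFFFFF)                  # clear reserved bit

def unpack_frame_header(header):
    """Inverse of pack_frame_header: returns (length, type, flags, stream_id)."""
    hi, lo, ftype, flags, stream_id = struct.unpack("!BHBBL", header)
    return ((hi << 16) | lo, ftype, flags, stream_id & 0x7FFFFFFF)

hdr = pack_frame_header(16, DATA, 0x1, 3)  # END_STREAM flag set, stream 3
assert len(hdr) == 9
assert unpack_frame_header(hdr) == (16, DATA, 0x1, 3)
```

Because every frame carries its stream identifier, frames from different streams can be interleaved on the wire and reassembled per stream at the receiver, which is what makes multiplexing on one connection possible.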

    • Header compression

HTTP/1.1 does not support HTTP header compression, which is what SPDY and HTTP/2 set out to fix: SPDY used the generic DEFLATE algorithm, while HTTP/2 uses HPACK, an algorithm designed specifically for header compression.
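To show why an indexing scheme beats resending headers verbatim, here is a toy HPACK-style dynamic table, a deliberate simplification of RFC 7541 that is not wire-compatible with real HPACK: the first time a header is sent it goes out literally and is stored, and repeats are replaced by a small index.

```python
class ToyHeaderTable:
    """Toy HPACK-style encoder: literal on first use, indexed afterwards."""

    def __init__(self):
        self.table = {}  # (name, value) -> index in the dynamic table

    def encode(self, name, value):
        key = (name, value)
        if key in self.table:                   # repeat: emit a tiny index
            return ("indexed", self.table[key])
        self.table[key] = len(self.table)       # first use: emit literally, remember it
        return ("literal", name, value)

enc = ToyHeaderTable()
# First request carries the full header; later requests shrink to an index.
assert enc.encode("user-agent", "curl/8.0") == ("literal", "user-agent", "curl/8.0")
assert enc.encode("user-agent", "curl/8.0") == ("indexed", 0)
```

Since headers like user-agent and cookie repeat on nearly every request of a browsing session, this kind of indexing is where most of HPACK's savings come from (real HPACK also has a static table and Huffman coding).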
    • Server push

Server push is a mechanism for sending data before the client requests it: in HTTP/2, the server can send multiple responses for a single client request. In the HTTP/1.x era, the workaround was to embed (inline) resources directly in the HTML document. With push, when a page is requested the server can respond with the home page content plus the logo and style sheet, because it knows the client is about to need them. This achieves the same effect as inlining all the resources into the HTML document, but with one big advantage: pushed resources can be cached, and cached resources can even be shared between different pages of the same origin.
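One common deployment pattern, an assumption beyond the original text, is to advertise pushable resources with a `Link: ...; rel=preload` response header, which front ends such as nginx (via its `http2_push_preload` directive) translate into PUSH_PROMISE frames. The helper below, `preload_links`, is a hypothetical name used only to build such a header.

```python
def preload_links(resources):
    """Build a Link header listing resources a front end may push.

    Servers such as nginx (http2_push_preload on) turn these preload
    hints into HTTP/2 PUSH_PROMISE frames for the listed resources.
    """
    return ", ".join(f"<{path}>; rel=preload; as={kind}"
                     for path, kind in resources)

hdr = preload_links([("/style.css", "style"), ("/logo.png", "image")])
```

Signaling push this way keeps the decision in the application while leaving the HTTP/2 mechanics to the front end, and a client that already has the resources cached can decline the push.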
