The difference between HTTP1.0, HTTP1.1 and HTTP2.0

Reposted from: https://mp.weixin.qq.com/s/GICbiyJpINrHZ41u_4zT-A

Author | A curious Mao

Address | http://www.jianshu.com/p/be29d679cbff

Statement | This article is an original work by A Curious Mao, published with authorization; do not reprint without the original author's permission.

I. The history of HTTP

When HTTP was first established, its main purpose was to transfer Hypertext Markup Language (HTML) documents from the web server to the client's browser. That is to say, for the front end, the HTML pages we write are placed on our web server, and the user accesses a URL through the browser to get the page content to display. But by Web 2.0 our pages had become complex: not just simple text and pictures, our HTML pages also carry CSS and JavaScript to enrich what the page shows. With the advent of Ajax we gained a way to fetch data from the server, which is likewise based on the HTTP protocol. In the mobile internet era our pages can also run in mobile browsers, but compared with the PC the network conditions of a phone are more complicated, which makes it necessary for us to understand HTTP and keep optimizing it.

II. Basic optimization of HTTP

There are two main factors that affect an HTTP network request: bandwidth and latency.

    • Bandwidth: if we were still in the dial-up era, bandwidth could be a serious bottleneck for requests, but now that network infrastructure has greatly improved bandwidth, we no longer worry about bandwidth limiting speed, so only latency remains.

    • Latency:

      • Browser blocking (HOL blocking): the browser blocks requests for various reasons. A browser can only open about 4 connections to the same domain at a time (this may vary depending on the browser); requests beyond the browser's connection limit are blocked.

      • DNS lookup: the browser needs to know the IP of the destination server before it can establish a connection. DNS is the system that resolves the domain name to an IP; caching DNS results is usually used to reduce this time.

      • Establishing the connection (initial connection): HTTP is based on the TCP protocol. At the earliest, the browser can only piggyback the HTTP request packet on the third packet of the handshake to achieve a real connection, but these connections cannot be reused, so every request goes through the three-way handshake and slow start. The three-way handshake is especially noticeable in high-latency scenarios, and slow start has a large impact on requests for big files. (A rough timing sketch of these two latency components follows this list.)
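
Below is a minimal sketch, assuming Python's standard library and a hypothetical target host, that separately times the two latency components just described: the DNS lookup and the TCP connection setup.

```python
import socket
import time

HOST, PORT = "www.example.com", 80   # hypothetical target host

t0 = time.perf_counter()
ip = socket.gethostbyname(HOST)      # DNS lookup: domain name -> IP
t1 = time.perf_counter()

sock = socket.create_connection((ip, PORT), timeout=5)  # TCP three-way handshake
t2 = time.perf_counter()
sock.close()

print(f"DNS lookup : {(t1 - t0) * 1000:.1f} ms")
print(f"TCP connect: {(t2 - t1) * 1000:.1f} ms")
```

On a warm DNS cache the first number shrinks dramatically, which is exactly the optimization mentioned in the DNS lookup item above.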

III. Some differences between HTTP1.0 and HTTP1.1

HTTP1.0 was first used for web pages in 1996, when pages and requests were relatively simple. HTTP1.1 began to be widely used for browser network requests in 1999 and is still the most widely used HTTP protocol today. The main differences are reflected in:

  1. Cache handling: HTTP1.0 mainly uses the If-Modified-Since and Expires headers as the criteria for caching; HTTP1.1 introduces more cache-control strategies, such as Entity tag, If-Unmodified-Since, If-Match and If-None-Match, offering more optional cache headers to control the caching policy.

  2. Bandwidth optimization and use of network connections: HTTP1.0 wastes bandwidth in some cases, for example when the client needs only part of an object but the server sends the whole object over, and it does not support resuming interrupted transfers. HTTP1.1 introduces the Range header in the request, which allows requesting only a portion of a resource; the return code is 206 (Partial Content). This gives developers the freedom to make full use of bandwidth and connections (see the sketch after this list).

  3. Error notification management: HTTP1.1 adds 24 new error status codes, such as 409 (Conflict), which indicates that the requested resource conflicts with the current state of the resource, and 410 (Gone), which indicates that a resource on the server has been permanently deleted.

  4. Host header handling: in HTTP1.0 each server is considered to be bound to a unique IP address, so the URL in the request message does not carry a hostname. With the development of virtual hosting, however, multiple virtual hosts (multi-homed web servers) can exist on one physical server and share one IP address. Both HTTP1.1 request messages and response messages should support the Host header field, and a request message without one is rejected with an error (400 Bad Request).

  5. Long connections: HTTP1.1 supports persistent connections and request pipelining, so multiple HTTP requests and responses can be delivered over one TCP connection, reducing the cost and latency of establishing and closing connections. Connection: keep-alive is enabled by default in HTTP1.1, which partly compensates for HTTP1.0 having to create a new connection for every request.
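
The following is a minimal sketch of items 2, 4 and 5, using Python's standard http.client and assuming a hypothetical host that keeps the connection open (the HTTP/1.1 default) and supports range requests: one persistent HTTP/1.1 connection carries two requests, the second asking for only the first 100 bytes via the Range header.

```python
import http.client

conn = http.client.HTTPConnection("www.example.com", 80)  # hypothetical host

# First request: http.client speaks HTTP/1.1 and sends the Host header (item 4)
conn.request("GET", "/")
resp = conn.getresponse()
resp.read()
print(resp.status)                                   # e.g. 200

# Second request reuses the same TCP connection (keep-alive, item 5)
# and asks for only part of the resource (Range / 206, item 2)
conn.request("GET", "/big-file.bin", headers={"Range": "bytes=0-99"})
resp = conn.getresponse()
body = resp.read()
print(resp.status, len(body))                        # 206 and 100 bytes if supported

conn.close()
```

If the server cooperates, only one TCP handshake is paid for both requests, and only the requested byte range is transferred.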

IV. Some differences between HTTPS and HTTP

    • The HTTPS protocol requires applying to a CA for a certificate; free certificates are rare, so a fee is generally required.

    • The HTTP protocol runs directly on top of TCP and everything transmitted is plaintext; HTTPS runs on top of SSL/TLS, which in turn runs on top of TCP, and everything transmitted is encrypted (a minimal sketch follows this list).

    • HTTP and HTTPS use completely different connection methods and different ports: the former uses 80, the latter 443.

    • HTTPS can effectively prevent carrier hijacking, solving a big problem with hijacking.
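
A minimal sketch of the layering just described, assuming Python's standard library and a hypothetical host: HTTPS is simply HTTP spoken inside a TLS channel on port 443, so we open a TCP socket, wrap it in TLS, and only then send the plaintext HTTP request (which goes out encrypted on the wire).

```python
import socket
import ssl

HOST = "www.example.com"   # hypothetical host

ctx = ssl.create_default_context()                         # verifies the server certificate
with socket.create_connection((HOST, 443)) as raw:         # TCP (port 443 for HTTPS)
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:  # TLS handshake on top of TCP
        print(tls.version())                               # e.g. 'TLSv1.3'
        tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(64))                                # start of the HTTP response
```

The same GET sent to port 80 over a plain socket would travel as readable plaintext, which is exactly what carriers or middleboxes can tamper with.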

V. SPDY: the optimization of HTTP1.x

In 2012 Google, like a thunderclap, proposed the SPDY solution, which optimizes the request latency of HTTP1.x and addresses the security of HTTP1.x, as follows:

    1. Reduced latency: for HTTP's high-latency problem, SPDY elegantly adopts multiplexing. Multiplexing lets multiple request streams share one TCP connection, which solves the HOL blocking problem, reduces latency and improves bandwidth utilization.

    2. Request prioritization: a new problem brought by multiplexing is that, on a shared connection, critical requests may be blocked. SPDY allows a priority to be set for each request, so important requests get a response first. For example, when the browser loads the home page, the HTML content of the page should be displayed first, and then the various static resource files and script files are loaded, so the user can see the page content as quickly as possible.

    3. Header compression: as mentioned above, HTTP1.x headers are often repetitive and redundant. Choosing a suitable compression algorithm can reduce the size and number of packets.

    4. Encrypted transmission based on HTTPS greatly improves the reliability of the transmitted data.

    5. Server push: with a page served over SPDY, for example, my page has a request for style.css; while the client is receiving the style.css data, the server also pushes the style.js file to the client, and when the client later tries to fetch style.js it can get it directly from the cache without sending another request. SPDY architecture diagram:

SPDY sits below HTTP and above TCP and SSL, which makes it easy to stay compatible with older versions of the HTTP protocol (the content of HTTP1.x is encapsulated into a new frame format) while using the existing SSL features.

VI. HTTP2.0's amazing performance

HTTP/2: The future of the Internet (https://http2.akamai.com/demo) is an official demo by Akamai that shows the significant performance improvement of HTTP/2 over the previous HTTP/1.1: both versions request 379 images at the same time, and the comparison of load times makes HTTP/2's speed advantage obvious.

VII. HTTP2.0: an upgraded version of SPDY

HTTP2.0 can be regarded as an upgraded version of SPDY (it was in fact originally designed on the basis of SPDY), but there are still differences between HTTP2.0 and SPDY, as follows:


The differences between HTTP2.0 and SPDY:

    1. HTTP2.0 supports plaintext HTTP transmission, while SPDY mandates the use of HTTPS.

    2. The compression algorithm for HTTP2.0 message headers uses HPACK (http://http2.github.io/http2-spec/compression.html) rather than the DEFLATE (http://zh.wikipedia.org/wiki/DEFLATE) used by SPDY.

VIII. New features of HTTP2.0 compared to HTTP1.x

    • New binary framing: HTTP1.x parsing is text-based. A protocol format based on text has natural robustness defects, since text can take many different forms and many cases must be considered to parse it robustly. Binary is different: it recognizes only combinations of 0 and 1. Based on this consideration, HTTP2.0 adopts a binary format for protocol parsing, which is both convenient and robust.

    • Multiplexing, i.e. connection sharing: every request uses the connection-sharing mechanism. Each request corresponds to an ID, so one connection can carry multiple requests, the requests on a connection can be freely interleaved, and the receiver attributes each request to the correct server-side request according to its ID (a rough sketch follows this list).

    • Header compression: as mentioned above, HTTP1.x headers carry a lot of information and have to be re-sent every time. HTTP2.0 uses an encoder to reduce the size of the headers that need to be transmitted, and each side caches a header fields table, which avoids repeated header transmission and reduces the transmitted size.

    • Server push: like SPDY, HTTP2.0 also has the server push feature.
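
Below is a rough sketch of multiplexing, assuming the third-party httpx library (HTTP/2 support requires installing it with the h2 extra) and the Akamai demo site mentioned above: several requests are issued concurrently, yet they all share a single TCP+TLS connection, each travelling as its own stream.

```python
import asyncio
import httpx  # third-party; HTTP/2 support needs `pip install httpx[http2]`

async def main():
    async with httpx.AsyncClient(http2=True) as client:
        urls = ["https://http2.akamai.com/demo"] * 3      # three concurrent requests
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.http_version, r.status_code)          # expect: HTTP/2 200

asyncio.run(main())
```

With HTTP/1.1 the same three concurrent requests would either open three connections or queue behind one another; here they are interleaved on one connection by stream ID.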

IX. Upgrading to HTTP2.0

    • As mentioned above, HTTP2.0 can in fact support non-HTTPS deployment, but mainstream browsers such as Chrome and Firefox currently only support HTTP2.0 deployed over TLS, so to upgrade to HTTP2.0 it is better to upgrade to HTTPS first.

    • Once your website has been upgraded to HTTPS, upgrading to HTTP2.0 is much simpler. If you use Nginx, you only need to enable the corresponding protocol in the configuration file; see the Nginx white paper and the official Nginx guide for configuring HTTP2.0: https://www.nginx.com/blog/nginx-1-9-5/.

    • After switching to HTTP2.0, what about the original HTTP1.x clients? There is actually no need to worry: HTTP2.0 is fully compatible with HTTP1.x semantics, and for browsers that do not support HTTP2.0, Nginx automatically falls back to HTTP1.x.

X. Notes

What is the difference between HTTP2.0 multiplexing and the long-connection reuse (keep-alive) of HTTP1.x?

    • HTTP/1.* uses one request and one response per connection: establish a connection, use it, close it; every request has to establish its own connection.

    • HTTP/1.1 pipelining tried to solve this: several requests are queued and processed serially by a single thread, and later requests must wait for earlier responses before they get a chance to execute. Once one request times out, the subsequent requests can only be blocked with no way out; this is what people often call head-of-line (thread) blocking.

    • In HTTP/2, multiple requests can be executed concurrently on one connection; one time-consuming request does not affect the normal execution of the others.

What exactly is server push?
With server push, the server can send the resources the client needs along with index.html, sparing the client the step of requesting them again. Since there is no extra request to initiate and no extra connection to establish, pushing static resources from the server can greatly improve speed. Specifically:

    • The normal client request flow: (diagram)

    • The server push flow: (diagram)

Why is header compression needed?
Suppose a page has 100 resources to load (quite conservative for today's web) and every request carries a 1KB message header (also not uncommon, because of things like cookies and referrers); then at least 100KB is needed just to obtain these headers. HTTP2.0 can maintain a dictionary and send only the differences between HTTP headers, greatly reducing the traffic generated by header transmission. See: HTTP/2 Header Compression Technology Introduction.
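
A rough illustration of the effect, assuming Python's zlib: HTTP/2 itself uses HPACK, a table-based scheme, but a stateful DEFLATE stream (which is what SPDY used) shows the same idea, namely that headers repeated across requests compress down to almost nothing after the first one.

```python
import zlib

# A typical request header block, padded with a large cookie (hypothetical values)
header_block = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
    b"Accept: text/html,application/xhtml+xml\r\n"
    b"Cookie: session=" + b"x" * 512 + b"\r\n\r\n"
)

compressor = zlib.compressobj()          # one shared compression context for all requests
sizes = []
for _ in range(100):                     # the 100 requests from the example above
    chunk = compressor.compress(header_block) + compressor.flush(zlib.Z_SYNC_FLUSH)
    sizes.append(len(chunk))

print("raw size per request      :", len(header_block), "bytes")
print("first compressed request  :", sizes[0], "bytes")
print("later compressed requests :", sizes[-1], "bytes")   # tiny: only the repetition remains
print("total raw vs compressed   :", len(header_block) * 100, "vs", sum(sizes))
```

Because the compression context is shared across requests, each repeated header block after the first costs only a handful of bytes, which is the same kind of saving HPACK's header table provides.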

How much does HTTP2.0 multiplexing help?
The key to HTTP performance optimization is not high bandwidth but low latency. A TCP connection tunes itself over time: at first the maximum transfer rate is limited, and if the data is transmitted successfully the rate increases over time. This tuning is called TCP slow start. Because of it, HTTP connections, which are inherently bursty and short-lived, become very inefficient.
HTTP/2 lets all data streams share the same connection, so TCP is used more efficiently and high bandwidth can truly serve HTTP performance improvements.
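
A back-of-the-envelope sketch of why slow start penalizes short-lived connections, using assumed parameters (a 10-segment initial congestion window, 1460-byte segments, window doubling every round trip):

```python
def rtts_for(response_bytes, init_cwnd=10, mss=1460):
    """Count round trips a fresh TCP connection needs to deliver a response."""
    cwnd, sent, rtts = init_cwnd, 0, 0
    while sent < response_bytes:
        sent += cwnd * mss   # one congestion window of data per round trip
        cwnd *= 2            # slow start: the window doubles every RTT
        rtts += 1
    return rtts

for size in (14_600, 100_000, 1_000_000):
    print(f"{size:>9} bytes -> {rtts_for(size)} round trips on a cold connection")
```

A page that opens a new connection per resource pays this ramp-up every time; a single multiplexed HTTP/2 connection pays it once and then stays at full speed.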

XI. References

What are the major improvements of HTTP/2.0 compared to 1.0?
In-depth study: what exactly is HTTP/2's real performance?
HTTP/2 Header Compression Technology Introduction
