HTTP 2.0 Detailed Introduction


In the Internet world, HTTP is the most widely used network protocol, and the recent birth of HTTP/2.0 has made it the focus of the technology community once again. Behind the retirement of anything old and the birth of anything new there is a driving force. For HTTP, that force is, in terms of technical detail, the evolution of the protocol itself; put simply, it is the evolution of user experience. Users always want the information on the network to reach their eyes as quickly as possible, the faster the better, and it is this pursuit of "fast" that gave birth to today's HTTP/2.0.

1. The Past Lives of HTTP/2.0

HTTP/2.0's past lives are its older brothers http1.0 and http1.1. Although there were only two earlier versions, they already contained a protocol large enough to give any experienced engineer a headache. http1.0 was born in 1996, with a specification of 60 pages; http1.1 arrived three years later, and its specification swelled to 176 pages. Unlike a mobile app upgrade, however, a new version of a network protocol does not immediately replace the old one. In fact, 1.0 and 1.1 coexisted for a long time, because network infrastructure updates slowly. The same is true for HTTP/2.0 today: the new protocol still needs to be tempered by the industry's products, and the infrastructure it depends on will take years rather than months to upgrade and popularize.

1.1 HTTP stands on top of TCP

Before digging into HTTP, it helps to have a basic understanding of TCP. HTTP is built on top of TCP, and as a transport-layer protocol TCP is not far below the application layer. The bottlenecks of HTTP, and the techniques used to optimize it, are all rooted in characteristics of TCP itself. For example, establishing a TCP connection requires a three-way handshake costing about 1.5 RTT (round-trip time), so the application layer adopts long-lived connection strategies to avoid paying the handshake delay on every request. Likewise, TCP has a slow-start phase at the beginning of every connection, so a reused connection always performs better than a brand-new one.

1.2 HTTP application scenarios

In its early days HTTP was mainly used to fetch web content. Content then was not as rich as it is now, layouts were not as polished, and user interaction was almost nonexistent; for that simple content-fetching scenario HTTP worked well. But with the growth of the Internet and the birth of Web 2.0, pages began to display more content (more image files), typesetting became more refined (more CSS), and more complex interaction was introduced (more JS). The amount of data loaded and the number of requests issued when a user opens a site's home page keep growing. Today most portal home pages exceed 2 MB in size, and the number of requests can reach 100. Another major application is the mobile client app, and different kinds of apps use HTTP very differently. For an e-commerce app, loading the home page may take ten or more requests; for an IM app such as WeChat, HTTP requests may be limited to downloading voice and image files, and the request frequency is not high.

1.3 Slow because of latency

Two main factors affect a network request: bandwidth and latency. Today's network infrastructure has greatly improved bandwidth, so most of the time it is latency that limits response speed. The two most-complained-about problems of http1.0 are that connections cannot be reused, and head-of-line blocking. Understanding both requires an important premise: the client establishes connections to the server per domain name. A desktop browser will generally open 6 to 8 concurrent connections per domain, while mobile clients usually keep the number at 4 to 6. Obviously more connections are not always better: both resource overhead and overall latency increase with the connection count.

When connections cannot be reused, every request pays for a three-way handshake and a slow start. The handshake cost is most visible in high-latency scenarios, and slow start hurts large file requests the most.

Head-of-line blocking leaves bandwidth underutilized and lets one slow request block the healthy requests behind it. Suppose 5 requests are issued simultaneously, as shown in the following figure:

In http1.0, before the response to the first request comes back, subsequent requests can only queue at the application layer: requests 2, 3, 4 and 5 must wait for request 1's response before being sent out one by one. When the network is unobstructed the impact on performance is small, but once request 1 fails to reach the server for some reason, or its response is delayed by network congestion, all subsequent requests are affected and the problem becomes serious.

1.4 Solving the connection-reuse problem

In http1.0 the header Connection: keep-alive can be set. With keep-alive in the header, the connection can be reused for a certain period of time; the exact reuse window is controlled by the server and is typically around 15 s. From http1.1 on, keep-alive is the default, and connection reuse must be turned off explicitly with Connection: close. Reusing connections for a short window helps desktop browsers a lot, because most of their requests are concentrated in a short burst. For mobile apps the benefit is smaller: app requests are more dispersed and spread over a longer time span, so mobile apps usually look for other solutions at the application layer, using real or pseudo long-connection schemes:
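The header semantics above can be sketched in a few lines of Python. This just formats raw request text to show where `Connection: keep-alive` and `Connection: close` fit in each protocol version (the host and path are placeholders):

```python
def build_request(path, host, version="1.1", reuse=True):
    """Build a raw HTTP GET request, making connection reuse explicit.

    HTTP/1.0 defaults to closing the connection, so keep-alive must be
    requested; HTTP/1.1 defaults to keep-alive, so only closing needs a header.
    """
    lines = [f"GET {path} HTTP/{version}", f"Host: {host}"]
    if version == "1.0" and reuse:
        lines.append("Connection: keep-alive")   # opt in to reuse
    elif version == "1.1" and not reuse:
        lines.append("Connection: close")        # opt out of reuse
    return "\r\n".join(lines) + "\r\n\r\n"

print(build_request("/", "example.com", version="1.0", reuse=True))
```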

Scheme One: long connections based on TCP

More and more mobile apps now build their own long-connection channel, implemented directly on top of TCP. Socket programming over raw TCP is more complicated and requires designing your own protocol, but the payoff is large: information upload and push become more timely, and server pressure at traffic peaks is reduced (the HTTP short-connection model creates and destroys connections frequently). It is not only IM apps that maintain such channels; e-commerce apps such as Taobao have their own dedicated long-connection channels too. The industry also has a number of mature building blocks to choose from; Google's protobuf is one of them.

Scheme Two: HTTP long-polling

Long-polling works as shown in the following figure:

In the initial state the client sends a polling request to the server, and the server does not return business data immediately; instead it waits, and responds only when new data is produced. The connection is therefore held open the whole time, and as soon as one poll ends the client starts a new one, over and over, so that there is always a connection being maintained. When the server has new content, it does not need to wait for the client to establish a new connection. The approach is simple, but building a stable and reliable business framework on it means overcoming several challenges:

    1. Compared with traditional HTTP short connections, the held-open connections greatly increase server pressure as the user base grows.
    2. The mobile network environment is complex: Wi-Fi/4G switching, elevators briefly cutting off the network, and so on. These scenarios require a strategy for rebuilding a healthy connection channel.
    3. The timing of polls is hard to keep stable, and data reliability must be guaranteed, for example with retransmission and ACK mechanisms.
    4. Polling responses may be cached by intermediate proxies, so a mechanism for expiring stale business data is needed.

Long-polling also has shortcomings that cannot be engineered away: every new poll carries a duplicate set of header information, and the data channel is one-way with the initiative on the server side, so new business requests from the client cannot be delivered promptly.
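A minimal sketch of the client side of long-polling, with the HTTP round trip replaced by an injected `fetch` stub so the loop's shape is visible. In a real implementation `fetch` would be a blocking HTTP request with a server-side timeout:

```python
import queue

def long_poll(fetch, handle, max_rounds):
    """Repeatedly issue a poll; `fetch` blocks until data or timeout.

    `fetch` stands in for an HTTP request that the server parks until new
    business data exists; returning None models a poll timeout, after which
    the client simply polls again.
    """
    for _ in range(max_rounds):
        data = fetch()          # blocks server-side until data or timeout
        if data is not None:
            handle(data)        # deliver, then immediately re-poll

# Simulate a server with a message queue: two messages, then only timeouts.
pending = queue.Queue()
pending.put("msg-1"); pending.put("msg-2")

def fake_fetch():
    try:
        return pending.get_nowait()
    except queue.Empty:
        return None             # poll timed out with no new data

received = []
long_poll(fake_fetch, received.append, max_rounds=5)
print(received)                 # → ['msg-1', 'msg-2']
```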

Scheme Three: HTTP streaming

The HTTP streaming process is roughly as follows:

Unlike long-polling, the server does not end the initial streaming request; it keeps returning the latest business data through the same channel. Obviously this data channel is again one-way. Streaming works by putting Transfer-Encoding: chunked in the server's response header to tell the client that more data will keep arriving. Besides sharing long-polling's difficulties, streaming has several drawbacks of its own:

Some proxy servers wait for the server's response to end before pushing the result on to the requesting client. A streaming response never ends, so the client can be stuck waiting for a response forever.

Business data is not delimited per request, so when the client receives a block of data it must parse it itself; in other words, you end up defining your own protocol on top of the stream.

On the plus side, streaming does not produce duplicate header data.
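The Transfer-Encoding: chunked framing that streaming relies on is simple enough to sketch. A minimal encoder/decoder pair following the hex-length/CRLF layout from RFC 7230:

```python
def encode_chunked(chunks):
    """Encode byte chunks per Transfer-Encoding: chunked (RFC 7230):
    hex length, CRLF, data, CRLF, terminated by a zero-length chunk."""
    out = b""
    for c in chunks:
        out += f"{len(c):x}".encode() + b"\r\n" + c + b"\r\n"
    return out + b"0\r\n\r\n"

def decode_chunked(data):
    """Decode a chunked body back into the list of chunks."""
    chunks, pos = [], 0
    while True:
        eol = data.index(b"\r\n", pos)
        size = int(data[pos:eol], 16)
        if size == 0:
            return chunks
        start = eol + 2
        chunks.append(data[start:start + size])
        pos = start + size + 2          # skip the chunk's trailing CRLF

body = encode_chunked([b"hello", b"streaming", b"world"])
assert decode_chunked(body) == [b"hello", b"streaming", b"world"]
```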

Scheme Four: WebSocket

WebSocket is similar to a traditional TCP socket connection and is likewise built on TCP, providing a bidirectional data channel. Its advantage is that it offers the concept of a message, which is simpler to use than a byte-stream TCP socket, while providing the long-connection capability that traditional HTTP lacks. WebSocket is relatively new: it was drafted in 2010, and not every browser supports it, although the latest versions from the major browser vendors all do.
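One concrete, verifiable piece of the WebSocket upgrade handshake is how the server proves it understood the request. A sketch of the Sec-WebSocket-Accept computation defined in RFC 6455:

```python
import base64, hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID from RFC 6455

def ws_accept(client_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value the server must echo back:
    base64(SHA-1(client key + fixed GUID))."""
    digest = hashlib.sha1((client_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# The example key/accept pair given in RFC 6455 itself:
print(ws_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```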

1.5 Solving head-of-line blocking

Head-of-line blocking (hereafter HOLB) was the biggest source of trouble for the network experience before HTTP/2.0. As Figure 1 shows, healthy requests are dragged down by unhealthy ones, and the resulting loss of experience depends on the network environment, is random, and is hard to monitor. To address the delay caused by HOLB, the protocol designers devised a new mechanism: pipelining.

HTTP pipelining

Pipelining works as shown in the following figure:

The biggest difference from Figure 1 is that requests 2, 3, 4 and 5 no longer have to wait for request 1's response before being issued; they can be sent to the server almost simultaneously. Because all subsequent requests share the connection, the waiting time is saved and overall latency drops sharply. The following figure shows clearly how this new mechanism changes the latency picture:

But pipelining is no messiah; it has many flaws:

    1. Pipelining applies only to http1.1; generally speaking, a server that claims http1.1 support is required to support pipelining.
    2. Only idempotent requests (GET, HEAD) can be pipelined; non-idempotent requests such as POST cannot, because there may be ordering dependencies between requests.
    3. Head-of-line blocking is not fully solved: the server must return responses in order, following the FIFO (first-in-first-out) principle. In other words, if the response to request 1 has not come back, the responses to 2, 3, 4 and 5 will not be sent either.
    4. The vast majority of HTTP proxy servers do not support pipelining.
    5. Negotiating with older servers that do not support pipelining is problematic.
    6. It may introduce new front-of-queue blocking problems of its own.

Because of all these problems, the major browser vendors either do not support pipelining at all or ship with it disabled by default, and the conditions for enabling it are very strict. See Chrome's description of its pipelining problems for reference.

1.6 Other tricks

To escape the pain of latency, there are always clever people looking for shortcuts, and the flourishing of the Internet has spawned all sorts of novel techniques. Let us look at these shortcuts in turn, with their pros and cons.

spriting (image merging)

Spriting merges many small images into one large image, turning many small requests into a single large one, and then uses JS or CSS to crop out the small image actually needed. The benefit is obvious: fewer requests, therefore lower latency. The downsides are that the file gets large, and sometimes we need only one small image but must download the whole sprite. Caching also becomes awkward: if only one small image has expired, the complete large image must be re-downloaded from the server to get the latest version, even though the other images in it are still fresh, which is clearly a waste of traffic.

inlining (content inlining)

Inlining takes a similar angle to spriting: additional resources are Base64-encoded and embedded directly into another file. For example, a web page with a background image can embed it like this:

background: url(data:image/png;base64,...)

The data section is the Base64-encoded bytes of the image, which avoids an extra HTTP request. But the approach shares spriting's problem: the resource file is welded to another file, and cache granularity becomes hard to control.
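A sketch of how such a data URI is produced; the payload here is just a few stand-in bytes, not a real image:

```python
import base64

def to_data_uri(raw: bytes, mime="image/png") -> str:
    """Inline a resource as a data: URI so no extra HTTP request is needed."""
    return f"data:{mime};base64,{base64.b64encode(raw).decode()}"

# A real PNG's bytes would normally go here; any bytes show the mechanics.
pixel = b"\x89PNG\r\n\x1a\n"          # PNG magic bytes as stand-in payload
uri = to_data_uri(pixel)
css = f"background: url({uri});"
print(css)
```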

concatenation (file merging)

Concatenation mainly targets files such as JS. Front-end interaction keeps getting richer, and scattered JS files keep multiplying. Merging multiple JS files into one large file and compressing it reduces both latency and the amount of data transmitted. But it faces the same large-size problem: one small change to the JS causes the entire merged file to be re-downloaded.

Domain Sharding (domain name sharding)

As mentioned earlier, browsers and clients establish connections per domain name. For example, if only 2 simultaneous connections are allowed to www.example.com, mobile.example.com counts as a different domain and two more connections can be opened to it. By extension, setting up a few more subdomains lets the page issue more HTTP requests in parallel; this is domain sharding. With more connections available, queued requests no longer have to wait for earlier ones to finish before being issued. The technique is used heavily: on large pages whose request count can exceed 100, the number of connections built after domain sharding can reach 50 or more.

This of course increases system resource consumption, but hardware is upgrading quickly these days, and compared with the user's precious waiting time the cost is insignificant.

Domain sharding has another big benefit: static resource files generally do not need cookies, so scattering them across servers under different domain names also reduces request size.
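A sketch of how a build tool or template might assign resources to shards deterministically, so that each URL always maps to the same (hypothetical) subdomain and stays cacheable:

```python
import zlib

SHARDS = [f"static{i}.example.com" for i in range(4)]  # hypothetical subdomains

def shard_for(path: str) -> str:
    """Map each resource to a stable subdomain so the browser opens extra
    connection pools, while the same URL always hits the same shard
    (keeping it cacheable)."""
    return SHARDS[zlib.crc32(path.encode()) % len(SHARDS)]

print(shard_for("/img/logo.png"))
assert shard_for("/img/logo.png") == shard_for("/img/logo.png")  # deterministic
```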

However, domain sharding only pays off significantly when the request count is very high, and more requests is not always better: besides resource consumption, TCP slow start makes every new connection pay the slow-start cost, plus the three-way handshake and DNS lookup delays. The time lost here matters as much as time spent queueing, so balancing the two requires finding a reliable middle value for the connection count, determined through repeated testing. Domain sharding is not recommended for mobile browser scenarios, as detailed in this article.

2. The Pathfinder: SPDY

Although http1.0 and 1.1 had so many problems, and the industry invented all sorts of optimizations, those methods all worked around the protocol rather than fixing it, treating the symptoms instead of the disease. It was not until Google proposed SPDY, like a thunderclap, that people began to look squarely at and fix the old HTTP protocol itself, which directly accelerated the birth of HTTP/2.0; indeed, HTTP/2.0 was discussed and standardized with SPDY as its prototype. To make way for HTTP/2.0, Google decided in 2016 to stop developing SPDY, but before HTTP/2.0 was born SPDY had already seen fairly wide deployment, and as a transitional scheme it is likely to persist for some time. Many app clients and servers use SPDY to improve the experience today, and HTTP/2.0 cannot be used on older devices and systems (iOS supports it only on iOS 9+), so for the next few years SPDY and HTTP/2.0 will serve side by side.

2.1 The goals of SPDY

From the outset SPDY aimed at the pain points of http1.x, namely latency and security. Latency we have discussed at length above; as for security, HTTP is a plaintext protocol whose security has long been criticized by the industry, but that is another big topic. If the goal is to reduce latency, then both the application-layer HTTP and the transport-layer TCP have room for adjustment. But TCP, as the lower-level protocol, has existed for decades and is by now deeply entrenched in the global network infrastructure; changing it would cut to the bone, and the industry's response would inevitably be poor, so SPDY's scalpel was aimed at HTTP:

Reduce latency. The client's single-request-per-connection model and the server's FIFO response queue are the biggest contributors to delay.

Allow server push. HTTP was designed so that the client initiates a request and the server responds; the server cannot actively push content to the client.

Compress HTTP headers. http1.x headers grow ever more bloated: cookies and the User-Agent alone can easily push a header past 1 KB. And because HTTP is stateless, the headers must be repeated on every request, wasting traffic.

To maximize the chance of industry adoption, the clever Google avoided touching the transport layer from the start and intended to use the power of the open-source community to drive proliferation. For protocol users, it suffices to set an appropriate User-Agent in the request header and add support on the server side, which greatly reduces deployment difficulty. SPDY's design looks like this:

SPDY sits below HTTP and above TCP and SSL, which makes it easy to stay compatible with older HTTP versions (the content of http1.x is encapsulated into a new frame format) while reusing existing SSL functionality. SPDY's features divide into basic and advanced: basic features are enabled by default, while advanced features must be enabled manually.

SPDY basic features

Multiplexing. Multiplexing shares one TCP connection among multiple request streams, solving http1.x's head-of-line blocking problem while reducing latency and improving bandwidth utilization.

Request prioritization. Multiplexing brings a new problem: on a shared connection, critical requests can get blocked. SPDY lets a priority be set on each request so that important requests get responses first. For example, when the browser loads a home page, the page's HTML content should be displayed first, with static resources and script files loaded afterwards, so the user sees the page content as soon as possible.

Header compression. As mentioned several times, http1.x headers are often redundant; choosing a suitable compression algorithm reduces both packet size and packet count. SPDY's header compression ratio can exceed 80%, which helps a great deal in low-bandwidth environments.
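To get a rough feel for why header compression pays off, here is zlib (the algorithm family SPDY originally used for headers) applied to a made-up but representative http1.x header block; the cookie and User-Agent values are invented for illustration:

```python
import zlib

# A representative http1.x header block: bulky cookie and user agent,
# resent on every request. All values here are made up for illustration.
headers = (b"GET /index.html HTTP/1.1\r\n"
           b"Host: www.example.com\r\n"
           b"User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
           b"AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0 Safari/537.36\r\n"
           b"Accept: text/html,application/xhtml+xml,application/xml;q=0.9\r\n"
           b"Accept-Encoding: gzip, deflate\r\n"
           b"Cookie: sessionid=a3f8c2d1e4b5; tracking=xyz123; theme=dark\r\n\r\n")

compressed = zlib.compress(headers, 9)
print(f"{len(headers)} -> {len(compressed)} bytes")
```

Across many requests on one connection, SPDY's stateful compressor does even better, since repeated header names and values are only encoded once.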

SPDY advanced features

Server push. In http1.x only the client can initiate a request, with the server passively sending the response. With server push enabled, the server announces pushed content to the client through the X-Associated-Content header (headers beginning with X- are non-standard, custom headers). When a user first opens a site's home page, the server proactively pushes the needed resources, greatly improving the experience.

Server hint. Unlike server push, a server hint does not push the content itself; it merely announces that new content exists, and downloading it still requires the client to issue a request. Hints are delivered through the X-Subresources header. The typical scenario is a client that would otherwise query server state before downloading resources; the hint saves that query round trip.

2.2 The results of SPDY

SPDY's results can be illustrated with an official Google figure: page load time dropped by as much as 64% compared to http1.x. Within a year or so of SPDY's birth the major browser vendors had added support, and many vendors applied SPDY in their production apps and server-side frameworks.

Google's site also publishes test data of their own. The test objects were 25 top-ranked websites, with 1% packet loss typical of home networks, each site tested 10 times and averaged. The results were as follows:

Without SSL the improvement is 27%–60%; with SSL enabled it is 39%–55%. Two points in these results deserve special attention:

Selection of the number of connections

Should connections be established per domain name, or should all subdomains share a single connection? The strategy choice is debatable. Google's results cover both scenarios, and a single connection appears to outperform multiple per-domain connections. This happens because a page's resource requests are not all issued simultaneously: later requests to a subdomain naturally perform better when they can reuse the earlier TCP connection. In real application scenarios, the single shared connection should likewise perform well.

The impact of bandwidth

The tests cover two bandwidth environments, one fast and one slow. In the faster environment the latency improvement is larger, with single-connection mode improving up to 60%. The reason is simple: the greater the bandwidth, the faster requests over the multiplexed connection complete, and the delay that new connections would lose to the three-way handshake and slow start becomes proportionally more apparent.

Beyond connection mode and bandwidth, packet-loss rate and RTT also need testing. SPDY compresses headers by more than 80%, shrinking overall packet size by about 40%; the fewer packets sent, the less packet loss hurts, so under harsh loss rates SPDY actually improves the experience even more. The figure below shows the effect of loss rate on the results; beyond a 2.5% loss rate there is no longer any improvement:

The larger the RTT, the larger the latency, and in high-RTT scenarios SPDY's concurrent requests make more efficient use of each round trip, so overall latency drops significantly. The test results are as follows:

From its debut to the end of maintenance in 2016, SPDY's life span was actually very short for a network protocol. Had HTTP/2.0 not arrived, Google could have collected more real-world feedback and data from the industry's products, since Google's own test environments were relatively simple. But SPDY completed its mission, and as the trailblazer Google should have foreseen this outcome. How much SPDY actually improved each product's network experience, I am afraid only the product managers at the major vendors know.

3. The Savior: HTTP/2.0

SPDY's birth and performance proved two things: first, under the existing Internet infrastructure and the existing wide use of HTTP, it is feasible to optimize http1.x by modifying the protocol layer; second, the modifications really work and industry feedback is positive. These two points led the IETF (Internet Engineering Task Force) to begin formally planning HTTP/2.0, ultimately deciding to draft it with SPDY/3 as the blueprint and inviting some of SPDY's designers to participate in the design.

3.1 Issues HTTP/2.0 had to consider

HTTP/2.0 and SPDY started from different places. SPDY could be called Google's "toy": it appeared first in Google's own Chrome browser and servers, and if it turned out badly and nobody joined the game, it cost Google nothing. But HTTP/2.0 was the focus of attention as an industry standard even before it was born; any flaw or incompatibility baked in at the start could have repercussions for decades, so the questions considered, and the angles considered from, were very broad. Here are some important design premises of HTTP/2.0:

    1. The basic model of the client sending a request to the server does not change.
    2. The old schemes do not change: services and applications using http:// and https:// need no modification, and there will be no http2://.
    3. Clients and servers using http1.x can move to HTTP/2.0 seamlessly via proxies.
    4. Proxy servers that do not recognize HTTP/2.0 can downgrade requests to http1.x.

Before establishing a connection, client and server must confirm whether to speak http1.x or HTTP/2.0, so a negotiation has to take place. The simplest negotiation still needs one question and one answer, the client asking and the server answering, and even that costs one RTT. Our whole reason for revising http1.x is to reduce latency, so this RTT is clearly unacceptable. Google ran into the same problem with SPDY; its approach was to force SPDY over HTTPS and complete the negotiation at the SSL layer, the SSL handshake being the most suitable carrier for negotiation before HTTP traffic starts. Google developed a TLS extension named NPN (Next Protocol Negotiation), which, as the name suggests, exists to negotiate which protocol to use next. HTTP/2.0 takes the same route, although after intense discussion it was ultimately decided not to force HTTP/2.0 onto the SSL layer; even so, most browser vendors (except IE) implement the 2.0 protocol only over HTTPS. HTTP/2.0 does not use NPN but a different TLS extension, ALPN (Application-Layer Protocol Negotiation), and SPDY also intended to migrate from NPN to ALPN.
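What ALPN looks like from Python's `ssl` module, plus a tiny model of the server-side selection rule (the server's preference order wins among mutually supported protocols). The context here is only configured; no connection is made:

```python
import ssl

# Client side: offer h2 first, fall back to http/1.1. Requires an OpenSSL
# build with ALPN support; nothing goes on the wire until the context is used.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

def alpn_select(server_prefs, client_offer):
    """Model of the server-side choice: pick the server's most preferred
    protocol that the client also offered, else fail the negotiation."""
    for proto in server_prefs:
        if proto in client_offer:
            return proto
    return None

assert alpn_select(["h2", "http/1.1"], ["http/1.1", "h2"]) == "h2"
assert alpn_select(["h2"], ["http/1.1"]) is None  # no common protocol
```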

Another reason the browsers (except IE) implement HTTP/2.0 only over SSL is that SSL requests succeed more often: encrypted packets cannot be inspected and modified in transit, so middleboxes that understand only http1.x cannot interfere with or rewrite the request. If HTTP/2.0 requests were accidentally modified along the way, their success rate would naturally drop.

The HTTP/2.0 protocol does not mandate SSL because of the many opposing voices: after all, HTTPS is not free compared with HTTP in performance terms, and it takes a great deal of effort to optimize HTTPS to the point where it barely adds latency. The IETF compromised on this dilemma, but most browser vendors did not buy it: they recognize only HTTP/2.0 over HTTPS. App developers can insist on HTTP/2.0 without SSL, but they bear an extra RTT of negotiation delay and the risk of requests being tampered with.

3.2 Major changes in HTTP/2.0

As a new protocol version, HTTP/2.0 changes a great many details, but for application developers and service providers only a handful of changes matter most.

New binary format (Binary format)

http1.x was born as a plaintext protocol, and its format has three parts: the start line (request line or status line), the headers, and the body. Identifying these three parts is what protocol parsing means, and in http1.x that parsing is text-based. Format parsing based on text has natural defects: text comes in many shapes, so robustness demands handling countless scenarios. Binary is different: only combinations of 0 and 1 are recognized. On this basis, HTTP/2.0's protocol parsing adopts a binary format, which is both convenient and robust.

Some may feel that text-based HTTP is much easier to debug, with tools like Firebug, Chrome DevTools and Charles able to inspect and modify requests on the fly. But many requests today are HTTPS anyway, and debugging HTTPS requires the private key; since most HTTP/2.0 traffic should run over HTTPS, debugging convenience cannot count as a decisive factor. Tools like curl, tcpdump and Wireshark are better suited to debugging HTTP/2.0.

HTTP/2.0 defines frames in a binary format; the contrast with the http1.x format is shown in the following figure:

HTTP/2.0's format definition is much closer to the TCP layer's, and the mechanism is efficient and lean. Length marks the span from the beginning to the end of the frame; type defines the frame type (there are 10 in all); flags carries important parameters in individual bits; the stream ID is used for flow identification and control; and the remaining payload is the body of the request.
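The fixed 9-byte frame header just described can be packed and unpacked in a few lines; a sketch matching the layout of 24-bit length, 8-bit type, 8-bit flags, and a reserved bit plus a 31-bit stream ID:

```python
import struct

def pack_frame_header(length, ftype, flags, stream_id):
    """Pack the fixed 9-byte HTTP/2 frame header: 24-bit payload length,
    8-bit type, 8-bit flags, 1 reserved bit + 31-bit stream identifier."""
    return (struct.pack(">I", length)[1:]                 # low 3 bytes only
            + struct.pack(">BBI", ftype, flags, stream_id & 0x7FFFFFFF))

def unpack_frame_header(data):
    """Inverse of pack_frame_header, masking off the reserved bit."""
    length = int.from_bytes(data[0:3], "big")
    ftype, flags, sid = struct.unpack(">BBI", data[3:9])
    return length, ftype, flags, sid & 0x7FFFFFFF

# DATA frame (type 0x0) with the END_STREAM flag (0x1) on stream 3:
hdr = pack_frame_header(length=16384, ftype=0x0, flags=0x1, stream_id=3)
assert len(hdr) == 9
assert unpack_frame_header(hdr) == (16384, 0x0, 0x1, 3)
```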

Although the protocol format looks completely different from http1.x, HTTP/2.0 does not actually change http1.x semantics; it simply wraps the original http1.x headers and body in an extra layer of frames. When debugging, the browser will even automatically restore HTTP/2.0 frames to http1.x format. The relationship between the protocols is shown in the following figure:

Connection Sharing

The top problem HTTP/2.0 has to solve is multiplexing, that is, connection sharing. The stream ID mentioned in the frame format above is the connection-sharing mechanism: each request corresponds to one stream and is assigned an ID, so one connection can carry many streams, each stream's frames can be mixed together at random, and the receiver reassembles the frames into their respective requests by stream ID.

As noted earlier, once the connection is shared, mechanisms for priority and request dependency are needed to keep critical requests from being blocked. Every stream in HTTP/2.0 can carry a priority and a dependency: higher-priority streams are processed and returned first by the server, and a stream can depend on other substreams. Both can be adjusted dynamically, which is useful in scenarios like this: while browsing products in an app, the user quickly scrolls to the bottom of the product list; the earlier image requests have already been sent, so unless the priority of the later requests is raised, the images the user is currently looking at would finish downloading last, which is clearly a worse experience than with priorities set. The same dynamic adjustment can work wonders in other scenarios too.

Header compression

As mentioned earlier, http1.x headers bloat easily with cookies and the User-Agent, and they are retransmitted in full every time. HTTP/2.0 uses an encoder to shrink the headers that need transmitting: the two communicating sides each cache a header-fields table, which both avoids retransmitting duplicate headers and reduces the size of what must be sent. An efficient compression algorithm can shrink headers dramatically, lowering packet count and therefore latency.

A small piece of background knowledge here. Everyone knows TCP has slow start: after the three-way handshake, TCP segments begin to flow, and how many can be sent before the first ACK comes back is determined by the initial congestion window. This initial window varies by platform implementation but is typically 2 segments or about 4 KB (a segment is roughly 1500 bytes), which means that sending more data than this value requires waiting for earlier packets to be ACKed first, clearly adding latency. The initial window is not a knob to simply crank up: too large, and intermediate network nodes congest and loss rates rise; see the IETF's article on this for details. Today's HTTP headers have bloated to the point where they can exceed the initial window, which makes compressing them all the more important.

The choice of compression algorithm

SPDY/2 used the gzip compression algorithm, but the BREACH and CRIME attacks later showed that compressed content could be recovered even under SSL, so after weighing the options a purpose-built algorithm called HPACK was adopted. Follow the links for more detail on the two vulnerabilities and the algorithm; note that the attacks mainly target the browser side, since they require injecting content via JavaScript and observing changes in payload size.
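HPACK's basic primitive is a prefix integer encoding (RFC 7541, section 5.1), used for table indices and string lengths. A minimal sketch, matching the worked example in the RFC's appendix:

```python
def hpack_encode_int(value, prefix_bits):
    """Encode an integer with an N-bit prefix (RFC 7541, section 5.1)."""
    max_prefix = (1 << prefix_bits) - 1
    if value < max_prefix:
        return bytes([value])           # fits entirely in the prefix
    out = [max_prefix]                  # prefix filled with all ones
    value -= max_prefix
    while value >= 128:
        out.append((value % 128) | 0x80)  # continuation bit set
        value //= 128
    out.append(value)
    return bytes(out)

hpack_encode_int(10, 5)     # small value: a single byte
hpack_encode_int(1337, 5)   # the multi-byte example from RFC 7541 C.1.2
```

Unlike gzip, this scheme (plus the static/dynamic header tables and Huffman string coding built on it) leaks nothing about how similar two header values are, which is what defeats the CRIME-style guessing attacks.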

Resetting a request is cheaper

Many app clients can cancel an in-flight image download. In http1.x this is done by setting the reset flag in a TCP segment to tear down the connection, which means a new connection must be established before the next request. http2.0 introduces the RST_STREAM frame type, which cancels the stream for a single request while keeping the connection open, a clearly better approach.

Server Push

Server push was already mentioned earlier: http2.0 lets the server proactively push content the client will need, which is why it is also called "cache push". Note too that if the client leaves a given screen and wants to cancel server push, whether to save traffic or for other reasons, it can do so by sending a RST_STREAM frame.

Flow control

The TCP protocol does flow control with a sliding-window algorithm: the sender has a send window and the receiver a receive window. http2.0's flow control works like the receive window: the receiver of the data advertises how much it is willing to accept by telling the peer its flow-control window size, and only DATA frames are subject to flow control. If a sender still has frames queued while the flow window is zero, it can return a BLOCKED frame, which generally indicates a problem in the http2.0 deployment.
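The window bookkeeping can be sketched in a few lines. The 65,535-byte default comes from the HTTP/2 SETTINGS defaults; the class itself is an illustrative simplification, not a real implementation:

```python
class FlowWindow:
    """Toy model of HTTP/2 receive-window accounting: DATA frames
    consume the window, WINDOW_UPDATE frames replenish it."""

    def __init__(self, initial=65535):   # default initial window size
        self.window = initial

    def consume(self, data_len):
        """Called when a DATA frame of data_len bytes arrives."""
        if data_len > self.window:
            raise RuntimeError("FLOW_CONTROL_ERROR: peer exceeded window")
        self.window -= data_len

    def window_update(self, increment):
        """Called when the receiver grants more credit."""
        self.window += increment

w = FlowWindow()
w.consume(16384)         # one default-max-size DATA frame
w.window_update(16384)   # receiver hands the credit back
```

A sender that runs the window down to zero must simply stop sending DATA frames until a WINDOW_UPDATE arrives, which is how a slow consumer throttles a fast producer per stream.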

Nagle's algorithm vs. TCP delayed ACK

A classic pair of TCP optimizations work against each other: Nagle's algorithm and Berkeley's delayed ACK. http2.0 changes nothing at the TCP layer, so the extra latency caused by this interaction persists. The fix is either to disable Nagle via TCP_NODELAY, or to disable delayed ACK via TCP_QUICKACK. The http2.0 recommendation appears to be to set TCP_NODELAY.
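Disabling Nagle is a one-line socket option. A minimal sketch of the client side (the connect call is commented out so the snippet needs no network):

```python
import socket

# Disable Nagle's algorithm so small frames (HEADERS, WINDOW_UPDATE,
# RST_STREAM, ...) go out immediately instead of waiting to coalesce.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
# sock.connect(("example.com", 443))  # then connect and speak TLS/h2 as usual
```

TCP_QUICKACK, by contrast, is Linux-specific and resets itself after use, which is part of why TCP_NODELAY is the more common of the two fixes.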

More secure SSL

http2.0 uses the TLS ALPN extension to do the protocol upgrade, and the encryption side has changed as well: http2.0 takes TLS security a step further with a blacklist mechanism that bans hundreds of cipher suites no longer considered secure, though some remain usable. If the client's and server's cipher suites have no intersection during SSL negotiation, the handshake fails outright and the request fails with it. Pay particular attention to this when deploying http2.0 on the server side.
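As an illustration of the client side of this negotiation, here is a sketch using Python's ssl module: offer h2 via ALPN with an http/1.1 fallback, and narrow the cipher list to a modern suite (the "ECDHE+AESGCM" string and example.com are illustrative choices, not requirements):

```python
import ssl

# Offer h2 via ALPN, falling back to HTTP/1.1 if the server declines.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])
# Restrict cipher suites; if the server shares none of these, the
# handshake fails outright, just as described above.
ctx.set_ciphers("ECDHE+AESGCM")

# After wrapping a connected socket, the result of negotiation is visible:
# ssock = ctx.wrap_socket(sock, server_hostname="example.com")
# ssock.selected_alpn_protocol()  # "h2" if the server agreed
```

Server-side deployments do the mirror image: advertise h2 in ALPN and make sure the configured cipher list still intersects with mainstream clients after applying the http2.0 blacklist.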

3.2 The downsides of http2.0

The close relationship between SPDY and http2.0, and Google's role as SPDY's creator, make it easy for conspiracy theorists to ask whether Google will be the protocol's ultimate beneficiary. Of course Google benefits, but so does every user of any new protocol; who gets the meat and who gets the soup comes down to each player's own skill. The protocol's history also shows, roughly, that http2.0 was born entirely as a remedy for real problems in the industry, with no trace of Google's commercial interests; from start to finish Google's role was simply to show that it could be done.

http2.0 is no panacea, but using it has no side effects. Its biggest highlight is multiplexing, and the benefits of multiplexing are obvious only when many HTTP requests are in flight, so some people conclude it only suits browsers visiting large sites. That is true as far as it goes, but multiplexing is not the only win: header compression, priority control, and server push are all highlights too. For content-heavy mobile apps with many HTTP requests, such as the Taobao app, multiplexing alone can produce a noticeable improvement in experience. For multiplexing's effect on latency, see the linked test URL.

http2.0's reliance on SSL puts some developers off. Many still think of SSL as high-latency, CPU-hungry, and cumbersome to configure. In practice, SSL's overhead in combination with HTTP can be optimized down to a negligible level, and there are many articles online to consult. http2.0 can also run without SSL, and some scenarios may genuinely be unsuited to HTTPS, for example when caching depends on proxy servers: GET requests whose content is not security-sensitive can still be optimized through proxy-server caches.

3.3 The current state of http2.0

As a new protocol version, http2.0 will certainly take time to spread, but HTTP is an application-layer protocol, unlike the network-layer IPv6 of years past: the further a protocol sits from the bottom of the stack, the less it affects network infrastructure hardware. http2.0 was even deliberately designed for compatibility with http1.x, adding only a framing layer beneath the http1.x semantics, which further lowers the resistance to adoption. So it would not be surprising if http2.0 spreads far faster than most people expect.

In 2015 Firefox measured its browser traffic and found 13% of HTTP traffic already on http2.0, and 27% of HTTPS traffic on http2.0, both in constant growth. Ordinary users will not notice when http2.0 is in use, but they can install a plugin that shows a lightning icon at the far right of the address bar when a site uses http2.0, or test a site directly online. Developers can inspect the protocol details in the web developer Network panel, as shown in the following illustration:

Version: HTTP/2.0 clearly indicates the protocol type, and Firefox also inserts x-firefox-spdy: "h2" in the headers, another way to see whether http2.0 is in use.

Chrome measured about 18% http2.0 traffic in 2015, and that number would have been higher were Chrome not also experimenting with QUIC (another piece of territory Google is opening up). Chrome too has similar plugins for checking whether a site uses http2.0.

4. The state of HTTP on mobile

4.1 The state of HTTP on iOS

On iOS, NSURLSession supports SPDY starting with iOS 8, and iOS 9+ supports http2.0 automatically. Apple is clearly confident in http2.0 and is pushing it hard: the new ATS mechanism defaults to HTTPS for network transport, and APNs (Apple Push Notification service) moved to http2.0 in iOS 9. NSURLSession in the iOS 9 SDK defaults to http2.0, completely transparently to developers; there is not even an API to find out which HTTP protocol version was used.

How should developers configure the best HTTP setup? In my view, because apps vary, it comes down to two things: how large and dense the app's own HTTP traffic is, and the development team's own technical circumstances. Deploying http2.0 is relatively easy; client developers need not change anything beyond compiling with the iOS 9 SDK, though the downside is that http2.0 then only applies to iOS 9 devices. Deploying SPDY is more cumbersome, but has the advantage of covering iOS 6+ devices. On iOS, SPDY can use the CocoaSPDY library developed by Twitter, but one thing needs special handling:

Because Apple's TLS implementation does not support NPN, SPDY cannot be negotiated via NPN over the default port 443. There are two workarounds: the client and server agree on a different port for NPN negotiation, or the server inspects the request headers to intelligently determine whether the client supports SPDY, skipping the NPN negotiation process entirely. The first method is simpler, but you need to map all HTTP requests to the other port at the framework layer; for URL mapping, see my earlier article. Twitter's own site twitter.com uses the second method.

Browsers (such as Chrome) and servers (such as Nginx) are moving to drop SPDY support, now that Google has officially announced the end of its maintenance. SPDY will be a transitional solution that fades away as iOS 9 spreads, so each development team must weigh whether that part of the technical investment is worth it.

4.2 The state of HTTP on Android

Android is similar to iOS: http2.0 is only supported on newer systems, so SPDY is still needed as a transitional solution.

For apps that use WebView, SPDY and http2.0 support requires the Chrome-kernel-based WebView, which Android switched to in Android 4.4 (KitKat).

For HTTP requests made through native APIs, OkHttp is a viable solution that supports both SPDY and http2.0. With ALPN, OkHttp requires Android 5.0+ (Android 4.4 actually has an ALPN implementation, but it is buggy and was not properly fixed until 5.0); with NPN it works from Android 4.0+, but NPN is itself part of the protocol slated for removal.

Conclusion

Those are the major changes in HTTP from 1.x to SPDY and on to http2.0. http2.0 is gradually being applied to live products and services, and we can expect plenty of new pitfalls and corresponding optimization techniques, while http1.x and SPDY continue to give off residual heat for some time yet. As engineers, we need to understand the technical details behind these protocols in order to build high-performance network frameworks and improve our products' experience.
