Web performance in the HTTP/2 era


In recent years, Web performance has become a hot topic, and everyone is beginning to realize its importance in the design process. Today, with the adoption of the new HTTP/2 protocol, the web has entered the HTTP/2 era.

The Web performance optimization techniques we have long been familiar with may soon be history. This talk walks through current best practices, what will change in the HTTP/2 era, and how to make the transition from HTTP/1.x to HTTP/2 smoothly.

This article accompanies a slide deck, so the narrative may be terse or incomplete in places. Talk: Web performance in the HTTP/2 era

Slides: http://slides.com/wujiarong/deck#/

0 Waiting and latency

In life, we often find ourselves waiting.

When we open a browser, the situation is not much better.

Loading ...

No one wants to wait, and waiting usually feels long.

You have probably heard a friend say "wait just a moment for me", then waited more than 10 minutes.
You have probably heard a girlfriend say "almost ready", then waited nearly an hour.

No one likes to wait.

What is waiting?

Deferring an action until a certain point in time arrives or a certain event occurs.

1 Web page performance

Definition, from Wikipedia:

Web performance refers to the speed at which Web pages are downloaded and displayed in the user's web browser.

Thus, Web performance divides into two parts: the speed at which a page loads and the speed at which it renders.

Loading speed is usually measured as time, i.e. page load time (PLT).

In general, when we talk about Web page performance we actually mean page load time, and as the name implies, the shorter this time the better.

With the rapid development of the Internet and web technology, we present more and more to users: from plain text at the beginning, to a few images, to many images, even video.

To enhance the user experience, content is rendered in ever friendlier forms. We have started adding lots of CSS polish, plus JS and CSS3 animations for dynamic effects.

"Fast", from the user's point of view, means a Web page that loads within 2-3 seconds.

50% of users will close the browser tab if a page takes more than 4s to load.

Understandably, these dynamic effects and rendering refinements greatly enhance the user experience. But the price is that Web pages take longer and longer to load.

Research shows that 1s of extra web latency results in:

    • Up to 11% fewer page views
    • 16% lower user satisfaction
    • 7% fewer conversions

Let's look at a user experience table.

From: Chapter 10, "Primer on Web Performance", High Performance Browser Networking

The unofficial consensus of the Web performance community: within 250ms, the page should finish loading, or at least give as much visual feedback as possible, to retain the user!

Summary: the faster the better!

2 Web performance impact cases

Amazon: every 100ms increase in page load time reduced sales by 1% (reference: Amazon)

Walmart: every 1s of load-time improvement increased conversions by 2%.

Akamai Research finds that:

    • 47% of users expect a Web page to finish loading within 2s
    • 40% of users will close the page if it takes longer than 3 seconds to load
    • 52% of online shoppers prefer to shop on websites that load faster

3 The global bandwidth situation

Two data points:

Global average bandwidth (download): 21.3 Mbps (21.3/8 ≈ 2.7 MB/s)

Global average mobile download speed: 10.9 Mbps (10.9/8 ≈ 1.4 MB/s)

From: netindex

[Maps of the global download bandwidth situation: Europe, North America, South America, Africa]

Bandwidth and round-trip time

RTT (round-trip time): in computer networking, an important performance metric. It is the total delay from the moment the sender starts sending data until the sender receives an acknowledgement from the receiver (the receiver sends the acknowledgement immediately upon receiving the data).

Larger bandwidth does not mean faster page loading! See: "More Bandwidth Doesn't Matter (Much)"

Relative to bandwidth, RTT has the greater impact on Web page performance.

Conclusion:
Increasing bandwidth will not significantly speed up page loading. Instead, reduce the number of round trips, or reduce the RTT itself.
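This conclusion can be illustrated with a toy load-time model (a simplifying assumption, not a measurement): total time ≈ round trips × RTT + page size / bandwidth. With the illustrative numbers below, doubling bandwidth shaves little off the total, while halving RTT cuts it sharply:

```javascript
// Toy model (assumption): load time = round trips × RTT + page size / bandwidth
function loadTime(roundTrips, rttSeconds, pageBytes, bandwidthBitsPerSec) {
  return roundTrips * rttSeconds + pageBytes / (bandwidthBitsPerSec / 8);
}

// Illustrative inputs: 30 round trips, 100ms RTT, a 2MB page, 21.3Mbps link
const base = loadTime(30, 0.1, 2e6, 21.3e6);
const moreBandwidth = loadTime(30, 0.1, 2e6, 42.6e6); // double the bandwidth
const lessRtt = loadTime(30, 0.05, 2e6, 21.3e6);      // halve the RTT

console.log(base.toFixed(2));          // "3.75" seconds
console.log(moreBandwidth.toFixed(2)); // "3.38" — little gain
console.log(lessRtt.toFixed(2));       // "2.25" — big gain
```

The round-trip term dominates, which is why reducing requests (and RTTs) pays off more than a faster link.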

4 Improving Web performance

4.0 How does HTTP work?

The Hypertext Transfer Protocol (HTTP) is one of the most widely used network protocols on the Internet. All WWW documents must comply with this standard. HTTP was originally designed as a way to publish and receive HTML pages.

In 1960, the American Ted Nelson conceived of processing text by computer and called the idea hypertext, which became the conceptual foundation of the HTTP standard architecture.

The World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF) later coordinated the standardization work and eventually published a series of RFCs, among them the famous RFC 2616, which defines HTTP/1.1.

Short video: basic concepts of Web applications, how they work, and the HTTP protocol

Different types of HTTP requests:
    • GET: used to get both the response headers and the response body
    • HEAD: used to get the response headers only (not the response body returned by a GET request)
    • POST: used to submit data (e.g. from HTML forms)
    • PUT: used to upload a resource
    • PATCH: used to modify a resource
    • DELETE: used to delete a resource
    • TRACE: simply echoes back the request sent by the client; can be used to test a server and check whether it is alive
Using Telnet to issue HTTP requests by hand (HTTP/1.1 requires a Host header, and each request ends with a blank line):

    telnet www.baidu.com 80
    GET / HTTP/1.1
    Host: www.baidu.com

    TRACE / HTTP/1.1
    Host: www.baidu.com
4.1 HTTP/0.9 (1991)

HTTP/0.9, the first version of the HTTP protocol, was very limited. A request was a single line, such as:

GET www.cnblogs.com

From such a simple request it is clear that there was no POST method and no HTTP headers; a client of that era could receive only one content type, plain text. And if the requested information did not exist, there were no 404 or 500 errors.

Although HTTP/0.9 looks weak, it met the needs of its era.

4.2 HTTP/1.0 (1996)

As World Wide Web demand exploded, HTTP/0.9 became deeply inadequate, and HTTP/1.0 emerged with many extended capabilities.

The changes in HTTP/1.0 include:

    1. Introduction of the POST method
    2. Introduction of HTTP headers and status codes
    3. HTTP content no longer limited to text; it can be images, video, documents, etc.
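These additions give an HTTP/1.0 message its familiar shape: a status line, header lines, a blank line, then the body. A small sketch (the parser and the raw message are illustrative, not a production implementation):

```javascript
// Parse a raw HTTP/1.0-style response to show the structure the protocol
// introduced: status line, headers, blank line, body.
function parseResponse(raw) {
  const [head, body] = raw.split('\r\n\r\n');
  const [statusLine, ...headerLines] = head.split('\r\n');
  const [version, status] = statusLine.split(' ');
  const headers = {};
  for (const line of headerLines) {
    const i = line.indexOf(': ');
    headers[line.slice(0, i).toLowerCase()] = line.slice(i + 2);
  }
  return { version, status: Number(status), headers, body };
}

// Illustrative message, not captured traffic.
const raw = 'HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<html>hi</html>';
const res = parseResponse(raw);
console.log(res.status, res.headers['content-type']); // 200 text/html
```

None of this structure existed in HTTP/0.9, which returned a bare body with no status line or headers at all.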
4.3 HTTP/1.1 (1999)

The changes in HTTP/1.1 relative to 1.0 are not large.

The main ones:
1. Added the Host header field.
2. Added the Range header field, allowing the client to download only part of a resource, which makes multi-threaded (segmented) downloading possible.
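The Range mechanism can be sketched with a hypothetical parser for the common bytes=start-end form (simplified: it handles a single range only, not comma-separated lists):

```javascript
// Hypothetical helper: parse a single "bytes=start-end" Range header value
// against a resource of `size` bytes. Supports "bytes=0-499", "bytes=500-"
// (from an offset to the end) and "bytes=-500" (the last 500 bytes).
function parseRange(header, size) {
  const m = /^bytes=(\d*)-(\d*)$/.exec(header);
  if (!m || (m[1] === '' && m[2] === '')) return null;
  if (m[1] === '') return { start: size - Number(m[2]), end: size - 1 }; // suffix range
  const start = Number(m[1]);
  const end = m[2] === '' ? size - 1 : Number(m[2]);
  return { start, end };
}

console.log(parseRange('bytes=0-499', 1000)); // { start: 0, end: 499 }
console.log(parseRange('bytes=-500', 1000)); // { start: 500, end: 999 }
```

A download manager splits a file into such ranges and fetches them on parallel connections, which is exactly what makes multi-threaded downloading possible.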

This is not a specific, in-depth introduction to HTTP/1.1 and earlier versions; if you are interested or skeptical, please research them yourself.

From the release of 1.1 to SPDY, 11 years passed. In all that time, HTTP saw no updates.

We know that HTTP/1.1 has many problems that hurt Web performance.

For example:

    1. Large HTTP headers, which consume extra network traffic.
    2. Plaintext transmission, which is not secure.
    3. Non-persistent connections: each request requires a new TCP connection, which is time-consuming.
    4. Persistent connections: a single TCP connection can carry multiple requests, but the requests are sequential; a later request must wait for the previous response before being sent. This easily blocks subsequent requests and also costs time.

These problems are enough to cause many security issues and poor Web page performance, that is, long page load times.

To address them, communities and companies focused on Web performance have produced many techniques for improving Web performance.

For example:
Yahoo's 14 Web page performance optimization rules. Reference: 14 rules for Yahoo! Web optimization

    1. Reduce the number of HTTP requests
    2. Use a CDN (Content Delivery Network)
    3. Add an Expires header (Web caching)
    4. Compress page elements
    5. Put style sheets at the top
    6. Put script files at the bottom
    7. Avoid CSS expressions
    8. Put JavaScript and CSS in external files
    9. Reduce the number of DNS lookups
    10. Minify JavaScript code
    11. Avoid redirects
    12. Remove duplicate script files
    13. Configure ETags
    14. Cache Ajax

Let's analyze which factors affect Web page performance, i.e. which lengthen page load time.

A Web page goes from the browser's request, to the server's response returning data or resources, to the browser receiving everything. At the macro level, three parties are involved:

    • Browser: the number of requests affects PLT
    • Routing network: bandwidth and RTT affect PLT
    • Server: server response time, database response time, etc. affect PLT

We will not discuss server response time here, because it is controllable and generally the job of a back-end engineer.

Let's look at how the browser and the routing network affect PLT.

On the browser side, the number of requests depends on the structure and content of the page. If there are too many pictures, small icons, CSS files, JS files and so on, the request count is naturally high, which greatly increases PLT.

So, from the browser's perspective, we should minimize the number of requests, increase the number of concurrent connections, and avoid blocking loads as much as possible.

This yields practices 1, 3, 5, 6, 8, 12, 13 and 14 above.

From the perspective of the routing network, the impact on PLT comes from the two metrics analyzed above: bandwidth and RTT.

Therefore, for the transfer itself, we can reduce PLT in three ways: increase the bandwidth, reduce the amount of data transferred, and reduce the number of round trips.

This yields practices 2, 4, 9, 10 and 11.

Summing up, the techniques involved are:

    1. File concatenation and compression
    2. CSS sprites
    3. Inline images
    4. Domain sharding
To solve these problems at the root, rather than requiring front-end engineers to do endless optimization work, a great attempt appeared: SPDY.

4.4 SPDY (2010)

Let's see what SPDY did. Reference: SPDY Protocol Introduction

    1. Multiplexing
      Allows unlimited concurrent streams within one TCP connection (as long as both sides can handle it). Because requests are interleaved on a single channel, TCP reaches high efficiency: fewer network connections are needed, and data can be transmitted at very high density.
    2. Request prioritization
      Although unlimited parallel streams solve the serialization problem, they introduce another one: if the channel's bandwidth is limited, the client's requests may congest the channel. To overcome this, SPDY implements request priorities: the client may issue as many requests as it likes, each assigned a priority. Even while a high-priority request is still pending, the channel will not transmit non-critical, low-priority requests, which effectively prevents congestion.
    3. HTTP header compression
      SPDY compresses both request and response headers, so packets are smaller; for RESTful Web 2.0 or OpenAPI traffic, this yields considerable efficiency gains.
    4. Server push
      SPDY pushes data to the client via the X-Associated-Content header, which tells the client: "I am going to push these resources to you; get ready to receive them." Google+, when visited from Chrome, uses SPDY by default with server push enabled. This wholesale upgrade of the user experience helped Google+ gain a real advantage, and Facebook's development of its own browser was partly an attempt to escape current technical limitations.
    5. Server hint
      Unlike push, here the server first tells the browser "you may want to download resources A, B and C", which might be the next page's JS, CSS or content. The server does not push them; it still waits for client requests. This is a great optimization for slow networks, a bit like today's preload technique.
4.5 HTTP/2 (2015)

HTTP/2 is based on SPDY and improves on it. Reference: Analysis of new features of HTTP/2

Differences:

    1. HTTP/2 supports plaintext HTTP transmission, while SPDY mandates HTTPS
    2. HTTP/2 compresses message headers with HPACK rather than the DEFLATE used by SPDY

HTTP/2 advantages:

    1. HTTP/2 transfers data in a binary format, not HTTP/1.x's text format. Binary framing brings advantages in protocol parsing and more room for optimization and extension.
    2. HTTP/2 compresses message headers with HPACK, saving the network traffic the headers occupy. HTTP/1.x carries a large amount of redundant header information on every request, wasting bandwidth; header compression solves this well.
    3. Multiplexing: bluntly, all requests run concurrently over one TCP connection. HTTP/1.x can reuse a connection for multiple requests, but they are ordered; a later request must wait for the previous response before it can be sent, which easily blocks subsequent requests. HTTP/2 achieves truly concurrent requests.
    4. Streams also support prioritization and flow control.
    5. Server push: the server can push resources to the client proactively and therefore faster. For example, it can push JS and CSS files without waiting for the client to parse the HTML and request them; by the time the client needs them, they are already there.
5 Embracing HTTP/2

Using HTTP/2

Step one: encrypt your HTTP connection with SSL/TLS, that is, use HTTPS

Step Two: Configure a server that supports HTTP/2

Step three: Check browser compatibility

Node.js with HTTP/2

With the http2 package, you can deploy a Node.js HTTP/2 service.

Code:
