High Performance Browser Networking, Chapter 1


Translator's note: This article is a translation of "High Performance Browser Networking", produced with the Google Translator Toolkit. Much of the original formatting was lost in the process and still needs to be cleaned up.

Please refer to the original text: http://chimera.labs.oreilly.com/books/1230000000545/ch01.html

Chapter 1: Basic Concepts of Latency and Bandwidth

Speed is a feature

Over the past few years, Web performance optimization (WPO) has grown rapidly as a new industry, a clear sign that users are demanding higher speeds and faster experiences. Web performance optimization is not simply an emotional desire for a faster connected world; it is also driven by several key business requirements:

    • Faster sites lead to better user engagement
    • Faster sites lead to better user retention
    • Faster sites lead to higher conversion rates

To put it simply, speed is a feature. To deliver it, we need to understand the many factors and fundamental limitations that affect Web performance. In this chapter, we will focus on the two critical components that determine the performance of all network traffic: latency and bandwidth (Figure 1-1).

Latency

The time it takes for a packet to travel from the source to the destination

Bandwidth

Maximum throughput of a logical or physical communication path

Figure 1-1 Latency and bandwidth

To better understand how bandwidth and latency work together, this book will give you the tools to dig into the internal details and performance characteristics of TCP, UDP, and the application protocols built on top of them.

A low-latency transatlantic cable

In financial markets, latency is a critical factor for many high-frequency trading algorithms, where a few milliseconds of delay can translate into millions in losses.

At the beginning of 2011, Huawei and Hibernia Atlantic began laying a new 3,000-mile fiber-optic link ("Hibernia Express"), a high-speed transatlantic cable connecting London to New York, with the sole goal of saving 5 milliseconds of latency compared with all other existing transatlantic lines.

Once in service, the cable will be used only by financial institutions. It is expected to cost more than $400 million, or roughly $80 million per millisecond saved, which shows just how expensive latency is.

Factors that affect latency

Latency is the time it takes for a message, or a packet, to travel from its point of origin to its destination. That definition is simple, but it often hides a lot of useful information: every system is made up of multiple components, and multiple factors together make up the total time it takes to deliver a message. Understanding which components are involved, and what determines the performance of each one, is critical to understanding the latency of the system as a whole.

Let's take a closer look at the latency components common to a typical route across the Internet, and how they affect the delivery of a message from the client to the server.

Propagation delay

The time required for a message to travel from the sender to the receiver, which is a function of the distance and of how fast the signal propagates through the medium. In optical fiber or copper wire, optical and electromagnetic signals propagate at more than 200,000 km/s; note that what travels through the medium is the optical or electromagnetic signal, not the data itself!

Transmission delay

The time required to push all of the packet's bits onto the link, which is a function of the packet's length and the data rate of the link.

Processing delay

The time required to process the packet header, check for errors in the data, and determine the packet's destination.

Queuing delay

The time the packet spends waiting in a queue before it can be processed or transmitted.

The total latency between the client and the server is the sum of all of the preceding delays. The propagation time depends on the distance and on the medium through which the signal travels; in fiber, twisted pair, coaxial cable, and other media the propagation speed is fixed and does not depend on how much data is being sent! The transmission delay, by contrast, depends on the data rate of the available link, not on the distance between client and server. For example, suppose we want to transfer a 10 Mb file over two different links, one at 1 Mbps and one at 100 Mbps: it takes 10 seconds to put the entire file "on the wire" over the 1 Mbps link, but only 0.1 seconds over the 100 Mbps link.
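To make the arithmetic concrete, here is a minimal Python sketch (not part of the original text) that adds up the four delay components for a single packet and reproduces the transmission-delay comparison above; the file size, link speeds, and the fixed processing and queuing values are illustrative assumptions.

# Minimal latency sketch: sums the four delay components discussed above.
# All inputs are illustrative assumptions, not measurements.

PROPAGATION_SPEED_KM_S = 200_000  # approx. speed of light in fiber (~2/3 of c)

def one_way_latency_ms(distance_km, packet_bits, link_bps,
                       processing_ms=0.1, queuing_ms=0.0):
    """Return one-way latency in milliseconds for a single packet."""
    propagation_ms = distance_km / PROPAGATION_SPEED_KM_S * 1000
    transmission_ms = packet_bits / link_bps * 1000
    return propagation_ms + transmission_ms + processing_ms + queuing_ms

# Transmission delay alone: a 10 Mb file over 1 Mbps vs. 100 Mbps links.
file_bits = 10_000_000
for link_bps in (1_000_000, 100_000_000):
    print(f"{link_bps / 1e6:.0f} Mbps link: {file_bits / link_bps:.1f} s on the wire")

# A single 1,500-byte packet over roughly 4,000 km of fiber on a 100 Mbps link.
print(f"{one_way_latency_ms(4_000, 1_500 * 8, 100_000_000):.2f} ms one-way")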

Next, as soon as a packet arrives at a router, the router must examine the packet header to determine the route to the next hop, and may perform other checks on the data as well; all of this takes time. Much of this processing is now done in hardware, so the delays are very small, but they do exist. Finally, if packets arrive at the router faster than it can process them, they must wait in the router's packet queue; the time spent waiting there is the queuing delay.

Every packet traveling over the network incurs each of these delays. The farther the source is from the destination, the greater the propagation delay; the more intermediate routers the packet passes through, the more processing time it accumulates; and the busier the route, the longer the packet is likely to sit in the routers' buffers along the way.

Excessive buffering in routers (bufferbloat)

Bufferbloat is a term coined and popularized by Jim Gettys in 2010; it describes how excessive queuing delay in routers degrades overall network performance.

The root of the problem is that many routers now ship with large incoming buffers to avoid dropping packets, but this behavior defeats TCP's congestion-avoidance mechanisms (which we will cover in the next chapter) and introduces high and variable network latency.

The good news is that the CoDel active queue management algorithm has been proposed to address this problem and is now implemented in Linux 3.5+ kernels. To learn more, see "Controlling Queue Delay" in ACM Queue.

Speed of light and propagation delay

As Einstein outlined in his special theory of relativity, the speed of light is the maximum speed at which all energy, matter, and information can travel. This places a hard limit, which cannot be exceeded, on the propagation time of any network.

The good news is that the speed of light is high: 299,792,458 meters per second, or 186,282 miles per second. However, that is the speed of light in a vacuum. In practice, our packets travel through some kind of medium, such as copper wire or a fiber-optic cable, which slows the signal down (Table 1-1). The ratio of the speed of light in a vacuum to the speed of light in the medium is known as the refractive index of the material: the higher the index, the slower light travels through that medium.

Typical refractive index values for optical fiber vary between 1.4 and 1.6; improvements in the quality of fiber materials can lower that number slightly. To keep things simple, the rule of thumb is to assume that the speed of light in fiber is roughly 200,000,000 meters per second, which corresponds to a refractive index of about 1.5. Impressively, we are already transmitting at that speed, thanks to some great optical communications engineers.
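As a sanity check on the numbers in Table 1-1 below, the propagation speed in fiber follows directly from the refractive index, and doubling the one-way time gives the round-trip time; for example, for the New York to Sydney route:

\[
v_{\text{fiber}} = \frac{c}{n} \approx \frac{299{,}792{,}458\ \text{m/s}}{1.5} \approx 2 \times 10^{8}\ \text{m/s}
\]
\[
t_{\text{one-way}} \approx \frac{15{,}993\ \text{km}}{200{,}000\ \text{km/s}} \approx 80\ \text{ms},
\qquad
\text{RTT} \approx 2 \times 80\ \text{ms} = 160\ \text{ms}
\]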

Table 1-1: Signal latencies in vacuum and in fiber

Route | Distance | Time, light in vacuum | Time, light in fiber | Round-trip time (RTT) in fiber
New York to San Francisco | 4,148 km | 14 ms | 21 ms | 42 ms
New York to London | 5,585 km | 19 ms | 28 ms | 56 ms
New York to Sydney | 15,993 km | 53 ms | 80 ms | 160 ms
Equatorial circumference | 40,075 km | 133.7 ms | 200 ms | 200 ms

The speed of light is fast, but a round trip (RTT) from New York to Sydney still takes 160 ms. In fact, the numbers in Table 1-1 are optimistic, because they assume the packet travels over a fiber-optic cable laid along the great-circle path (the shortest distance between two points on the globe) between the cities. (Translator's note: a great circle is a circle on a sphere whose radius equals the radius of the sphere; a great-circle arc is the shortest curve connecting two points on the sphere's surface. All meridians are great circles, but among the parallels of latitude only the equator is one.) In practice no such cable exists, and packets travel between New York and Sydney over much longer routes. Each hop along such a route introduces additional routing, processing, queuing, and transmission delays. On today's real networks, the actual RTT between New York and Sydney is typically in the 200-300 ms range. All things considered, that still seems pretty fast, right?
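For the curious, here is a small Python sketch (not from the original text) that estimates the great-circle distance between New York and Sydney with the haversine formula and converts it into the fiber propagation times shown in Table 1-1; the city coordinates and Earth radius are approximations.

from math import radians, sin, cos, asin, sqrt

FIBER_SPEED_KM_S = 200_000  # rule-of-thumb speed of light in fiber

def great_circle_km(lat1, lon1, lat2, lon2, earth_radius_km=6371):
    """Haversine distance between two points given in decimal degrees."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a))

# Approximate coordinates: New York (40.71 N, 74.01 W), Sydney (33.87 S, 151.21 E)
distance_km = great_circle_km(40.71, -74.01, -33.87, 151.21)
one_way_ms = distance_km / FIBER_SPEED_KM_S * 1000
print(f"great-circle distance: {distance_km:,.0f} km")    # about 16,000 km
print(f"one-way time in fiber: {one_way_ms:.0f} ms")      # about 80 ms
print(f"round-trip time (RTT): {2 * one_way_ms:.0f} ms")  # about 160 ms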

We are not used to measuring everyday delays in milliseconds, but research shows that most of us begin to notice latency once it exceeds 100-200 ms in a system. Once the 300 ms threshold is crossed, the interaction is usually described as "sluggish", and past 1,000 ms (1 second) many users have already performed a mental context switch while waiting for the response: they may start daydreaming, or turn their attention to the next urgent task.

The conclusion is simple: to deliver the best experience and keep users focused on the task at hand, we need to keep our application response times within a few hundred milliseconds. That leaves very little room for error on the network. To succeed, network latency has to be a first-class concern, with explicit design criteria at every stage of application development.

Content delivery network (CDN) services provide many benefits, but the most important one is simple: distributing content around the world and serving it to clients from the nearest location significantly reduces the propagation time of every packet.

We may not be able to make packets travel faster, but we can strategically position the content closer to the users to shorten the distance! By using a CDN to serve our data, we can deliver significant performance gains.
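As a rough illustration (the distances here are assumptions, not figures from the original text), compare a round trip to an origin server on another continent with a round trip to a nearby CDN edge node, using the 200,000 km/s fiber rule of thumb:

\[
\text{RTT}_{\text{origin}} \approx 2 \times \frac{8{,}000\ \text{km}}{200{,}000\ \text{km/s}} = 80\ \text{ms},
\qquad
\text{RTT}_{\text{edge}} \approx 2 \times \frac{100\ \text{km}}{200{,}000\ \text{km/s}} = 1\ \text{ms}
\]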

The "last mile" delay

Ironically, it is often not crossing oceans or continents that introduces the most latency, but the last few miles: the infamous "last-mile" problem. To connect your home or office to the Internet, your local ISP has to route the cabling throughout the neighborhood, aggregate the signal, and forward it to a local routing node. In practice, depending on the connection type, routing methodology, and deployed technology, this alone can take tens of milliseconds, and that is just to reach your ISP's routing node! According to the US Federal Communications Commission's "Measuring Broadband America" report published in early 2013, during peak hours:

Fiber-to-the-home services had the best latency performance, averaging 18 milliseconds during peak periods, compared with 26 milliseconds for cable and 44 milliseconds for DSL.

— FCC, February 2013

In other words, before a packet even sets off toward its destination, it has already incurred 18-44 ms of latency just to reach the nearest node in the ISP's network. The FCC report focuses on the United States, but last-mile latency is a challenge for all ISPs, regardless of geography. If you are curious, a simple traceroute can often tell you a lot about the topology and performance of your Internet provider.

$> traceroute google.com
traceroute to google.com (74.125.224.102), 64 hops max, 52 byte packets
 1  10.1.10.1 (10.1.10.1)  7.120 ms  8.925 ms  1.199 ms
 2  96.157.100.1 (96.157.100.1)  20.894 ms  32.138 ms  28.928 ms
 3  x.santaclara.xxxx.com (68.85.191.29)  9.953 ms  11.359 ms  9.686 ms
 4  x.oakland.xxx.com (68.86.143.98)  24.013 ms  21.423 ms  19.594 ms
 5  68.86.91.205 (68.86.91.205)  16.578 ms  71.938 ms  36.496 ms
 6  x.sanjose.ca.xxx.com (68.86.85.78)  17.135 ms  17.978 ms  22.870 ms
 7  x.529bryant.xxx.com (68.86.87.142)  25.568 ms  22.865 ms  23.392 ms
 8  66.208.228.226 (66.208.228.226)  40.582 ms  16.058 ms  15.629 ms
 9  72.14.232.136 (72.14.232.136)  20.149 ms  20.210 ms  18.020 ms
10  64.233.174.109 (64.233.174.109)  63.946 ms  18.995 ms  18.150 ms
11  x.1e100.net (74.125.224.102)  18.467 ms  17.839 ms  17.958 ms

1st hop: local wireless router

11th hop: Google server

In the example above, the packet started in the city of Sunnyvale, bounced to Santa Clara, then Oakland, returned to San Jose, was routed to the "529 Bryant" data center, and from there was routed toward a Google server, reaching its destination on the 11th hop. The whole trip took roughly 18 ms on average. Not bad, all things considered, but in the same amount of time the packet could have traveled across most of the continental United States!

Last-mile latency depends on your ISP, the deployed technology, the topology of the network, and even the time of day. As an end user, if you want to improve your web browsing speed and reduce latency, choosing the ISP with the lowest local latency is one of the most effective optimizations available.

Network latency, not bandwidth, is the performance bottleneck for most websites! To understand why, we need to understand the mechanics of the TCP and HTTP protocols, topics we will cover later. However, if you are curious, feel free to skip ahead to "Higher bandwidth is not the key."

Measuring latency with traceroute

Traceroute is a simple network diagnostic tool that records the routing path of a packet and the latency of each hop along the route. To identify each hop, it sends a sequence of packets toward the destination with progressively larger TTL values; when a packet's TTL expires, the intermediate router returns an ICMP Time Exceeded message, which traceroute uses to measure the latency of each network hop.

On Unix platforms the tool is run from the command line as traceroute; on Windows it is called tracert.
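As a convenience, here is a minimal Python sketch (not part of the original text) that shells out to the system traceroute command and reports the average of the three per-hop round-trip times; the parsing assumes the usual Unix output format shown above and would need adjusting for tracert on Windows.

import re
import subprocess

def per_hop_latency(host):
    """Run the system traceroute and yield (hop, avg_rtt_ms) pairs."""
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        match = re.match(r"\s*(\d+)\s", line)
        if not match:
            continue  # skip the header line
        rtts = [float(ms) for ms in re.findall(r"([\d.]+)\s*ms", line)]
        if rtts:  # hops that only time out ("* * *") are skipped
            yield int(match.group(1)), sum(rtts) / len(rtts)

if __name__ == "__main__":
    for hop, avg_ms in per_hop_latency("google.com"):
        print(f"hop {hop:>2}: {avg_ms:6.2f} ms")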

Core network bandwidth

An optical fiber is a simple "light pipe", slightly thicker than a human hair, that carries optical signals between the two ends of the cable. Metal wires can also be used, but they suffer from higher signal loss and electromagnetic interference, and carry higher maintenance costs. A packet will likely travel over both kinds of media on its way, but for any long-distance transmission, fiber is unavoidable.

Optical fiber has a clear bandwidth advantage because each fiber can carry light at many different wavelengths (channels) at once, a technique known as wavelength-division multiplexing (WDM). The total bandwidth of a fiber link is therefore the product of the per-channel data rate and the number of multiplexed channels.

As of early 2010, researchers had demonstrated multiplexing of more than 400 wavelengths, each with a capacity of 171 Gbit/s, for a total of roughly 70 Tbit/s of bandwidth over a single fiber link! We would need thousands of copper (electrical) wires to reach that throughput. Not surprisingly, long-haul transmission, such as the subsea links between continents, is carried over fiber-optic links. Each cable contains multiple strands of fiber (four strands is a common number), which translates into a bandwidth capacity of hundreds of Tbit/s per cable.
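The totals above follow directly from multiplying the number of channels by the per-channel rate, and the per-fiber figure by the strand count:

\[
400 \times 171\ \text{Gbit/s} \approx 68\ \text{Tbit/s per fiber},
\qquad
4 \times 68\ \text{Tbit/s} \approx 270\ \text{Tbit/s per cable}
\]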

Access network bandwidth

The backbone, or the fiber links, that form the core data paths of the Internet are capable of moving hundreds of Tbit/s. However, the available capacity at the edge of the network is far smaller, and it varies wildly with the deployed technology: dial-up, DSL, cable, a variety of wireless technologies, fiber-to-the-home, and the performance of the local router. The bandwidth available to a user is determined by the lowest-capacity link between the client and the destination server (Figure 1-1).
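In other words, end-to-end throughput is capped by the weakest link in the path; a tiny Python sketch with made-up link capacities makes the point:

# Hypothetical capacities (in Mbps) of each link between a client and a server.
path_mbps = {
    "home Wi-Fi": 150,
    "last mile (DSL)": 8,
    "ISP aggregation": 10_000,
    "Internet backbone": 100_000,
    "server uplink": 1_000,
}

bottleneck = min(path_mbps, key=path_mbps.get)
print(f"end-to-end throughput is limited to {path_mbps[bottleneck]} Mbps "
      f"by the '{bottleneck}' link")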

Akamai Technologies operates a global CDN with servers deployed around the world, and publishes free quarterly reports on the average connection speeds observed by those servers. Table 1-2 shows the bandwidth trends for the first quarter of 2013.

Table 1-2: Average bandwidth in Q1 2013, as observed by Akamai's servers

Rank | Country/Region | Average Mbps | Year-over-year change
- | Global average | 3.1 | 17%
1 | South Korea | 14.2 | -10%
2 | Japan | 11.7 | -3.6%
3 | Hong Kong | 10.9 | 16%
4 | Switzerland | 10.1 | 24%
5 | Netherlands | 9.9 | 12%
... | ... | ... | ...
9 | United States | 8.6 | 27%

The above data excludes traffic from mobile carriers, a topic we will return to in detail later. For now, note that mobile bandwidth varies widely and is generally slower. Even setting mobile aside, the global average broadband bandwidth in early 2013 was just 3.1 Mbps! South Korea led the world at 14.2 Mbps, and the United States ranked ninth at 8.6 Mbps.

To put this in perspective, a high-definition video stream typically requires 2-10 Mbps of bandwidth, depending on the resolution and the codec. So at today's average bandwidth, a typical user can stream a low-resolution video, which already consumes most of the available capacity; a household with several users trying to watch video at the same time is essentially out of luck.
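The arithmetic behind that claim is simple: even a low-end 2 Mbps HD stream consumes most of a 3.1 Mbps connection, and two simultaneous streams exceed it outright:

\[
\frac{2\ \text{Mbps}}{3.1\ \text{Mbps}} \approx 65\%,
\qquad
2 \times 2\ \text{Mbps} = 4\ \text{Mbps} > 3.1\ \text{Mbps}
\]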

Figuring out where a user's bandwidth bottleneck lies is an easy but important exercise. There are many online services, such as speedtest.net operated by Ookla (Figure 1-2), that measure upstream and downstream rates against a nearby server; we will discuss in the TCP chapters why picking a nearby server matters. Running a test on one of these services is a good way to check whether your connection speed matches what your local ISP advertises.

Figure 1-2 Upstream and downstream speed test (speedtest.net)

However, while choosing an ISP with a high-bandwidth link is necessary, it is no guarantee of stable end-to-end performance. The network can become congested at any intermediate node at any point in time due to high demand, hardware failures, concentrated network attacks, or a host of other causes. High variability in throughput and latency is an intrinsic property of our data networks; predicting, managing, and adapting to the ever-changing "network weather" is a complex task.

Higher bandwidth and lower latency

Our demand for higher bandwidth is growing quickly, driven in large part by video streaming, which already accounts for half of all Internet traffic. The good news is that, while it may not be cheap, a variety of techniques are available to grow the usable capacity: we can add more fibers to our fiber links, we can deploy more links along congested routes, and we can improve WDM techniques to push more data through existing links.

TeleGeography, a telecom market research and consulting firm, estimates that as of 2011 we were using, on average, only 20% of the available capacity of the deployed subsea fiber links. Even more importantly, between 2007 and 2011 more than half of the added capacity on trans-Pacific cables came from WDM upgrades: more data carried over the same fiber by the equipment at either end of the link. Of course, we cannot expect these improvements to continue indefinitely, since every medium eventually reaches a point of diminishing returns. Nonetheless, as long as the economics allow, there is no fundamental reason why bandwidth cannot keep growing; if all else fails, we can simply lay more fiber-optic links.

Improving latency is an entirely different story. We can get a little closer to the speed of light by developing lower-refractive-index materials and faster routers. However, given that the refractive index of today's fiber is already around 1.5, at best this yields a modest improvement of roughly 30%. Unfortunately, there is simply no way around the laws of physics: the speed of light places a hard limit on the minimum latency.

So, since we cannot make the light signal travel faster, we can instead make the distance shorter: the shortest distance between any two points on Earth is the great-circle path between them. However, new cables can rarely be laid along that path, constrained as they are by terrain, by social and political factors, and, of course, by cost.

Therefore, to improve the performance of our applications, we need to design and optimize our protocols around the limits of today's bandwidth and of the speed of light in fiber: reduce the number of routing hops, move the data closer to the client, and hide network latency in our applications through caching, prefetching, and other techniques.

