Summary of the Ethernet store-and-forward mechanism



When an Ethernet device receives a packet, it first stores the packet in its buffer and then forwards it. An Ethernet device is like a cargo transfer station, and a packet is like a piece of packed cargo.

How does the cargo transfer station work? Assume the station has four doors, each of which can serve as both an entrance and an exit, and that all doors receive and ship goods at the same speed. In the middle of the station is a shared storage area. Goods that cannot be forwarded immediately are held there temporarily; once the shared area is full, newly arriving goods are discarded.
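To make the analogy concrete, here is a minimal Python sketch of a store-and-forward device with one shared buffer. The class name, buffer size, and port labels are assumptions made up for illustration, not a real switch implementation.

```python
from collections import deque

SHARED_BUFFER_SIZE = 8   # assumed capacity of the shared area, in frames

class SharedBufferSwitch:
    """A toy 4-port store-and-forward device with one shared buffer."""

    def __init__(self, buffer_size=SHARED_BUFFER_SIZE):
        self.buffer = deque()          # the shared storage area
        self.buffer_size = buffer_size
        self.dropped = 0

    def receive(self, frame, out_port):
        """Store an arriving frame, or drop it if the shared buffer is full."""
        if len(self.buffer) >= self.buffer_size:
            self.dropped += 1
            return False
        self.buffer.append((frame, out_port))
        return True

    def forward_one(self):
        """Forward the oldest buffered frame out of its egress port."""
        if self.buffer:
            frame, out_port = self.buffer.popleft()
            print(f"forward {frame!r} -> port {out_port}")

switch = SharedBufferSwitch()
switch.receive("frame-1", out_port="B")   # arrives at door A, destined for door B
switch.forward_one()
```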


Basic store-and-forward

First, let's look at the simplest situation: goods entering at door A must leave through door B, and goods entering at door C must leave through door D.


[Figure: 001-basic-forwarding.png]


Goods entering at door A are quickly transferred out through door B, and goods entering at door C are quickly forwarded out through door D. In this case, nothing accumulates in the shared storage area.

This is the simplest case for Ethernet forwarding.


The role of the buffer

Now, goods arrive at door A and door B at full speed at the same time, and all of them must be transferred out through door C.


[Figure: 002-buffer.png]


Door C cannot ship fast enough: the goods arriving through A and B cannot all be transferred out immediately, so some of them are held in the shared storage area. If doors A and B keep receiving goods at full speed, the shared area will soon run out of space. Fortunately, A and B do not always run at full speed; most of the time the storage area is not fully occupied, and door C can steadily forward the accumulated goods.

The Ethernet device's buffer is similar to the transfer station's storage area: it holds Ethernet frames temporarily. A larger buffer can absorb larger bursts, but it also costs more, so the buffer size is usually a trade-off between forwarding performance and cost.
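A tiny simulation (with made-up arrival rates and a made-up buffer size) shows how the shared buffer absorbs a burst from two ingress ports and lets the slower egress port drain it afterwards.

```python
from collections import deque

BUFFER_SIZE = 10   # assumed shared-buffer capacity, in frames
buffer = deque()
dropped = 0

for tick in range(12):
    # Doors A and B each deliver one frame per tick during a short burst.
    arrivals = 2 if tick < 4 else 0
    for i in range(arrivals):
        if len(buffer) < BUFFER_SIZE:
            buffer.append(f"frame-{tick}-{i}")
        else:
            dropped += 1
    # Door C ships at most one frame per tick.
    if buffer:
        buffer.popleft()
    print(f"tick {tick}: queued={len(buffer)} dropped={dropped}")
```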


Port Buffer

Now let's look at a more complex situation: doors A and B deliver goods at full speed toward door C at the same time, while door D needs to transfer goods to door B.


[Figure: 003-before-port-buffer.png]


With doors A and B receiving at full speed, the shared storage area is already short of space. When goods also arrive at door D, the shared area is clearly not enough. Yet door B is actually idle, so the goods arriving at door D could be forwarded to it without delay. The lack of space in the shared storage area has become the bottleneck of the transfer.


The station administrator came up with a solution: set aside a small storage area at each door that can only hold goods destined for that door. Goods are placed in the door's own area first, and only spill over into the shared area when the door's area is full. This way, the goods that enter at door D can be transferred smoothly out through door B.


[Figure: 004-port-buffer.png]


Because each door's dedicated area takes up part of the space, the shared storage area shrinks, but the adjustment is worthwhile for the gain in transfer efficiency. Ethernet switches allocate their buffer space with a similar mechanism.
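Here is a hedged sketch of that allocation rule; the per-port and shared sizes are arbitrary and the function name is hypothetical, but the accept-or-drop decision follows the description above: use the destination port's reserved area first, then fall back to the shared pool.

```python
RESERVED_PER_PORT = 2   # assumed frames reserved per door (egress port)
SHARED_SIZE = 6         # assumed size of the remaining shared area

reserved_used = {port: 0 for port in "ABCD"}
shared_used = 0

def enqueue(out_port):
    """Accept a frame destined for out_port, using its reserved space first."""
    global shared_used
    if reserved_used[out_port] < RESERVED_PER_PORT:
        reserved_used[out_port] += 1
        return True
    if shared_used < SHARED_SIZE:
        shared_used += 1
        return True
    return False   # both the port's reserved area and the shared pool are full

# Even if traffic toward door C has exhausted the shared pool, a frame
# destined for the idle door B can still use B's reserved space.
shared_used = SHARED_SIZE
print(enqueue("B"))   # True
```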


Outbound port queue

In some cases, certain goods need to be forwarded urgently, so the administrator has to grade them. Based on their characteristics, the administrator divides all goods into eight levels and lines them up in eight queues at the exit, one queue per level. Goods in higher-level queues are given priority when sending.

If higher-level goods were always sent first and kept arriving continuously, the lower-level goods might never be forwarded. To solve this, the administrator adjusts the sending method: higher-level queues send more goods per round and lower-level queues send fewer, so even low-level goods get a chance to be forwarded.

This is the basic principle of the Ethernet device's outbound port queues. Coordinating how packets of different priorities are sent on an outbound port is the "queue scheduling" problem.
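One common way to realize this is weighted round-robin scheduling. The sketch below assumes eight queues and an arbitrary weight per queue; real devices expose the scheduling algorithm and weights as configuration.

```python
from collections import deque

queues = [deque() for _ in range(8)]   # queue 7 = highest priority
weights = [1, 2, 3, 4, 5, 6, 7, 8]     # assumed frames sent per round, per queue

def schedule_one_round():
    """Serve each queue up to its weight, highest priority first."""
    sent = []
    for prio in range(7, -1, -1):
        for _ in range(weights[prio]):
            if queues[prio]:
                sent.append(queues[prio].popleft())
    return sent

# Even with queue 7 permanently busy, queue 0 still gets one slot per round.
queues[7].extend(f"hi-{i}" for i in range(20))
queues[0].append("lo-0")
print(schedule_one_round())
```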


Flow Control

In the "port buffer" section, we mentioned that door A and door B are always shipped to door C at full speed, and the space in the warehouse area will never meet the requirements.

Now the station administrator wants to solve this problem. All goods are registered at the entrance, and the record is cleared after the goods are forwarded, so the administrator always knows how much is in transit. If too many goods from one door are piling up, the administrator tells the deliverer at that door to stop delivering for a while. If, after that pause, too many goods are still piled up, the administrator tells the deliverer to stop for another period. Only when the backlog from that entrance drops below a certain level may the deliverer resume deliveries.

This is the basic principle of Ethernet flow control. The Ethernet port sends a PAUSE frame to the peer to ask it to suspend sending packets; the destination MAC address of the PAUSE frame is 0180-C200-0001. Note that if the peer does not have flow control enabled, it ignores the PAUSE frame and keeps sending, so packets arriving at the local port keep accumulating and are eventually discarded.
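For reference, an IEEE 802.3x PAUSE frame carries the reserved destination MAC 0180-C200-0001, the MAC Control EtherType 0x8808, opcode 0x0001, and a 2-byte pause time measured in 512-bit-time quanta. The sketch below builds such a frame in Python; the source MAC and pause value are placeholders.

```python
def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    """Build an 802.3x PAUSE frame (without the FCS, which hardware appends)."""
    dst_mac = bytes.fromhex("0180C2000001")    # reserved multicast address
    ethertype = (0x8808).to_bytes(2, "big")    # MAC Control
    opcode = (0x0001).to_bytes(2, "big")       # PAUSE
    pause_time = pause_quanta.to_bytes(2, "big")
    frame = dst_mac + src_mac + ethertype + opcode + pause_time
    return frame.ljust(60, b"\x00")            # pad to the minimum frame size

# Placeholder source MAC; 0xFFFF quanta asks the peer to pause for the maximum time.
frame = build_pause_frame(bytes.fromhex("020000000001"), pause_quanta=0xFFFF)
print(frame.hex())
```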

Half-duplex links are a different story. In half-duplex mode, a PAUSE frame sent locally may collide with packets sent by the peer, so the peer may never see it and the flow control mechanism cannot work.

We use the term "flow control" here because it is familiar. Strictly speaking, the PAUSE-based mechanism described above applies only to full-duplex mode; half-duplex mode relies on a different mechanism, backpressure (referred to in the source as ingress backpressure, IBP), in which the local port sends a jamming signal outward to hold off the peer.

The flow control described so far is port-based: once it takes effect, all traffic on the port is briefly interrupted. Priority-based flow control triggers the mechanism per priority, so only traffic at a specific priority is paused while other traffic keeps being forwarded normally.
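The difference can be sketched as a simple decision rule; the threshold and the per-priority backlog figures below are invented for illustration and are not taken from any particular device.

```python
THRESHOLD = 100   # assumed backlog limit, in frames

backlog_per_priority = {0: 20, 3: 150, 5: 10}   # hypothetical occupancy

# Port-based flow control: if the port as a whole is backed up,
# everything on the port is paused.
pause_whole_port = sum(backlog_per_priority.values()) > THRESHOLD

# Priority-based flow control: pause only the priorities that exceed
# the threshold; other priorities keep forwarding normally.
paused_priorities = [p for p, n in backlog_per_priority.items() if n > THRESHOLD]

print(pause_whole_port)    # True  -> all traffic on the port stops briefly
print(paused_priorities)   # [3]   -> only priority 3 is paused
```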


Preventing Head-of-Line Blocking

The term "Line header Blocking" seems to be irrelevant to Ethernet. The term "Head-Of-Line Blocking" is short for "HOL Blocking ".


To simplify the description, here is a simple example. Door B of the transfer station receives goods at full speed, all of which must be forwarded through door C. At the same time, door A also receives goods at full speed, half destined for door C and the other half for door D.


[Figure: 005-hol-blocking-prevention.png]


The shipping speed of door C is lower than the combined receiving speed of doors A and B, so door C quickly becomes congested and its storage area fills up. At this point, goods are still waiting in line outside door A: the goods bound for door C cannot enter, and the goods behind them bound for door D cannot enter either, even though door D is free. It is like a single-lane road where a vehicle wants to turn right but is stuck behind vehicles waiting to go straight. This is "head-of-line blocking".

How can this be solved? The administrator's solution is to count the goods backed up at each exit. When too many goods pile up at an exit, the administrator notifies every entrance to enter the "HOL blocking prevention" state, and each entrance then automatically discards goods destined for that exit. This gives the other goods at the entrance a chance to get into the station. When the backlog at the exit falls below a certain level, the exit notifies the entrances to leave the "HOL blocking prevention" state.

That is the working mechanism of head-of-line blocking prevention.
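A minimal sketch of that admission rule, with an assumed backlog threshold and hypothetical per-port counters:

```python
HOL_THRESHOLD = 50   # assumed backlog limit per egress port, in frames

egress_backlog = {"B": 3, "C": 120, "D": 5}   # hypothetical queue depths

def admit(frame, egress_port):
    """Accept a frame at the ingress unless its egress port is congested."""
    if egress_backlog[egress_port] > HOL_THRESHOLD:
        # HOL-blocking-prevention state: drop at the entrance so that
        # frames bound for other, uncongested ports can still get in.
        return False
    egress_backlog[egress_port] += 1
    return True

print(admit("frame-to-C", "C"))   # False: door C is congested, frame dropped
print(admit("frame-to-D", "D"))   # True: traffic to door D is unaffected
```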


Packet Cache Time

The discussion of Ethernet store-and-forward above ignores one question: how long can a packet stay in the buffer? Under normal conditions the latency introduced by forwarding is very short, on the order of microseconds, but a packet's time in the buffer can be far longer than that. The problem is that if a packet is cached too long it loses its meaning; this is especially evident for voice and video services, which have strict latency requirements. From another perspective, packets cached for a long time also occupy valuable buffer space and affect other services.
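One way a device could bound this is to age out frames that have waited too long. The sketch below uses an assumed maximum cache time and a simple timestamp check, purely to illustrate the idea.

```python
import time
from collections import deque

MAX_CACHE_SECONDS = 0.5   # assumed upper bound on buffering time

buffer = deque()          # entries are (enqueue_time, frame)

def enqueue(frame):
    buffer.append((time.monotonic(), frame))

def dequeue():
    """Return the next frame that is still fresh enough to be worth forwarding."""
    while buffer:
        enqueued_at, frame = buffer.popleft()
        if time.monotonic() - enqueued_at <= MAX_CACHE_SECONDS:
            return frame
        # Frame sat in the buffer too long: discard it and try the next one.
    return None

enqueue("voice-frame")
print(dequeue())          # forwarded immediately, so it is still fresh
```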


This article is from the "Network work room" blog. For more information, please contact the author!
