Understanding Ethernet switch parameters

Switches can be divided into Layer 2, Layer 3, and Layer 4 switches according to the protocol layer at which they operate, and into managed and unmanaged switches according to whether they support network management. This article describes the main parameters of an Ethernet switch and how to evaluate them.

1. Packet forwarding rate and backplane bandwidth

The packet forwarding rate indicates how many packets a switch can forward and is normally expressed in pps (packets per second); typical switches range from tens of Kpps to hundreds of Mpps. The packet forwarding rate is the number of packets, in millions per second (Mpps), that the switch can forward, that is, how many packets the switch can handle simultaneously; it is measured in packets rather than bits. An important factor that determines the packet forwarding rate is the switch's backplane bandwidth, which indicates the total data-exchange capacity of the switch: the higher the backplane bandwidth, the stronger the switch's ability to process data and therefore the higher the packet forwarding rate.
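As a rough illustration, the theoretical wire-speed packet rate of a single port can be derived from standard Ethernet framing overhead. The following is a minimal Python sketch, assuming the usual 8-byte preamble and 12-byte inter-frame gap; the link rates shown are just examples. It also shows where the 1.488 Mpps per Gigabit port figure used below comes from.

```python
# Minimal sketch: theoretical wire-speed packet rate for minimum-size Ethernet frames.
# Assumes standard Ethernet overhead of an 8-byte preamble and a 12-byte inter-frame gap.

def wire_speed_pps(link_rate_bps: float, frame_bytes: int = 64) -> float:
    """Maximum packets per second one port can carry at the given link rate."""
    preamble = 8          # bytes
    inter_frame_gap = 12  # bytes
    bits_on_wire = (frame_bytes + preamble + inter_frame_gap) * 8
    return link_rate_bps / bits_on_wire

if __name__ == "__main__":
    for name, rate in [("100 Mbps", 100e6), ("1 Gbps", 1e9), ("10 Gbps", 10e9)]:
        print(f"{name}: {wire_speed_pps(rate) / 1e6:.3f} Mpps at 64-byte frames")
    # 1 Gbps -> 1e9 / (84 * 8) bits = 1.488 Mpps, the per-port figure used in the
    # full-configuration throughput formula below.
```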
The backplane bandwidth of a switch is the maximum amount of data that can be exchanged between the switch's interface processor or interface card and the data bus. It indicates the total data-exchange capacity of the switch, is measured in Gbps, and is also called the switching bandwidth. Backplane bandwidths range from a few Gbps up to several hundred Gbps; the higher the backplane bandwidth, the stronger the switch's ability to process data, but also the higher the design cost. How can we judge whether the backplane bandwidth of a switch is sufficient? In general, two checks should be considered:

1) The capacities of all ports added together and multiplied by two (to account for full duplex) should not exceed the backplane bandwidth. Only then can full-duplex, non-blocking switching be achieved, which shows the switch can deliver its maximum data-exchange performance.

2) Full-configuration throughput (Mpps) = number of ports at full configuration × 1.488 Mpps, where 1.488 Mpps is the theoretical throughput of one Gigabit port with 64-byte packets. For example, a switch that can provide up to 64 Gigabit ports must offer a full-configuration throughput of 64 × 1.488 Mpps ≈ 95.2 Mpps to guarantee non-blocking packet switching with all ports working at wire speed. If a switch can provide up to 176 Gigabit ports but its declared throughput is less than 261.9 Mpps (176 × 1.488 Mpps ≈ 261.9 Mpps), the user has reason to believe the switch uses a blocking design.

Generally, a switch that meets both requirements can be considered a qualified switch. How well the backplane bandwidth is actually utilized is closely tied to the internal structure of the switch. Currently there are three main internal structures. The first is the shared-memory structure, which relies on a central switching engine to provide a high-performance connection across all ports; the core engine examines every incoming packet to determine its route. This approach requires a great deal of memory bandwidth and carries a high management cost, and as the number of ports grows the central memory becomes very expensive, so the switching core becomes the performance bottleneck. The second is the crossbar (cross-bus) structure, which establishes direct point-to-point connections between ports; this is good for single-point transmission but poorly suited to multi-point transmission. The third is the hybrid crossbar structure, which divides the integrated crossbar matrix into smaller matrices connected by a high-performance bus. Its advantages are fewer crossbars, lower cost, and less bus contention; however, the bus that connects the matrices becomes the new performance bottleneck.
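As a rough aid, the two checks above can be expressed as a short calculation. The following is a minimal Python sketch; it assumes every port runs at the same nominal speed, and the port counts, backplane figures, and declared throughputs in the example are illustrative only.

```python
# Minimal sketch of the two non-blocking checks described above.
# Assumes every port runs at the same nominal speed; all figures are illustrative.

GIGABIT_PORT_MPPS = 1.488  # theoretical Mpps of one 1 Gbps port at 64-byte frames

def is_nonblocking(num_ports: int,
                   port_speed_gbps: float,
                   backplane_gbps: float,
                   declared_throughput_mpps: float) -> bool:
    # Check 1: full-duplex traffic from every port must fit within the backplane.
    required_backplane_gbps = num_ports * port_speed_gbps * 2
    # Check 2: declared throughput must reach wire speed on every port
    # (1.488 Mpps scales linearly with port speed, e.g. 10 Gbps -> 14.88 Mpps).
    required_throughput_mpps = num_ports * port_speed_gbps * GIGABIT_PORT_MPPS
    return (backplane_gbps >= required_backplane_gbps
            and declared_throughput_mpps >= required_throughput_mpps)

if __name__ == "__main__":
    # Example from the text: 64 Gigabit ports need a 128 Gbps backplane and
    # 64 x 1.488 ~= 95.2 Mpps of forwarding capacity.
    print(is_nonblocking(64, 1.0, 128, 95.3))    # True
    print(is_nonblocking(176, 1.0, 352, 200.0))  # False: below 176 x 1.488 ~= 261.9 Mpps
```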
2. Ethernet switch stacking

Switch stacking uses a dedicated connection cable supplied by the manufacturer to connect the "UP" stack port of one switch directly to the "DOWN" stack port of another, so as to expand the number of ports beyond what a single switch provides. A stack can typically hold 4 to 9 switches, and stacking is commonly used in larger networks to meet their port-count requirements. Note that only stackable switches have such ports: a stackable switch generally provides both an "UP" and a "DOWN" stack port. When several switches are connected together in this way they behave like a modular switch, and the whole stack can be managed as a single device.

In general, when several switches are stacked, one of them serves as the managing switch, and through it the other "independent" switches in the stack are managed. Stacked switches make it easy to expand a network and are an ideal choice when building a new one. All switches in the stack can be managed as a single switch; in other words, the entire stack appears as one switch in the network topology. Stacking relies on a dedicated management module and stack cables. Its advantages are twofold: on the one hand, it adds user ports while a wide-bandwidth link can be established between the switches, so the bandwidth actually available to each user may be greater (provided not all ports are in use); on the other hand, several switches can be treated as one large switch, which makes unified management easier.

3. Switching modes

When forwarding packets from a source port to a destination port, switches currently use one of three switching modes: cut-through, store-and-forward, and fragment-free. Store-and-forward is currently the mainstream mode.

(1) Cut-through. An Ethernet switch using cut-through switching can be thought of as a crossbar telephone exchange between its ports. When a packet is detected on an input port, the switch examines only the packet header, obtains the destination address, looks it up in its internal dynamic address table to find the corresponding output port, and connects the packet through at the crosspoint of the input and output ports. Because it examines only the packet header (normally just 14 bytes) and does not need to buffer the packet, cut-through switching has the advantages of low latency and fast switching. (Latency is the time a packet takes from entering a network device to leaving it.) Cut-through has three main drawbacks. First, because the switch does not store the packet contents, it cannot check whether the forwarded packet is damaged and provides no error detection. Second, because there is no buffering, input and output ports running at different rates cannot be connected directly and packets are easily lost; to connect to a higher-speed network, such as Fast Ethernet (100BASE-T), FDDI, or ATM, the input and output ports cannot simply be "wired through", because the speed difference between them requires buffering. Third, as the number of ports on the switch increases, the switching matrix becomes more complex and harder to implement.
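To make the latency difference concrete, the following minimal Python sketch compares how long each mode must wait before it can begin transmitting a frame: cut-through needs only the 14-byte header mentioned above, while store-and-forward (described next) must buffer the whole frame first. Lookup and propagation delays are ignored, and the frame sizes and link rate are illustrative.

```python
# Minimal sketch: how long each switching mode waits before it can start
# forwarding a frame. Lookup and propagation delays are ignored.

def forwarding_delay_us(frame_bytes: int, link_rate_bps: float, mode: str) -> float:
    if mode == "cut-through":
        received = 14            # only the 14-byte header has to arrive first
    elif mode == "store-and-forward":
        received = frame_bytes   # the whole frame must be buffered first
    else:
        raise ValueError(f"unknown mode: {mode}")
    return received * 8 / link_rate_bps * 1e6  # microseconds

if __name__ == "__main__":
    for size in (64, 512, 1518):
        ct = forwarding_delay_us(size, 100e6, "cut-through")
        sf = forwarding_delay_us(size, 100e6, "store-and-forward")
        print(f"{size:4d}-byte frame at 100 Mbps: "
              f"cut-through {ct:.2f} us, store-and-forward {sf:.2f} us")
```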
(2) Store-and-forward. Store-and-forward is one of the most widely used techniques in computer networking. The switch controller first buffers the packet arriving at the input port, checks whether it is correct, and filters out collision fragments and errored packets. Only after confirming that the packet is correct does it extract the destination address, look up the output port in its address table, and send the packet out. Because of this, the store-and-forward method suffers from higher processing latency, but it can detect errors in the packets entering the switch and it supports switching between input and output ports running at different speeds, which effectively improves network performance. Supporting different port speeds lets high-speed and low-speed ports work together: for example, packets arriving on a 10 Mbps port are stored and then forwarded out on a 100 Mbps port.

(3) Fragment-free. Fragment-free is a compromise between cut-through and store-and-forward. Before forwarding, it checks whether the packet is at least 64 bytes (512 bits) long. If the packet is shorter than 64 bytes it is a collision fragment (a residual frame) and is discarded; if it is 64 bytes or longer, it is forwarded. This method is faster than store-and-forward but slower than cut-through, and it is widely used in low-end switches because it avoids forwarding residual frames. Switches using this technique generally employ a special FIFO (First In, First Out) buffer: bits enter at one end and leave at the other in the same order. When a frame is received it is stored in the FIFO; if the frame ends at a length of less than 512 bits, the FIFO contents (a residual frame) are discarded. The residual-frame forwarding problem of an ordinary cut-through switch therefore does not arise, which makes this a good compromise: packets are buffered before forwarding, collision fragments are never propagated across the network, and overall transmission efficiency improves.
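The fragment-free rule itself is simple enough to express directly. The following minimal Python sketch keeps only frames of at least 64 bytes (512 bits) and drops shorter collision fragments; the frame contents used in the example are illustrative.

```python
# Minimal sketch of the fragment-free rule: frames shorter than 64 bytes
# (512 bits) are treated as collision fragments and dropped.

MIN_VALID_FRAME_BYTES = 64  # 512 bits

def fragment_free_filter(frames: list) -> list:
    """Return only the frames long enough to be forwarded."""
    forwarded = []
    for frame in frames:
        if len(frame) < MIN_VALID_FRAME_BYTES:
            continue  # collision fragment (residual frame): discard
        forwarded.append(frame)
    return forwarded

if __name__ == "__main__":
    frames = [b"\x00" * 30, b"\x00" * 64, b"\x00" * 1500]
    kept = fragment_free_filter(frames)
    print([len(f) for f in kept])  # [64, 1500] -- the 30-byte fragment is dropped
```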
