This article describes the configuration parameters of Ethernet switches. By the protocol layer at which they operate, switches can be divided into Layer 2, Layer 3, and Layer 4 switches; by whether they support network management, they can be divided into unmanaged and managed switches.
1. Packet forwarding rate and backplane bandwidth
Packet forwarding rate indicates a switch's capacity to forward packets. It is measured in pps (packets per second), and typical switches range from tens of Kpps to hundreds of Mpps. The packet forwarding rate is the number of packets the switch can forward per second, that is, how many packets it can forward at the same time.
Packet forwarding rate reflects the switching capability of the switch measured in packets. A key factor behind it is the switch's backplane bandwidth, which indicates the switch's total data-switching capacity: the higher a switch's backplane bandwidth, the stronger its data-processing ability, and thus the higher its packet forwarding rate.
Backplane bandwidth is the maximum amount of data that can be exchanged between the switch's interface processors or interface cards and the data bus. It indicates the switch's total data-switching capacity, is measured in Gbps, and is also called switching bandwidth; the backplane bandwidth of typical switches ranges from a few Gbps to several hundred Gbps. The higher a switch's backplane bandwidth, the stronger its data-processing ability, but also the higher its design cost.
But how do we determine whether a switch's backplane bandwidth is adequate? Generally speaking, two aspects should be considered:
1. The sum of all port capacities, multiplied by two for full duplex, should be less than the backplane bandwidth. If this holds, full-duplex non-blocking switching is possible, which shows the switch has the conditions for maximum data-switching performance.
2. Full-configuration throughput (Mpps) = number of ports × 1.488 Mpps, where a gigabit port has a theoretical throughput of 1.488 Mpps when the packet length is 64 bytes. For example, a switch that can provide up to 64 gigabit ports should have a full-configuration throughput of 64 × 1.488 Mpps = 95.2 Mpps to guarantee non-blocking packet switching when all ports are running at wire speed.
If a switch can provide up to 176 gigabit ports but its claimed throughput is less than 261.8 Mpps (176 × 1.488 Mpps = 261.8 Mpps), the user has reason to believe the switch has a blocking design. A switch that satisfies both conditions can be considered a qualified non-blocking switch. How well the backplane bandwidth is utilized is closely related to the switch's internal architecture.
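The two rules above can be turned into a small sanity check. The sketch below (function and parameter names are mine, not from any vendor tool) first derives the 1.488 Mpps figure, which comes from the minimum Ethernet frame as seen on the wire: 64 bytes of frame plus an 8-byte preamble and a 12-byte inter-frame gap, 84 bytes (672 bits) in total.

```python
# Sketch of the two backplane/throughput rules of thumb described above.
# All names are illustrative; the framing figures follow IEEE 802.3.

# Origin of 1.488 Mpps: a gigabit link carrying back-to-back minimum-size
# frames (64 B frame + 8 B preamble/SFD + 12 B inter-frame gap = 672 bits).
WIRE_BITS = (64 + 8 + 12) * 8                 # bits per frame on the wire
GIG_PPS = 1_000_000_000 / WIRE_BITS           # ~1,488,095 packets/s

def is_nonblocking(ports: int, port_gbps: float,
                   backplane_gbps: float, claimed_mpps: float) -> bool:
    """Apply both conditions from the text."""
    # 1. Sum of port capacities x 2 (full duplex) must fit in the backplane.
    duplex_ok = ports * port_gbps * 2 <= backplane_gbps
    # 2. Claimed throughput must reach full-configuration wire speed.
    wirespeed_ok = claimed_mpps * 1e6 >= ports * GIG_PPS
    return duplex_ok and wirespeed_ok

print(f"{GIG_PPS / 1e6:.3f} Mpps per gigabit port")   # → 1.488 Mpps
# The 64-port example: needs a 128 Gbps backplane and ~95.2 Mpps.
print(is_nonblocking(64, 1, 128, 95.3))               # → True
# The 176-port example: a claim of only 200 Mpps implies a blocking design.
print(is_nonblocking(176, 1, 352, 200))               # → False
```

Both checks are necessary: a switch can have a generous backplane yet still fall short of wire-speed forwarding for small packets, which is why the throughput condition is stated separately.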
At present, switches use mainly three internal architectures. The first is the shared-memory architecture, which relies on a central switching engine to provide high-performance connectivity to all ports, with the core engine examining each incoming packet to determine its route. This approach requires large memory bandwidth and carries high management overhead; in particular, as the port count grows, the cost of the central memory becomes very high, so the switching core becomes the performance bottleneck.
The second is the crossbar architecture, which can establish direct point-to-point connections between ports. This works well for unicast transmission but is not well suited to multicast.
The third is the hybrid crossbar architecture. Its design idea is to divide the overall crossbar matrix into small crossbar matrices connected through a high-performance bus. Its advantages are fewer crossbar buses, lower cost, and less bus contention, but the bus connecting the crossbar matrices becomes the new performance bottleneck.
2. Ethernet switch stacking
Switch stacking uses a dedicated connection cable from the manufacturer to connect the "UP" stacking port of one switch directly to the "DOWN" stacking port of another, expanding the number of ports available from what behaves as a single switch. A typical switch can stack 4 to 9 units.
Switch stacking is generally used in larger networks so that the number of ports can meet the network's needs. Note that only stackable switches have this type of port: a so-called stackable switch is one that typically has both an "UP" and a "DOWN" stacking port.
When multiple switches are connected together, they behave like a single modular switch, and the stack can be managed as one unit. In general, when a stack contains a manageable switch, that switch can manage the other "standalone switches" in the stack. Stackable switches make it easy to expand the network and are ideal for newly built networks.
All switches in the stack can be managed as one switch; that is, from the topology's point of view, the whole stack appears as a single switch.
Stacking technology uses a dedicated management module and stack connection cables. One benefit is that as user ports are added, wider-bandwidth links can be established between the switches, so each user's actual bandwidth may be wider (provided that not all ports are in use). Another is that multiple switches can be treated as one large switch, simplifying unified management.
3. Switching modes
When transmitting packets between source and destination ports, current switches generally use one of three packet-switching modes: cut-through, store-and-forward, and fragment-free. Store-and-forward is the dominant switching mode today.
(1) Cut-through switching. An Ethernet switch using cut-through switching can be understood as a crossbar telephone-style switching matrix between ports. When it detects a packet on an input port, it examines the packet's header, obtains the destination address, consults its internal dynamic lookup table to find the corresponding output port, connects the input and output at their crossing point, and sends the packet to that port, completing the switching function.
Because it examines only the packet header (usually just 14 bytes) and does not need to store the packet, this approach has very low latency and a fast switching speed. Latency is the time it takes a packet to leave a network device after entering it.
Its disadvantages lie mainly in three areas. First, because the switch does not save the packet contents, it cannot check whether the packet is erroneous and so provides no error detection. Second, because there is no buffering, input and output ports running at different rates cannot be connected directly, and packets are easily lost; if the switch connects to a higher-speed network such as Fast Ethernet (100BASE-T), FDDI, or ATM, it cannot simply "wire through" the input and output ports, because the speed mismatch requires buffering. Third, as the number of ports increases, the switching matrix becomes more complex and harder to implement.
(2) Store-and-forward switching. Store-and-forward is the most widely used mode in the computer networking field. The Ethernet switch's controller first buffers the packet arriving on the input port, checks whether it is correct, and filters out erroneous collision packets.
Having determined that the packet is correct, it extracts the destination address, finds the output port to send to via the lookup table, and then sends the packet out. Because of this, store-and-forward introduces latency during data processing, which is its drawback; however, it can perform error detection on packets entering the switch and can support switching between input and output ports of different speeds, which effectively improves network performance.
Another advantage is that the switch can keep high-speed and low-speed ports working together: for example, it can store a packet arriving on a 10 Mbps low-speed port and then forward it out through a 100 Mbps port.
(3) Fragment-free switching. This is a compromise between cut-through and store-and-forward. Before forwarding, it checks whether the packet is at least 64 bytes (512 bits) long: if it is shorter, it is a bogus packet (a collision fragment, or "runt") and is discarded; if it is 64 bytes or longer, the packet is forwarded. This method processes data faster than store-and-forward but slower than cut-through; because it avoids forwarding collision fragments, it is widely used in low-end switches.
Switches that use this switching technique typically employ a special buffer: a first-in, first-out (FIFO) queue, where bits enter at one end and leave at the other in the same order. As a frame is received, it is saved into the FIFO.
If the frame ends with a length of less than 512 bits, the FIFO's contents (the collision fragment) are discarded. This eliminates the fragment-forwarding problem of ordinary cut-through switches and is a very good compromise: packets are buffered before forwarding, ensuring that collision fragments are not propagated across the network, which can greatly improve transmission efficiency.
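The forwarding decisions made by the three modes can be sketched as follows. This is a simplified model, not real switch firmware: the mode names and the 14-byte / 64-byte thresholds come from the text above, while the function and its parameters are my own illustrative choices.

```python
def forward_decision(frame: bytes, mode: str, crc_ok: bool) -> str:
    """Simplified model of the three switching modes described above.

    - cut-through: looks only at the 14-byte header, forwards immediately,
      so it can perform no error checking.
    - fragment-free: waits for the first 64 bytes (512 bits); anything
      shorter is treated as a collision fragment and dropped.
    - store-and-forward: buffers the whole frame and verifies its checksum
      before looking up the output port.
    """
    MIN_FRAME = 64    # bytes; 512 bits, minimum legal Ethernet frame
    HEADER = 14       # dst MAC (6) + src MAC (6) + EtherType (2)

    if mode == "cut-through":
        if len(frame) < HEADER:
            return "drop"        # not even a complete header arrived
        return "forward"         # forwarded before errors can be seen
    if mode == "fragment-free":
        if len(frame) < MIN_FRAME:
            return "drop"        # collision fragment (runt)
        return "forward"         # rest of the frame is still unchecked
    if mode == "store-and-forward":
        if len(frame) < MIN_FRAME or not crc_ok:
            return "drop"        # full error filtering
        return "forward"
    raise ValueError(f"unknown mode: {mode}")

runt = bytes(30)                 # a 30-byte collision fragment
good = bytes(64)                 # a minimum-size, intact frame
print(forward_decision(runt, "cut-through", crc_ok=False))       # → forward
print(forward_decision(runt, "fragment-free", crc_ok=False))     # → drop
print(forward_decision(good, "store-and-forward", crc_ok=True))  # → forward
```

The three calls show the trade-off the text describes: cut-through happily forwards the runt that fragment-free drops, while only store-and-forward also catches frames whose checksum is bad.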