Analysis of layer-3 data packet processing by GE route switch technology (1)

Source: Internet
Author: User

How Gigabit routing switch technology processes layer-3 data packets. I make a point of writing notes every day: record your experiences and put more time into your craft. In the traditional sense, only a device that forwards Layer-2 data is called a switch. A switch processes and forwards frames based solely on the destination and source MAC addresses of each packet; it does not examine the contents of the layer-3 packet.
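To make the layer-2 behavior concrete, here is a minimal sketch (in Python; the names `L2Switch` and `forward` are illustrative, not from any real product) of MAC-based forwarding: the switch learns which port each source MAC appears on and forwards by destination MAC, flooding frames whose destination is unknown.

```python
# Minimal sketch of layer-2 forwarding: learn source MAC -> port mappings,
# forward by destination MAC, flood when the destination is unknown.
# Illustrative only -- a hypothetical model, not a real switch implementation.

class L2Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def forward(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is sent out of."""
        self.mac_table[src_mac] = in_port  # learn the sender's location
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]  # known: unicast out one port
        # Unknown destination: flood to every port except the ingress port.
        return [p for p in range(self.num_ports) if p != in_port]

sw = L2Switch(4)
sw.forward(0, "aa:aa", "bb:bb")        # bb:bb unknown -> flooded to ports 1,2,3
print(sw.forward(1, "bb:bb", "aa:aa"))  # [0] -- aa:aa was learned on port 0
```

Note that nothing here looks at IP addresses: the decision is made entirely from the frame's MAC addresses, which is exactly the layer-2 scope described above.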

For example, LAN switches handle Ethernet, FDDI, and Token Ring switching. Layer-3 packet forwarding is done by the router: for the IP protocol, the router examines the destination and source IP addresses of layer-3 packets and then processes or forwards them accordingly. Until the mid-1990s, hardware chip technology limitations meant that routers and switches were two independent network devices.
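The router's layer-3 lookup can be sketched as a longest-prefix match against a routing table. This is a simplified illustration (the interface names and prefixes are invented example values, and a real router would use a trie rather than a linear scan):

```python
# Sketch of layer-3 forwarding: look up the destination IP against a
# routing table using longest-prefix match. Routes and interface names
# here are hypothetical example values.
import ipaddress

routes = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",   # default route
}

def lookup(dst_ip):
    """Return the egress interface for dst_ip (the longest prefix wins)."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in routes if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(lookup("10.1.2.3"))   # eth1 -- the /16 beats the /8
print(lookup("8.8.8.8"))    # eth2 -- only the default route matches
```

The key contrast with the layer-2 case: the decision is driven by the IP header, not the MAC addresses.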

Internally, a router is structured like a dedicated computer: a main CPU (such as a 486 or MIPS processor) with memory, running software that computes routes and forwards packets. As a result, the limited performance of such software-based routers often becomes the network bottleneck.

To overcome the performance defects of software-based routers, and driven by new ASIC chip technology, the chip that processes layer-2 packets in a switch was enhanced to also process layer-3 packets. A Gigabit switch with this routing capability is called a routing switch.

2.2 The backplane of a routing switch and how it is implemented

The backplane is the central switching component of the switch: it transfers data between the ports of the routing switch. The structure and capacity of the backplane determine the performance of a routing switch. Current routing switches use three main backplane structures: crossbar (Cross Bar), shared memory, and parallel access to shared memory.

Each structure is described in detail below.

2.2.1 Crossbar

This structure is easy to design, scales well, and can offer a low per-port cost in its basic form. However, it has several key limitations. The three main limitations of the crossbar structure and their impact on the network are described in Table 1.

Table 1. The three main limitations of the crossbar structure and their network impact (table not reproduced in this copy)

The combination of static memory allocation and head-of-line blocking makes it difficult to forward traffic by priority on a per-port basis. The crossbar structure's ability to provide reliable QoS support is therefore limited, which conflicts with the IP network's overall drive to improve QoS capability.

2.2.2 Shared memory

The traditional shared-memory structure is bus-based.

This structure overcomes the limitations of the crossbar backplane and is widely used in switches with a backplane capacity below 10 Gbps. In a shared-memory bus structure, all ports access the central memory over a shared bus, and an arbitration mechanism controls each port's access to the shared memory. This eliminates the per-port static memory allocation and head-of-line blocking of the crossbar structure and uses system memory efficiently.
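One common form such an arbitration mechanism can take is a round-robin arbiter: each cycle at most one requesting port is granted the bus, and the search for the next winner starts just past the previous one, so no port is starved. A minimal sketch (hypothetical; real arbiters are hardware logic, not Python):

```python
# Sketch of a round-robin arbiter of the kind used to grant ports access
# to a shared-memory bus. Each call to grant() models one bus cycle.
# Illustrative model only, not a description of any specific chip.

class RoundRobinArbiter:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.last = num_ports - 1  # so port 0 is checked first initially

    def grant(self, requests):
        """requests: set of requesting port numbers; returns winner or None."""
        for i in range(1, self.num_ports + 1):
            port = (self.last + i) % self.num_ports  # start just past last winner
            if port in requests:
                self.last = port
                return port
        return None  # no port requested the bus this cycle

arb = RoundRobinArbiter(4)
print(arb.grant({1, 3}))  # 1
print(arb.grant({1, 3}))  # 3 -- port 1 won last cycle, so 3 now has priority
```

The point of the design is fairness: every requesting port is served within `num_ports` cycles, which is what lets all ports share one central memory without any port being locked out.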

The problem with shared memory is that it is difficult to build an arbitration mechanism fast enough to provide non-blocking performance above 20 Gbps. For example, with current chip technology the data bus is typically 64 bits wide and the bus clock frequency (which is not the chip's internal clock frequency) is 100 MHz, so the system backplane performance reaches 64 bits × 100 MHz = 6.4 Gbps. Counting both directions, that is 12.8 Gbps.
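The arithmetic above is just bus width times clock rate; a trivial sketch reproduces it (the 64-bit and 100 MHz figures are the example values from the text, and `bus_bandwidth_gbps` is an illustrative name):

```python
def bus_bandwidth_gbps(bus_width_bits, clock_mhz, bidirectional=False):
    """Peak backplane bandwidth: bus width (bits) x clock rate, in Gbps."""
    gbps = bus_width_bits * clock_mhz * 1_000_000 / 1_000_000_000
    return gbps * 2 if bidirectional else gbps

print(bus_bandwidth_gbps(64, 100))                      # 6.4
print(bus_bandwidth_gbps(64, 100, bidirectional=True))  # 12.8
```

Both results fall short of the 20 Gbps non-blocking target mentioned above, which is the scalability problem the text is pointing at.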

Therefore, given current memory-arbitration chip technology, a shared-memory system scales poorly.

2.2.3 Parallel access to shared memory

This is also a shared-memory design: all ports share a central memory space.

However, unlike the traditional bus-based shared-memory structure, parallel-access shared memory gives each port on each module a dedicated mechanism for writing to and reading from the central memory simultaneously; no bus-arbitration device is required.

