Today's switched network structure
With the rapid development of technology, the wide application of ASIC chips and network processors, the popularization of optical fiber, and the rapid development of Layer 3 switches, Ethernet switches have gradually moved from the network edge to the network core, and LAN switching technology has become the mainstream networking technology for campus and metropolitan area networks. In this context, today's enterprise network architecture typically uses two or more high-end Layer 3 switches as core devices, optical fiber as the backbone connecting the mid- and low-end switches at the network edge, and 10/100 Mbit/s links from the edge switches to the desktop terminals.
For enterprises facing increasingly fierce market competition, regardless of scale, information processing and network communication systems serve as both infrastructure and production tools that improve efficiency and thereby strengthen core competitiveness. In a mode of operation that relies so heavily on the network, how can we keep the network free of faults, resolve problems quickly when they occur, and expand the network as the business grows? This requires the enterprise network to offer high reliability, sound manageability, and flexible scalability. Today's enterprise networks generally adopt the following solutions:
Network Center
The network center emphasizes high switching capacity and high reliability. Most network centers therefore consist of two rack-mounted core switches with high performance and high reliability. The two devices back each other up, and VRRP is used to provide gateway backup toward the network edge. Rack-mounted devices offer excellent scalability and can be expanded simply by adding line cards.
Network edge
The network edge of the enterprise network mainly provides terminal access. Port density and scalability are the primary concerns, although reliability cannot be ignored. For cost reasons, box (fixed-configuration) switches are generally used. Ports can be expanded through stacking or cascading, link reliability can be improved with STP, RSTP, MSTP, and similar protocols, and link aggregation is used to increase uplink bandwidth, improving both performance and the reliability of the connection between the center and the edge.
Network Management
Most network management today is implemented with SNMP-based network management software. Many small and medium-sized enterprises manage their devices directly through Telnet or a Web interface, while large enterprises use a dedicated network management platform to manage the entire network and its devices.
Deficiencies in traditional networking
Reliability
In the existing networking model, reliability is ensured through device redundancy and link redundancy. Technically, VRRP, STP/RSTP/MSTP (collectively referred to below as STP), and link aggregation are used to improve the reliability of the devices and links in the network.
Although VRRP and STP can meet reliability requirements in most cases, they still have defects. The main reason is that these technologies do not treat the mutually backed-up devices as a single whole; they are built around an active/standby model that over-emphasizes redundancy and therefore falls short on load balancing.
With VRRP, each participating device keeps all of its network functions independent; only the gateway for a given VLAN has an active/standby relationship. Normally only the master device forwards that VLAN's traffic, while the backup device sits completely idle. This not only creates a busy/idle split but also wastes the backup equipment. The traditional workaround is to point different VLAN gateways at different core devices through planning: for example, core switches A and B run VRRP, with A as the master gateway and B as the backup for VLAN 1, and B as the master and A as the backup for VLAN 2. In this way the traffic is manually split across the two core switches to achieve a form of load balancing. The drawbacks of this approach are obvious: it only works at all when there are multiple VLANs, and even then each VLAN carries a different amount of traffic, so the load on the two core switches remains unbalanced.
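The imbalance is easy to see with a small sketch. The following Python snippet (with hypothetical VLAN names and traffic figures, used purely for illustration) models the manual per-VLAN master assignment described above and sums the traffic each core switch actually forwards:

```python
# Minimal sketch: manual per-VLAN load sharing with VRRP.
# VLAN names and traffic figures are hypothetical, for illustration only.

# Planned VRRP roles: each VLAN's gateway is mastered on one core switch.
vrrp_master = {"VLAN 1": "A", "VLAN 2": "B", "VLAN 3": "A"}

# Assumed offered load per VLAN, in Mbit/s.
vlan_load = {"VLAN 1": 800, "VLAN 2": 150, "VLAN 3": 600}

def per_switch_load(master_map, load_map):
    """Sum the traffic each core switch actually forwards.

    Under VRRP only the master forwards a VLAN's traffic; the backup
    stays idle for that VLAN, so the split depends entirely on planning.
    """
    totals = {"A": 0, "B": 0}
    for vlan, master in master_map.items():
        totals[master] += load_map[vlan]
    return totals

print(per_switch_load(vrrp_master, vlan_load))
# -> {'A': 1400, 'B': 150}: even with careful per-VLAN planning,
#    the two "load-shared" core switches end up heavily unbalanced.
```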
STP has the same problem: under normal circumstances only the primary link carries traffic while the backup link sits completely idle, so it provides no load balancing. The later RSTP/MSTP enhancements address some of STP's shortcomings, but they require careful planning and complicated configuration, which makes them harder to deploy and maintain. Link aggregation can expand bandwidth and share load, but all member links of an aggregation group must terminate on the same switch at each end. It can therefore share load only across links, not across devices, and that switch remains a single point of failure.
Management
In today's networking model, network devices can be managed in several ways. Depending on whether the management traffic travels over the production network, management can be divided into in-band and out-of-band. In-band management includes the Web interface and SNMP-based network management software; out-of-band management is performed through the device's console (control) port.
Because in-band management allows devices to be managed remotely, it is used more often, but it requires each device to have a unique IP address. In the traditional model, apart from some stacking schemes, every device under network management must be assigned its own IP address. For the handful of core switches this is not a problem, but for the large number of edge switches it both increases configuration complexity and consumes a large amount of address space. Even switches that work in stacking mode and share one IP address still appear in the management interface as a set of individual switches stacked together rather than a real whole, and configuration must still be carried out on each switch separately.
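The administrative cost scales with the number of management addresses. The short sketch below (hypothetical addresses, and a placeholder snmp_get helper standing in for any real SNMP library call) contrasts polling 48 individually addressed edge switches with polling one Fabric that answers on a single address:

```python
# Minimal sketch of the management burden. Addresses are illustrative,
# and snmp_get() is a hypothetical placeholder for a real SNMP GET.

edge_switch_ips = [f"10.1.0.{i}" for i in range(1, 49)]  # 48 individually managed boxes
fabric_ip = "10.1.0.254"                                 # one IRF Fabric, one address

def snmp_get(ip: str, oid: str) -> str:
    """Placeholder standing in for a real SNMP GET request."""
    return f"value of {oid} from {ip}"

# Traditional networking: one management address and session per switch.
for ip in edge_switch_ips:
    snmp_get(ip, "1.3.6.1.2.1.1.5.0")   # sysName, queried 48 separate times

# IRF networking: the whole Fabric is one managed device behind one address.
print(snmp_get(fabric_ip, "1.3.6.1.2.1.1.5.0"))
```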
Networking costs
In today's switched model, the network center uses rack-mounted switches and the edge uses box switches. Because existing technologies cannot provide load balancing while implementing redundancy, a single core switch purchased at the initial stage must on its own meet the forwarding requirements of the whole network. Once future growth is factored in, the required device performance is far higher than what the network actually needs at build time, so the core device has to be bought as a fully equipped chassis up front, which raises the networking cost. For reliability, an identical device must be purchased purely as a backup, greatly reducing investment efficiency. When the core device eventually can no longer meet growing demands, the only option is to buy a new core device; at that point the original device is too weak for the center yet too powerful for the edge, so the investment is poorly protected.
Problem Solving
As the problems above show, in the current enterprise network model the switches from the center to the edge are all isolated from one another. The two core switches, for example, handle packet forwarding and routing protocols separately and do not form a single whole; they are tied together only by VRRP and STP to provide redundancy and mutual backup. With these technologies, only one of the two switches forwards traffic for any given network segment at a time, which greatly wastes device and link resources. The network load is in effect concentrated on a single device.
To address this problem, we would like a technology that can logically combine multiple devices with the same function into one whole, so that they share the load during normal operation and, when a device or link fails, the remaining devices and links take over its load without affecting services. To expand the network, you would only need to add a switch to the logical whole, and for management the logical whole would appear as a single device. Such a technology greatly improves the reliability, scalability, and manageability of the network while reducing the initial networking cost. This is the IRF distributed networking technology released by 3Com.
IRF distributed Networking Solution
IRF Technology Introduction
IRF stands for Intelligent Resilient Framework, a brand-new solution. Multiple IRF-capable devices can be connected to one another to form a "federated device" called a Fabric; each device that makes up the Fabric is called a Unit. Once the Fabric is formed, it is managed and used as a single whole: the number of ports and the switching capacity can be expanded at any time by adding Units, reliability is enhanced through mutual backup among the Units, and the Fabric is managed as one device, which is convenient for users.
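As a rough illustration of the Fabric/Unit relationship (a conceptual sketch with hypothetical port counts and capacities, not a description of the actual implementation), the following Python snippet models a Fabric that grows its aggregate port count and switching capacity as Units are added, while presenting a single management address:

```python
# Conceptual sketch of a Fabric built from Units; figures are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    ports: int           # fixed ports on this box switch
    capacity_gbps: int   # local switching capacity

@dataclass
class Fabric:
    """A Fabric is managed as one logical switch built from Units."""
    management_ip: str
    units: list = field(default_factory=list)

    def add_unit(self, unit: Unit) -> None:
        # Adding a Unit grows ports and capacity without replacing the core.
        self.units.append(unit)

    @property
    def total_ports(self) -> int:
        return sum(u.ports for u in self.units)

    @property
    def total_capacity_gbps(self) -> int:
        return sum(u.capacity_gbps for u in self.units)

core = Fabric(management_ip="10.1.0.254")
core.add_unit(Unit("unit1", ports=48, capacity_gbps=96))
core.add_unit(Unit("unit2", ports=48, capacity_gbps=96))   # pay-as-you-grow expansion
print(core.total_ports, core.total_capacity_gbps)          # 96 ports, 192 Gbit/s, one IP
```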
Put simply, an IRF device is formed by interconnecting multiple Units and combines the product characteristics that users most urgently need: ease of management, scalability, and high reliability. It is a network device unlike any other currently in the industry.
IRF technology consists of three parts:
Distributed Device Management (DDM): the control system of IRF, responsible for distributing management and control information throughout the IRF distributed switching architecture.
Distributed Resilient Routing (DRR): enables the interconnected switches in an IRF distributed switching architecture to work like a single routing entity and to distribute the routing load intelligently among all switches, maximizing the routing performance of the network (see the sketch after this list).
Distributed Link Aggregation (DLA): enables fully meshed interconnection between core network devices and edge network devices.
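To make the distributed-routing idea concrete, here is a conceptual Python sketch (an assumed model for illustration only, not 3Com's actual DRR implementation) in which every Unit holds a synchronized copy of the forwarding table and forwards traffic arriving on its own ports locally, so the routing load is spread across the Fabric rather than concentrated on one master:

```python
# Conceptual sketch of distributed routing across Fabric Units.
# The table contents and traffic are illustrative assumptions.

# Every Unit keeps the same synchronized forwarding table.
fib = {"10.2.0.0/16": "uplink-A", "10.3.0.0/16": "uplink-B"}

class RoutingUnit:
    def __init__(self, name, fib):
        self.name = name
        self.fib = dict(fib)      # local copy, kept in sync by the Fabric
        self.forwarded = 0

    def forward(self, dest_prefix):
        # Lookup and forwarding happen on the ingress Unit itself,
        # so the routing load spreads across all Units.
        next_hop = self.fib[dest_prefix]
        self.forwarded += 1
        return next_hop

units = [RoutingUnit(f"unit{i}", fib) for i in range(1, 4)]

# Packets arrive on whichever Unit owns the ingress port.
for i, prefix in enumerate(["10.2.0.0/16", "10.3.0.0/16", "10.2.0.0/16"]):
    units[i % len(units)].forward(prefix)

print({u.name: u.forwarded for u in units})   # work is shared, not centralized
```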
IRF offers high availability, high performance, easy management, and better use of the IT budget. In addition, switches that support IRF can interoperate with existing switches that do not. Although non-IRF switches cannot become part of the IRF distributed switching core, they can still connect to it and be managed alongside it through link aggregation, the Spanning Tree Protocol, or link redundancy technologies, and their existing redundancy configurations remain effective.
IRF can build a network core with high availability and scalability whose performance, configuration capacity, and scale all grow in step with the network. This spares the centralized network core from a large one-time investment and from physical limits. IRF therefore helps enterprises reduce total cost of ownership through a new strategy of purchasing on demand and expanding progressively.
IRF fully embodies the organic combination of distribution and unity. Fabric members independently handle Layer 2 and Layer 3 data forwarding, Layer 2 protocols, and routing state, which benefits both reliability and the integrity of the whole. To the outside world, all members act as one device, whether for routing protocols, Layer 3 packet forwarding, or management: they share a single IP address, are configured centrally, and produce a single consolidated log output.
Networking:
[Figure 1: IRF network-wide solution]
This is an IRF network-wide solution in which all switches support IRF. The two core switches, the server access switches, the aggregation-layer switches, and the access-layer switches are all built as IRF Fabrics, and the different layers are connected through dual-homed links.
From the figure above, the IRF topology does not look very different from traditional networking; at most there are a few more connections between layers. Yet these seemingly complex connections raise the reliability and overall forwarding performance of the network to a new level, while management becomes even simpler than in a traditional network.
In the IRF networking model, network devices at different layers are connected through dual-homed links. Although there are many physical connections, they are grouped into LACP aggregation groups by IRF's DLA technology, so the IRF architecture improves the overall forwarding capacity of the network while guaranteeing its reliability. In addition, as the business grows, new IRF switches can be added to each IRF Fabric, allowing both the core and the edge of the network to scale up performance on demand. This is also the first networking approach in the industry that improves network reliability, scalability, and performance at the same time with a single technology.
DLA (Distributed Link Aggregation) not only eliminates single points of failure on the links, it also allows the members of one LACP aggregation group to terminate on different physical Units in the Fabric. This avoids the risk of losing every link when a single device fails and gives full play to LACP's load-balancing advantages.
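A small sketch makes the point. In the following Python snippet (with illustrative port names, and a simple CRC-based hash standing in for whatever hash a real LACP implementation uses), the members of one aggregation group sit on two different Units; when one Unit fails, only its member links disappear and traffic rehashes onto the survivors:

```python
# Minimal sketch of the DLA idea: one aggregation group whose member
# links terminate on different Fabric Units. Names and the hash scheme
# are illustrative assumptions, not the actual LACP/DLA algorithm.
import zlib

# (unit, port) pairs belonging to one logical aggregation group.
lag_members = [("unit1", "ge1/0/25"), ("unit1", "ge1/0/26"),
               ("unit2", "ge2/0/25"), ("unit2", "ge2/0/26")]

def pick_member(flow_id: str, members):
    """Hash a flow onto one member link, as a typical LAG would."""
    return members[zlib.crc32(flow_id.encode()) % len(members)]

flows = [f"10.1.{i}.7->10.2.0.9" for i in range(8)]
print([pick_member(f, lag_members) for f in flows])

# If unit1 fails, only its member links disappear; traffic rehashes onto
# the surviving links on unit2 and the aggregation group stays up.
survivors = [m for m in lag_members if m[0] != "unit1"]
print([pick_member(f, survivors) for f in flows])
```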
A good network also needs good management, and in terms of manageability the IRF switch has inherent advantages. Its DDM (Distributed Device Management) feature combines distribution with centralization: the Fabric formed by independent IRF switches is managed centrally as a single device. Whether management is done through network management software, the Web, Telnet, or the console port, only one device is visible; you configure the Fabric as a whole rather than each device separately, as shown in Figure 2. The Fabric also provides centralized, unified log output. In the logical and network management view, each network layer therefore contains only one switch, as shown in Figure 3, which greatly simplifies the management picture. Each Fabric needs only one management IP address; in fact, even when you attach to the console port of a different Unit in the Fabric, you see the management interface of the same device. This not only greatly simplifies the network topology, but also greatly reduces the number of devices to be configured, which benefits both the initial installation of the network and its later maintenance.
[Figure 2: Distributed architecture and centralized management]
[Figure 3: IRF network-wide solution management view]
Compared with traditional networking, IRF also has advantages in how the network is purchased. Its networking principle is "expand gradually, purchase on demand." Because adding Units to an IRF Fabric increases the Fabric's overall forwarding capacity, users do not have to buy expensive core switches that far exceed their current needs just to allow for future growth; they can simply choose a core switch that supports IRF. When business growth requires expansion, new IRF switches can be added dynamically to the Fabric to raise overall performance. IRF allows Units to be added to or removed from a Fabric dynamically, much like hot-swapping line cards in a rack-mounted switch, achieving seamless network upgrades. This pay-for-what-you-need-now approach saves users a great deal of equipment investment.
Benefits of IRF distributed networking
Distributed features
When distributed devices are combined into a Fabric through IRF technology, they take on a completely new device form, and IRF switches can be seamlessly added to the Fabric at any time as needed. This provides a new networking model with both high reliability and high scalability.