Resolving network bottlenecks: faster Ethernet becomes the key
As IT departments add I/O-intensive and communication-intensive applications to servers, data center networks are facing a significant increase in traffic.
Systems built around 1 GbE are no longer adequate; in many cases the replacement technology, 10 GbE, has itself become a bottleneck.
Fortunately, there is more than one way to clear up congested network traffic, ranging from low-cost, fast-acting countermeasures to strategic capital investment and restructuring. New technology will increase raw network capacity, and multi-lane Ethernet can overcome performance bottlenecks on the backbone network. In some cases, simple changes in operations can also ease congestion.
Put storage and servers together
Changing data flows is a low-cost and quick way to overcome network bottlenecks. One example is shortening the path data takes from online storage to servers.
Google places online storage nodes and multiple servers in the same rack, then schedules applications to use data from the nearby storage nodes. With this approach, you can add low-cost ports to the switches in the rack, or even install two switches so the nodes can use dual Ethernet ports. A storage node can easily carry four or more ports, eliminating the network bottleneck and allowing fast data access. Almost all data flows within the rack pass through the top-of-rack switch (with low latency), which greatly reduces traffic on the backbone network.
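To make the effect of rack-local placement concrete, here is a minimal sketch of the traffic arithmetic. All the figures (10 TB/day of reads, 90% rack locality) are illustrative assumptions, not numbers from the article:

```python
# Illustrative sketch: estimate how much backbone traffic rack-local
# data placement removes. All numbers below are assumptions.

def backbone_traffic_gb(total_gb, rack_local_fraction):
    """Traffic that still crosses the backbone when a given fraction
    of reads is served by storage nodes in the same rack."""
    return total_gb * (1.0 - rack_local_fraction)

# Assume a cluster reads 10 TB/day; placing data near its consumers
# keeps 90% of reads inside the rack.
before = backbone_traffic_gb(10_000, 0.0)   # no locality
after = backbone_traffic_gb(10_000, 0.9)    # with locality
print(f"backbone traffic: {before:.0f} GB -> {after:.0f} GB")
```

The point is simply that every read served by the top-of-rack switch is a read the backbone never carries.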
Databases
Databases are a different story. Today, the most efficient systems use large amounts of dynamic random access memory (DRAM) in dual in-line memory modules to build in-memory databases. Ideally, the IT department buys a batch of purpose-built new servers, some of which support up to 6 TB of DRAM, but older servers can also be used.
A complementary mechanism for the in-memory architecture is to add a high-speed SSD storage system to the server, used either as a cache behind DRAM or as a networked storage resource. Both methods reduce network load, but a 10 GbE network may be too slow to keep up with servers bought within the past year, even with two ports on each server.
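The "too slow to keep up" claim is easy to see with back-of-the-envelope arithmetic. This sketch (my own, assuming ideal line rate with no protocol overhead, which real networks never achieve) estimates how long it takes to fill a 6 TB in-memory database at different link speeds:

```python
# Rough sketch: time to load a 6 TB in-memory database over Ethernet.
# Assumes perfect line-rate transfer and decimal units; real-world
# throughput will be lower.

def load_time_seconds(data_tb, link_gbps, ports=1):
    bits = data_tb * 8e12              # TB -> bits
    return bits / (link_gbps * 1e9 * ports)

for gbps, ports in [(10, 1), (10, 2), (100, 1)]:
    t = load_time_seconds(6, gbps, ports)
    print(f"{gbps} GbE x{ports}: {t / 60:.0f} minutes")
```

Even under these best-case assumptions, a single 10 GbE port needs over an hour to fill 6 TB of DRAM; 100 GbE brings that under ten minutes.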
Virtualization
Virtualization is common in x86 server clusters and brings network bottlenecks of its own. As is well known, boot storms frequently saturate networks. Even in steady-state operation, creating instances adds to the burden, because several gigabytes of data must be transferred from the online storage system to the server.
In this case, traditional hypervisor-based virtualization can be replaced with containers. This means giving up the flexibility to create instances for any operating system, but that is usually not a problem.
The container approach reduces network traffic, but requires every instance on the server to use the same operating system supported by the container host. Sharing a single operating system and application stack saves DRAM, roughly doubling the number of instances per server and making startup fast. However, if the applications in the instances are network-intensive or I/O-intensive, the same pressure can recur.
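A quick sketch shows why boot storms hurt and why containers help. The instance count, image size, and link speed below are illustrative assumptions:

```python
# Why boot storms saturate a link: many VM instances pulling
# multi-gigabyte images from online storage at once. Figures are
# illustrative assumptions, not measurements.

def boot_storm_seconds(instances, image_gb, link_gbps):
    """Minimum time for all instances to fetch their images over one
    shared link, ignoring protocol overhead and storage limits."""
    total_bits = instances * image_gb * 8e9
    return total_bits / (link_gbps * 1e9)

# 200 VMs, 4 GB image each, sharing one 10 GbE uplink:
print(f"{boot_storm_seconds(200, 4, 10):.0f} s of saturated uplink")
# Containers sharing a single local OS image avoid most of this transfer.
```

With containers, the operating system image lives once on the host, so the per-instance transfer largely disappears.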
Future technical countermeasures
It is common to connect switches using 40 GbE (four-lane) links, and we expect 100 GbE to replace 10 GbE. This trend is accelerating because enterprises can deploy four-lane 25 GbE links and relatively low-cost 100 GbE links (for storage devices and switch connections).
With 25 GbE, the data center can reuse existing cables within the rack and existing cables between switches. Unfortunately, the adapters cannot be converted in place; a new PCIe card or a new node is required. Even so, replacing top-of-rack switches early to build a 25/100 GbE environment will greatly improve overall cluster performance.
This new technology is entering the market rapidly, reflecting the needs of cloud service providers. Products delivering 25 GbE links reached general availability in under 12 months, and had to pass IEEE standardization before they could be sold. NICs and switches suitable for production environments became available in the second half of 2015.
In addition, a 50 GbE dual-lane solution is available. These faster speeds are expected to suit the larger servers that run big data analytics in memory; each such server may need at least two links. Since the current trend is for these servers and high-performance computing systems to carry large numbers of CPU or GPU cores, data starvation is expected to be a problem, as is loading several terabytes of data into memory.
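The lane structure behind these speeds is simple multiplication, which this small sketch makes explicit (a simplification of the lane arithmetic the article describes):

```python
# Multi-lane Ethernet: aggregate link speed = lanes x per-lane rate.
# A simplified view of the lane structures mentioned above.

def link_speed_gbps(lanes, lane_gbps):
    return lanes * lane_gbps

print(link_speed_gbps(4, 10))   # 40 GbE  = four 10 Gb/s lanes
print(link_speed_gbps(4, 25))   # 100 GbE = four 25 Gb/s lanes
print(link_speed_gbps(2, 25))   # 50 GbE  = two 25 Gb/s lanes
```

This is why the move to a 25 Gb/s lane rate matters: the same four-lane cabling that carried 40 GbE can carry 100 GbE.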
Software-based solutions can also overcome bottlenecks. A software-defined network can distribute workload on the backbone across many servers.
As storage and server architecture performance rapidly improves, the network will be a frontier of innovation over the next decade, so the pace of development should be very rapid.
Original article title: Dissolve a network bottleneck with these techniques