InfiniBand Basic Knowledge

The InfiniBand architecture is a switched-fabric technology that supports multiple concurrent links, each running at 2.5 Gbps. With a single link the architecture delivers roughly 500 MB/s; with four links, 2 GB/s; and with twelve links, 6 GB/s.
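
As a rough check on these figures, the arithmetic below converts the 2.5 Gbps signalling rate of a single link into the aggregate byte rates quoted above. The assumptions are ours, not stated in the article: 8b/10b encoding (80% efficiency) and both directions of the full-duplex link counted together.

```c
/* Back-of-the-envelope check of the InfiniBand 1.x figures quoted above.
 * Assumptions: 2.5 Gbps raw signalling per link, 8b/10b encoding (80%
 * efficiency), and both directions of the full-duplex link counted. */
#include <stdio.h>

int main(void)
{
    const double raw_gbps     = 2.5;  /* raw signalling rate per link, one direction */
    const double encoding_eff = 0.8;  /* 8b/10b: 8 data bits per 10 line bits        */
    const double directions   = 2.0;  /* full duplex: count both directions          */
    const int    widths[]     = { 1, 4, 12 };

    for (int i = 0; i < 3; i++) {
        double gbytes = raw_gbps * encoding_eff * directions * widths[i] / 8.0;
        printf("%2dX: %.1f GB/s aggregate (%.0f MB/s)\n",
               widths[i], gbytes, gbytes * 1000.0);
    }
    return 0;  /* prints 0.5, 2.0 and 6.0 GB/s, matching the text */
}
```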

1 Basic Features

InfiniBand technology is not intended for general-purpose network connectivity; it is designed primarily to solve server-side interconnect problems. Accordingly, InfiniBand applies to communication between servers (for example, replication and distributed work), between servers and storage devices (for example, SANs and direct-attached storage), and between servers and networks (such as LANs, WANs, and the Internet).

2 Overview

Unlike a conventional computer I/O subsystem, InfiniBand is a full-fledged network communication system.

The InfiniBand Trade Association calls this new bus structure an I/O network and compares it to a switch, because the path any piece of information takes to its destination is determined by control information. InfiniBand uses the 128-bit address space of Internet Protocol version 6, so it provides nearly unlimited device scalability.

When data is transmitted over InfiniBand, it is sent as packets that together form a message. A message may be handled as a remote direct memory access (RDMA) read or write operation, as a message sent and received over a channel, or as a multicast transmission. Just like the channel transfer model familiar to mainframe users, all data transfers begin and end at a channel adapter. Each processing node, such as a personal computer or data center server, has a host channel adapter, and each peripheral device has a target channel adapter. Exchanging messages through these adapters ensures that information is transmitted reliably at a defined quality of service. [1]
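
For readers who want to see what talking to a host channel adapter looks like in practice, the fragment below is a minimal sketch using the Linux libibverbs API (our example, not part of the original article): it opens the first HCA, allocates a protection domain, and registers a buffer that later RDMA or send/receive operations could use. Error handling is abbreviated and the connection setup needed for an actual transfer is omitted.

```c
/* Minimal sketch: open a host channel adapter and register a buffer for RDMA.
 * Requires the rdma-core (libibverbs) development package; link with -libverbs. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no InfiniBand HCA found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);      /* first HCA          */
    struct ibv_pd      *pd  = ibv_alloc_pd(ctx);             /* protection domain  */

    char *buf = calloc(1, 4096);                             /* payload buffer     */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,            /* pin + register it  */
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    printf("HCA %s ready; rkey=0x%x lkey=0x%x\n",
           ibv_get_device_name(devs[0]), mr->rkey, mr->lkey);

    /* A real program would now create completion queues and a queue pair,
     * exchange addressing information with the peer, and post work requests. */
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```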

3 Role

Why do we need InfiniBand?

The input/output performance of processors built on the Intel architecture is limited by the PCI or PCI-X bus. The throughput of the bus is determined by its clock rate (for example, 33.3 MHz, 66.6 MHz, and 133.3 MHz) and its width (for example, 32-bit or 64-bit). In the most common configurations, the PCI bus is limited to a few hundred megabytes per second, while the PCI-X bus is limited to about 1 GB/s. These limits restrict how fast servers can communicate with storage devices, network nodes, and other servers. In InfiniBand's design, the fabric is integrated directly onto the system board and interacts directly with the CPU and memory subsystem. In the short term, however, InfiniBand support will be delivered through PCI and PCI-X adapters, so InfiniBand will initially still be subject to those bus constraints. By the end of 2002, InfiniBand technology is expected to be fully integrated into servers from Intel-based server vendors and from Sun (80% probability).

Who is advocating InfiniBand?

InfiniBand is promoted by the InfiniBand Trade Association, whose principal members are Compaq, Dell, Hewlett-Packard, IBM, Intel, Microsoft, and Sun. Historically, InfiniBand represents the convergence of two competing efforts: Next Generation I/O (NGIO) and Future I/O (FIO). Most members of the NGIO and FIO camps joined the InfiniBand camp.

Is InfiniBand purely hardware?

No. InfiniBand's success also requires software, and at many different layers. The devices in this architecture (servers, storage devices, communications devices, switches, and so on) need management software. Application software must also be adapted to the architecture, and the operating system must be tuned to communicate optimally with the chipset. We believe software development will become the bottleneck in the adoption of InfiniBand products. Even so, by 2005, 80% of large and medium-sized enterprises will have production InfiniBand products in their data center environments.

Does Windows 2000 support InfiniBand?

Yes and no. During InfiniBand's trial and testing phase, support is provided by device vendors' drivers rather than directly by the operating system. Microsoft has not yet added InfiniBand support to Windows 2000, but it is likely to do so in the second quarter of 2002 (60% probability).

Will InfiniBand replace Fibre Channel?

There are no such plans yet. Just as it takes time for InfiniBand technology to be fully integrated into server hardware and software, it will take time for it to be fully integrated into storage devices and SANs. In 2003, 90% of InfiniBand-attached servers will connect to external storage over the network through InfiniBand-to-Fibre Channel, InfiniBand-to-Gigabit Ethernet, or InfiniBand-to-SCSI bridges (90% probability).

Will InfiniBand replace Gigabit (or faster) Ethernet?

The answer is no. Ethernet is a technology for higher-level network communication, such as TCP/IP, while InfiniBand is a technology for low-level input/output communication. Even if Ethernet matches or exceeds InfiniBand's speed, its higher-level orientation makes it unsuitable as a server-side input/output solution. In summary: the InfiniBand architecture is charged with improving server-side input/output performance, but InfiniBand is more than chips and hardware. To play its proper role, the hardware and software must be fully integrated at the operating system, management, and application layers. Depending on how aggressively they adopt new technology, "type A" companies will consider small-scale InfiniBand deployments in the second quarter of 2002, while less aggressive companies may wait until the first quarter of 2003 or later.

If you have not been paying attention to InfiniBand, be prepared for a flood of information, and even products. After years of gestation, standards work, and laboratory development, InfiniBand is about to arrive. Whether you end up adopting InfiniBand, rejecting it, or waiting and watching, you need to understand this new data center interconnect technology.

InfiniBand products are gradually entering the market, and large numbers of products are expected to go on sale in 2003. The InfiniBand Trade Association (IBTA), created by Compaq, Dell, Hewlett-Packard, IBM, Intel, Microsoft, and Sun in 1999, now has more than 180 member companies, and these industry giants have also formed a steering committee. Since January 2000, a total of $300 million in venture capital has flowed into the field, so InfiniBand is clearly a major industry undertaking.

Many large companies in the storage networking community believe InfiniBand will first be used as a replacement for the PCI bus inside the server itself, which explains why they are not enthusiastic about InfiniBand as an interconnect. Nothing is inevitable, however. With a series of efforts already under way, InfiniBand could easily move into the storage network. This could happen if InfiniBand is indeed adopted as the PCI replacement throughout the data center, if InfiniBand proves equally well suited to network transport, or if it is developed further.

That is reason enough to pay attention to the technology. It may require an investment of time, money, and re-planning, but it could improve your company's interconnect architecture.

How InfiniBand works

InfiniBand is a unified interconnect fabric that can handle storage I/O, network I/O, and interprocess communication (IPC). It can interconnect disk arrays, SANs, LANs, servers, and server clusters, or connect to external networks (such as WANs, VPNs, and the Internet). InfiniBand is designed primarily for enterprise data centers, large and small, and its main goals are high reliability, availability, scalability, and performance. InfiniBand provides high-bandwidth, low-latency transmission over relatively short distances, and it supports redundant I/O channels in single or multiple fabrics, so the data center can keep running through local failures.

If you dig deeper, you will find that InfiniBand differs from existing I/O technologies in many important ways. Unlike PCI, PCI-X, IDE/ATA, and SCSI, it has none of their electrical restrictions, arbitration conflicts, or memory coherency problems. Instead, InfiniBand uses a point-to-point, channel-based message-forwarding model over a switched fabric, and the network can provide multiple possible paths between two nodes.

In these respects InfiniBand resembles Ethernet, which forms the basis of LANs, WANs, and the Internet. InfiniBand and Ethernet are both topology-independent: they rely on switches and routers to forward packets between source and destination rather than on a specific bus or ring structure. Like Ethernet, InfiniBand can reroute packets around a failed network component, and the packet sizes are similar: InfiniBand packets range from 256 bytes to 4 KB, and a single message (a sequence of packets carrying one I/O operation) can be as large as 2 GB.
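
To put those two numbers together, a maximal message is split across a very large number of packets, as the small calculation below shows (our arithmetic, using the MTU values quoted above).

```c
/* How many packets make up one maximal InfiniBand message?  Simple arithmetic
 * based on the figures above: MTU between 256 bytes and 4 KB, message up to 2 GB. */
#include <stdio.h>

int main(void)
{
    const unsigned long long message = 2ULL << 30;       /* 2 GB maximum message */
    const unsigned long mtus[] = { 256, 2048, 4096 };    /* example MTU sizes    */

    for (int i = 0; i < 3; i++)
        printf("MTU %5lu B -> %llu packets per 2 GB message\n",
               mtus[i], message / mtus[i]);
    return 0;  /* 8388608, 1048576 and 524288 packets respectively */
}
```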

Ethernet spans the globe; InfiniBand does not. It is mainly used within a data center of a few rooms, across a campus, or within part of a city. The maximum distance depends largely on the cable type (copper or fiber), the quality of the connection, the data rate, and the transceivers. With fiber, single-mode transceivers, and the base data rate, InfiniBand's maximum reach is approximately 10 km.

As with Ethernet switches and routers, InfiniBand can in theory span greater distances, although in practice the distances are more limited. To guarantee reliable packet delivery, InfiniBand relies on features such as response timeouts and flow control to prevent packet loss due to congestion. Stretching InfiniBand's distance weakens these features, because the delays exceed a reasonable range.

To reach beyond the data center, other I/O technologies must handle the long-haul portion. InfiniBand vendors address this with devices that bridge to Ethernet and Fibre Channel networks (Fibre Channel's maximum distance is also approximately 10 km, so bridging devices let InfiniBand work with existing distributed data centers built on Fibre Channel-connected campus and metro networks).

Higher speed

The base bandwidth of InfiniBand is 2.5 Gbps; this is InfiniBand 1.x. InfiniBand is full duplex, so the theoretical maximum in each direction is 2.5 Gbps, for a total of 5 Gbps. By contrast, PCI is half duplex: the theoretical maximum in one direction is about 1 Gbps for a 32-bit, 33 MHz PCI bus, and about 8.5 Gbps for a 64-bit, 133 MHz PCI-X bus, still half duplex. Of course, the actual throughput of any bus never reaches its theoretical maximum.

If you want more bandwidth than InfiniBand 1.x provides, you simply add more links. The InfiniBand 1.0 specification, completed in October 2000, supports multiple connections within a single channel, with data rates up to 4X (10 Gbps) and 12X (30 Gbps), in both directions.

InfiniBand runs at very high speed over serial links, so its cables and connectors are small and inexpensive compared with parallel I/O interfaces such as PCI, IDE/ATA, SCSI, and IEEE-1284. A parallel link has an inherent advantage in that its multiple wires are like multiple lanes on a highway, but modern I/O transceiver chips make serial links more data-efficient and cheaper. That is why the latest technologies, including InfiniBand, IEEE-1394, Serial ATA, Serial Attached SCSI, and USB, use serial rather than parallel I/O.

InfiniBand is highly scalable: it can support tens of thousands of nodes in a single subnet, thousands of subnets per network, and multiple fabrics per installed system. InfiniBand switches operate within a subnet, and InfiniBand routers connect multiple subnets together. Compared with Ethernet, InfiniBand is more heavily managed: each subnet has a subnet manager, which plays the decisive role in routing packets, mapping the network topology, providing multiple links within the network, and monitoring performance. The subnet manager can also guarantee bandwidth on particular channels and provide different service levels for traffic of different priorities. A subnet manager is not necessarily a separate device; it can be intelligence built into a switch.
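
The toy sketch below (ours, not real subnet-manager code such as OpenSM) illustrates the most basic of those jobs: sweeping the subnet and handing each discovered port a local identifier (LID) that switches can later use for forwarding. The node names are hypothetical.

```c
/* Toy illustration of a subnet manager's sweep: discover ports, hand out LIDs,
 * and note them for the forwarding tables.  A real subnet manager (e.g. OpenSM)
 * does this with subnet-management packets carried on virtual lane 15. */
#include <stdio.h>

#define MAX_PORTS 8

struct port { const char *node; int lid; };

int main(void)
{
    /* Hypothetical nodes found while sweeping the subnet. */
    struct port subnet[MAX_PORTS] = {
        { "server-hca-1", 0 }, { "server-hca-2", 0 },
        { "storage-tca-1", 0 }, { "switch-0", 0 },
    };
    int next_lid = 1;                       /* LID 0 is reserved */

    for (int i = 0; i < MAX_PORTS && subnet[i].node; i++) {
        subnet[i].lid = next_lid++;         /* assign a unique local identifier */
        printf("assigned LID %2d to %s\n", subnet[i].lid, subnet[i].node);
    }
    /* Switches would then be programmed with LID -> output-port forwarding entries. */
    return 0;
}
```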

Virtual Expressway

To guarantee bandwidth and different service levels, the subnet manager uses virtual lanes, which are similar to the lanes of a highway. The lanes are virtual rather than physical because they are not made of separate cables: the same pair of wires can carry fragments of different packets, interleaved according to their priority.

Standards in development:

Product | Physical I/O | Main application | Maximum bandwidth | Maximum distance
InfiniBand 1X | Serial | Storage, IPC, network | 2.5 Gbps | 10 km
InfiniBand 4X | Serial (multiple links) | Storage, IPC, network | 10 Gbps | 10 km
InfiniBand 12X | Serial (multiple links) | Storage, IPC, network | 30 Gbps | 10 km
Fibre Channel | Serial | Storage, IPC, network | 2 Gbps | 10 km
Ultra2 SCSI | 16-bit parallel | Storage | 0.6 Gbps | 12 m
Ultra3 SCSI | 16-bit parallel | Storage | 1.2 Gbps | 12 m
IDE/Ultra ATA 100 | 32-bit parallel | Storage | 0.8 Gbps | 1 m
IEEE-1394a (FireWire) | Serial | Storage | 0.4 Gbps | 4.5 m
Serial ATA 1.0 | Serial | Storage | 1.5 Gbps | 1 m
Serial Attached SCSI | Serial | Storage | undefined | undefined
PCI 2.2 (33/66 MHz) | 32/64-bit parallel | Backplane | — | Motherboard
PCI-X 1.0 (133 MHz) | 64-bit parallel | Backplane | 8.5 Gbps | Motherboard
PCI-X 2.0 (DDR-QDR) | 64-bit parallel | Backplane | — | Motherboard

InfiniBand 1.0 defines 16 virtual lanes, numbered 0 through 15. Lane 15 is reserved for management, and the other lanes carry data. Dedicating a lane to management prevents traffic congestion from interfering with normal administration of the network, for example when the network is about to change its topology. InfiniBand devices are hot-swappable, and when a device is unplugged the network must quickly re-map the topology. The subnet manager uses lane 15 to query switches, routers, and endpoints about configuration changes.
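
As a rough illustration of how these lanes share one physical link, the toy scheduler below drains the management lane first and then serves the data lanes by priority. This is our simplification; the real specification defines a weighted virtual-lane arbitration scheme.

```c
/* Toy scheduler: 16 virtual lanes share one physical link.  Lane 15 (management)
 * is always drained first; the data lanes are then served here simply from VL14
 * down to VL0.  A simplified illustration, not the spec's weighted arbitration. */
#include <stdio.h>

#define NUM_VL 16

int main(void)
{
    int queued[NUM_VL] = { 0 };       /* packets waiting per virtual lane   */
    queued[15] = 1;                   /* one pending subnet-management packet */
    queued[7]  = 2;                   /* higher-priority data                */
    queued[0]  = 3;                   /* low-priority bulk data              */

    /* Transmit: management lane first, then data lanes in descending order. */
    for (int vl = NUM_VL - 1; vl >= 0; vl--)
        while (queued[vl]-- > 0)
            printf("send packet from VL%d\n", vl);
    return 0;
}
```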

Reserving a management lane alongside the data lanes is in-band management; InfiniBand also offers an out-of-band option. In InfiniBand backplane configurations, the management signals use a dedicated channel separate from the data channels. Backplane configurations are used mostly in servers and storage subsystems, in the role now played by PCI and PCI-X backplanes.

Beyond direct transfers over virtual lanes, the subnet manager can also adjust a point-to-point channel between two nodes to match their data rates. For example, if a server has a 4X connection to the network but the destination storage subsystem has only a 1X connection, the switch can automatically establish a compatible 1X channel without dropping packets or blocking higher-rate traffic.

Implementing InfiniBand

InfiniBand does not have to replace existing I/O technologies, but there will be friction, because the other I/O standards have many supporters and many companies have invested heavily in them. In the computer industry, every new technology tends to encroach on ground already claimed by established standards. At least in theory, InfiniBand can coexist in the data center with PCI, PCI-X, SCSI, Fibre Channel, IDE/ATA, Serial ATA, IEEE-1394, and other I/O standards. By contrast, 3GIO and HyperTransport are board-level interconnects, while RapidIO and CompactPCI are used mainly in embedded systems.

To work with other I/O technologies, InfiniBand needs bridge adapters that match the physical interfaces and convert the protocols. For example, Adaptec is testing a disk interface that connects InfiniBand to Serial ATA and Serial Attached SCSI. However, do not assume that the bridge you need already exists, that it has been proven to work in practice, or that its price is acceptable.

Another consideration is performance. Connecting two different I/O standards usually adds latency to the data path. In the worst case, introducing InfiniBand into a network that already contains many different technologies can degrade overall performance if it is poorly managed. InfiniBand's supporters argue that the ideal is a complete InfiniBand architecture: any component can attach directly to the InfiniBand network, using an optimized file protocol, preferably the Direct Access File System (DAFS).

DAFS is transport-independent and is a shared file-access protocol built on NFS. It is optimized for I/O-intensive, CPU-bound, file-oriented workloads in clustered server environments of 1 to 100 machines. Typical applications include databases, web serving, e-mail, and geographic information systems (GIS), as well as storage applications.

Other InfiniBand-related protocols of interest to IT administrators include the SCSI RDMA Protocol (SRP), IP over InfiniBand (IPoIB), the Sockets Direct Protocol (SDP), and the Remote Network Driver Interface Specification (RNDIS).

SRP has progressed well at several companies. Adaptec, for example, has developed early versions and runs the protocol on Windows 2000, and OEM vendors and partners are testing beta systems. Adaptec believes SRP will be very attractive for high-performance SANs, but compatibility across multi-vendor products must still be addressed. The final version of SRP may depend on operating system driver and server support and is expected to be completed in the second half of 2002 or the first half of 2003.

IPoIB, which maps the IP protocol onto InfiniBand, is being defined by an IETF working group. IPoIB covers address resolution for IPv4/IPv6, encapsulation of IPv4/IPv6 datagrams, network initialization, multicast, broadcast, and a management information base. It is expected to be completed in the second half of 2002 or the first half of 2003.
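
Because IPoIB simply presents the InfiniBand port as another IP network interface (conventionally named ib0 on Linux), ordinary socket code needs no changes. The sketch below sends a UDP datagram pinned to such an interface with the standard SO_BINDTODEVICE option; the interface name, peer address, and port are hypothetical.

```c
/* Sketch: ordinary UDP traffic over an IPoIB interface.  Assumes a Linux host
 * where the IPoIB driver exposes the HCA port as "ib0" with an IP address
 * already configured; nothing InfiniBand-specific appears in the socket code. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    /* Pin the socket to the IPoIB interface (Linux-specific; may need privileges). */
    const char ifname[] = "ib0";
    setsockopt(s, SOL_SOCKET, SO_BINDTODEVICE, ifname, sizeof(ifname));

    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(9999);                       /* hypothetical peer port   */
    inet_pton(AF_INET, "192.168.10.2", &peer.sin_addr);  /* hypothetical peer on ib0 */

    const char msg[] = "hello over IPoIB";
    sendto(s, msg, sizeof(msg), 0, (struct sockaddr *)&peer, sizeof(peer));
    close(s);
    return 0;
}
```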

SDP attempts to address several drawbacks of other protocols, notably their relatively heavy use of CPU and memory bandwidth. SDP is based on Microsoft's Winsock Direct protocol; it is similar to TCP/IP but is optimized for InfiniBand to reduce overhead. A working group began defining SDP in the second half of 2000 and completed the 1.0 specification in February 2002.

RNDIS is a Microsoft-developed protocol for network I/O over channel-based plug-and-play buses such as USB and IEEE-1394. The InfiniBand RNDIS 1.0 specification is nearing completion.

Don't expect overnight success

Even with all the vendor hype and the upbeat forecasts of market analysts, do not expect InfiniBand to succeed, or fail, overnight. On one hand, InfiniBand demands continued investment in an unproven technology, especially while the economy is still recovering from recession. Although InfiniBand products can be added without making existing equipment obsolete, getting the maximum benefit requires broad support for the technology, because InfiniBand depends on large switched fabrics as well as local interfaces and protocols.

Some IT administrators may object to a single network fabric carrying storage, interprocess communication, and network traffic at the same time, although that is exactly what InfiniBand is for. Running the different kinds of traffic on separate buses and networks provides a degree of redundancy, and InfiniBand can also provide redundancy through multiple point-to-point switched fabrics. But these administrators believe separate networks keep storage I/O from competing for bandwidth with server and network traffic.

On the other hand, InfiniBand's unified fabric simplifies the administrator's job. For one thing, you no longer have to stock different kinds of spares: the same cables that connect servers also work for storage systems, and disk subsystems can be swapped between different subsystems. Bandwidth contention is unlikely to be a problem, because InfiniBand's scalable fabric provides ample bandwidth that is easy to grow. With other I/O technologies, once the bus width and clock frequency are defined the theoretical maximum bandwidth is fixed, and you cannot simply plug in more cables to add capacity; InfiniBand lets you do exactly that.

Likewise, a unified, reconfigurable fabric makes it easy to redistribute bandwidth among storage, network communication, and IPC without shutting down critical systems to swap hardware.

It is fair to say that InfiniBand's technical foundation is solid. Of course, new technologies sometimes fail for reasons other than technical ones; immature markets and higher-than-expected costs have turned many good ideas into flashes in the pan.

InfiniBand: current status and future

At the SC06 supercomputing conference held in Tampa, Florida, on November 13, 2006, InfiniBand, the open standard protocol for server interconnect, surpassed the proprietary Myrinet protocol for the first time on the world's Top 500 supercomputer list, and among the Top 500 machines using InfiniBand, Voltaire held a commanding two-thirds share. In addition, Gigabit Ethernet, whose share of the Top 500 had been rising steadily through June of that year, fell 18% for the first time. Over the past three-plus years, InfiniBand has clearly become the preferred protocol for server interconnects in high-performance computing (HPC). Below I introduce the current state and development trends of InfiniBand.

Features of InfiniBand:

The main features of the InfiniBand protocol are high bandwidth (current products offer 4X DDR at 20 Gbps and 12X DDR at 60 Gbps, alongside 4X SDR at 10 Gbps and 12X SDR at 30 Gbps; QDR technology, due within two years, will reach 40 Gbps for 4X and 120 Gbps for 12X), low latency (140 ns switch latency and 3 µs application-level latency, with new NIC technology a year from now expected to bring application latency down to the 1 µs level), and scalability (a completely non-blocking InfiniBand network with tens of thousands of end devices can be built with ease). In addition, the InfiniBand standard supports RDMA (Remote Direct Memory Access), which gives InfiniBand better performance, efficiency, and flexibility than Gigabit Ethernet and Fibre Channel when it is used to build server and storage networks.
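
The bandwidth figures above follow directly from the per-lane signalling rate, as the small calculation below shows (our arithmetic: 2.5 Gbps per lane for SDR, doubled for DDR and quadrupled for QDR, times the 4X or 12X lane count; these are raw rates, before encoding overhead).

```c
/* Where the quoted bandwidth figures come from: 2.5 Gbps per lane for SDR,
 * doubled for DDR and quadrupled for QDR, multiplied by the 4X or 12X lane count. */
#include <stdio.h>

int main(void)
{
    const double sdr_lane_gbps = 2.5;
    const char  *gen_name[] = { "SDR", "DDR", "QDR" };
    const int    gen_mult[] = { 1, 2, 4 };
    const int    widths[]   = { 4, 12 };

    for (int g = 0; g < 3; g++)
        for (int w = 0; w < 2; w++)
            printf("%2dX %s: %5.0f Gbps\n", widths[w], gen_name[g],
                   sdr_lane_gbps * gen_mult[g] * widths[w]);
    return 0;  /* 10, 30, 20, 60, 40 and 120 Gbps -- matching the text */
}
```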

InfiniBand and RDMA:

InfiniBand was originally developed to turn the bus inside the server into a network. Besides strong network performance, InfiniBand therefore directly inherits the high bandwidth and low latency of a bus. The familiar DMA (Direct Memory Access) technique of bus designs is carried over into InfiniBand in the form of RDMA (Remote Direct Memory Access). This makes InfiniBand naturally superior to Gigabit Ethernet and Fibre Channel for communication among CPUs, memory, and storage devices. In a server and storage network built with InfiniBand, the CPU of any server can use RDMA to move blocks of data in the memory or storage of other servers at high speed, something Fibre Channel and Gigabit Ethernet cannot do.
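
To make the RDMA idea concrete, the fragment below sketches how a one-sided RDMA write is posted with the Linux libibverbs API (our example, not from the article). It assumes the queue pair, the locally registered buffer, and the peer's remote address and rkey have already been created and exchanged out of band.

```c
/* Sketch: post a one-sided RDMA write with libibverbs.  Assumes the queue pair
 * (qp), the locally registered buffer (mr), and the peer's remote_addr/rkey have
 * already been created and exchanged out of band (e.g. over TCP or RDMA CM). */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>

int rdma_write_block(struct ibv_qp *qp, struct ibv_mr *mr,
                     uint64_t remote_addr, uint32_t rkey, size_t len)
{
    struct ibv_sge sge = {
        .addr   = (uint64_t)(uintptr_t)mr->addr,  /* local source buffer        */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = { 0 }, *bad_wr = NULL;
    wr.wr_id               = 1;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.opcode              = IBV_WR_RDMA_WRITE;   /* remote CPU is not involved */
    wr.send_flags          = IBV_SEND_SIGNALED;   /* ask for a completion entry */
    wr.wr.rdma.remote_addr = remote_addr;         /* peer's registered memory   */
    wr.wr.rdma.rkey        = rkey;

    if (ibv_post_send(qp, &wr, &bad_wr)) {
        fprintf(stderr, "ibv_post_send failed\n");
        return -1;
    }
    /* The HCA now moves the data directly into the remote server's memory;
     * completion is reported later on the send completion queue. */
    return 0;
}
```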

InfiniBand's relationship to other protocols:

As the networked form of the bus, InfiniBand has the job of bringing other protocols into the server at the InfiniBand level. To this end, Voltaire has developed IP-to-InfiniBand routers and Fibre Channel-to-InfiniBand routers, so that almost any network protocol, including Fibre Channel, IP/GbE, NAS, and iSCSI, can be brought into the server over the InfiniBand network. In addition, in the second half of 2007 Voltaire will introduce a 10 Gigabit Ethernet-to-InfiniBand router. As an aside: during its development, 10 Gigabit Ethernet considered many cable types and in the end found that only InfiniBand cables and fiber met its requirements, so the 10 Gigabit Ethernet camp adopted the InfiniBand cable directly as its physical connection layer.

InfiniBand's position in storage:

Today InfiniBand can easily bring Fibre Channel SANs, NAS, and iSCSI into the server. In fact, beyond acting as the networked bus that integrates other storage protocols into the server, InfiniBand can play an even larger role. Storage is an extension of memory, and RDMA-capable InfiniBand ought to become a mainstream storage protocol. Comparing InfiniBand with Fibre Channel, InfiniBand offers five times the performance, and an InfiniBand switch has one-tenth the latency of a Fibre Channel switch. Moreover, using a single InfiniBand fabric to build the high-speed network connecting all servers and storage eliminates the separate Fibre Channel fabric, which saves customers a great deal of money.

There has been considerable progress in using InfiniBand as a storage protocol. iSER, the RDMA extension of iSCSI, has been standardized by the IETF.

Unlike Fibre Channel, InfiniBand can directly support both SAN and NAS in the storage domain. Storage systems are no longer satisfied with the server-to-raw-storage connection architecture offered by the traditional Fibre Channel SAN, and the Fibre Channel SAN plus Gigabit Ethernet plus NFS architecture severely limits system performance. Hence parallel file systems (such as HP SFS and IBM GPFS) have emerged, built from servers attached to an InfiniBand fabric and iSER-based InfiniBand storage. In the future, the typical server and storage network will connect servers directly to InfiniBand storage, while all IP data networks will enter the InfiniBand fabric through 10 Gigabit Ethernet-to-InfiniBand routers.

Among storage vendors, Sun, SGI, LSI Logic, and FalconStor Software have launched their own InfiniBand storage products, and in China, New WO Technology has also launched an InfiniBand storage system.

In terms of price, InfiniBand today costs a fraction of what 10 Gigabit Ethernet costs, while offering five times the performance of Fibre Channel at a price in the same order of magnitude as Fibre Channel.

InfiniBand applications beyond HPC:

Over the past year InfiniBand has made great strides beyond HPC, most notably in large-scale online gaming centers and in television media editing and animation production. In the securities industry, work has begun on high-speed, low-latency trading systems built around InfiniBand, and in banking we have also seen efforts to replace Fibre Channel with InfiniBand.

InfiniBand outlook for the next two years:

Voltaire will launch its 10 Gigabit Ethernet-to-InfiniBand router in the fall of 2007, which will accelerate InfiniBand's consolidation of the data network and the storage network.

Voltaire has developed a full set of iSER initiator and target code, and many storage partners are using it to develop their own InfiniBand storage systems. We expect to see more InfiniBand storage systems on the market over the next two years.

InfiniBand has entered blade servers (IBM, HP, and others), and we will see more effort and success in this area over the next two years.

There will be further effort, and success, in making InfiniBand a standard configuration on servers.

Supercomputers with attached InfiniBand storage will enter the world's Top 500 list.

InfiniBand will enter enterprise-level applications in online gaming, the oil industry, television media, manufacturing, and other sectors.

InfiniBand: highbrow technology, uncertain future

Entering 2008, the data center application environment has changed greatly: multicore, virtualization, and blades have become the mainstream trends of the next-generation data center. Against this backdrop, some predict that InfiniBand will finally enter the golden age of its life cycle; others disagree.

Born with a golden key in its mouth

As early as 2001, Sandra Gittlen wrote her first article for Network World on what was then simply called an interconnect architecture, one of the hottest topics of that year.

The following is the opening of Sandra Gittlen's article:

"It is the cure-all for network bottlenecks: InfiniBand, the next-generation PC input/output architecture, is poised to replace PCI as the new standard for servers."

At the time, most people had good reason to believe InfiniBand was about to break into the market, take the lead, and replace the data center network in one stroke, and company after company was heavily funded. In fact, more than $200 million of venture capital was poured into InfiniBand-related companies during that period. Among the leaders were Dell, HP, Compaq, Intel, IBM, Microsoft, and Sun, all working on the development of the technology.

In the simplest terms, InfiniBand is a high-speed architecture connecting host channel adapters on the server side with target channel adapters on storage devices; because these adapters communicate directly, offload, security, and quality of service can be built in.

Survival of the fittest

By 2005, the situation had changed dramatically.

By then, many of the startups that had attracted early attention had gone bankrupt, merged, or been acquired; only a handful survived. During this period, the same Sandra Gittlen wrote another article for Network World, this time about the many high-speed interconnect options facing high-performance computing and data center users.

Thad Omura, vice president of marketing at Mellanox Technologies, one of the surviving InfiniBand companies, admits that although InfiniBand technology had matured considerably, it was hurt by the downturn: "In that period of depression, people were not inclined to invest in new interconnect technologies."

Fast forward two more years and the situation has changed again. Investment in interconnects is growing, and people are starting to use interconnect technology to cut the cost of data centers and high-performance computing. Today we have to revisit the question: has InfiniBand's time finally come?

Last month, IDC released a research report on InfiniBand. In this forecast of InfiniBand's worldwide development, IDC said: "Growing network demands are driving business services of unprecedented importance, and the existing interconnects between servers and storage resources no longer meet current bandwidth and capacity needs. As a result, some customers are looking for alternatives to their existing interconnect architectures that can fully satisfy their throughput requirements, meaning more bandwidth and lower latency."

IDC also believes that "high-performance computing, scale-out database environments, shared and virtualized input/output, and financial applications with HPC-like characteristics have driven much of the InfiniBand adoption to date and will bring more InfiniBand into the market and into applications."

In the report, IDC forecast that InfiniBand product manufacturing revenue would rise from $157.2 million to $612.2 million by 2011.

Multicore, virtualization, and blades are driving forces

This is good news for InfiniBand vendors such as Mellanox, where Omura works. Mellanox's customers include HP, IBM, and Network Appliance, all of which have used InfiniBand technology in their products, which adds to confidence in InfiniBand.

Omura says all of these trends are driving InfiniBand adoption: "We have entered the age of multicore CPUs, which need more bandwidth and lower latency; virtualization is driving the move to a unified input/output architecture; and blade servers have limited ports but connect to a common backplane. All of these trends are accelerating the development of InfiniBand."

On the InfiniBand technology roadmap, InfiniBand throughput is 20 Gbps today and will reach 40 Gbps in 2008. By comparison, 10 Gigabit Ethernet throughput will not change before 2008.

In addition, the most pressing issue for data centers right now is latency: InfiniBand latency is about 1 microsecond, while Ethernet latency is close to 10 microseconds.

No wonder Omura is so optimistic: "If InfiniBand can help us cut costs, save energy, and process faster, people will certainly choose it."

He gives a trading firm as an example: "If you are one millisecond faster on a trade, you can make an extra $100 million a year." The first to take the plunge was the high-performance computing sector, followed by financial institutions. "Today we see InfiniBand products being widely used in database-driven real-time applications, such as ticket-booking systems."

It is fair to say that the IT industry's InfiniBand revenue in recent years has not been large, but product development has never stalled, and Mellanox has done well out of it. "From 2001 to 2005 our revenue doubled every year," Omura said, adding that volume production is expected within a few more months.

Obstacles to Development

Cisco, which bought the InfiniBand manufacturer Topspin Communications in 2005, when some InfiniBand vendors were unusually optimistic about InfiniBand's future, argues that IDC's predictions about InfiniBand technology in enterprise applications are "overly optimistic."

"Current InfiniBand deployments are not enough to show that the InfiniBand era has arrived," said Bill Erdman, sales director of Cisco's server virtualization business unit.

Erdman says low latency is the biggest value of InfiniBand products, but that the other driving forces cited by IDC, such as virtualization, are somewhat problematic for InfiniBand.

"Virtualization requires special devices and a well-defined management paradigm, and input/output consolidation requires gateways between InfiniBand and Ethernet/Fibre Channel. It is naive to assume that as customers add application and web hosting tiers they will not need additional firewalls, content load balancing, and network intrusion prevention systems," he said. "InfiniBand is not integrated with these services."

Cisco has seen InfiniBand products used for databases, back-end database hosts, and messaging-bus applications. "But if other applications need richer host services, Ethernet is still the server host technology of choice," Erdman said.

Cisco and IDC are at odds, so it is hard to say whether InfiniBand products will eventually storm the enterprise market. But InfiniBand technology has limped along this far and come this distance, so I am confident it can keep going. Let's wait and see.

4 Application Cases

Application Case 1: Big data continues to explode

Big data solutions such as Oracle Exadata have shipped with InfiniBand for years. InfiniBand is an ideal switched-fabric solution for scale-out applications that demand high-speed computation and heavy internal data flows.

Users deploying open-source Hadoop should understand how much more capable their big data clusters become on an InfiniBand network: in a Hadoop cluster, InfiniBand doubles analysis throughput compared with traditional 10 Gb Ethernet. Because fewer compute nodes are needed, the resulting savings typically far exceed the cost of deploying the InfiniBand network.

As long as data keeps growing in volume and variety, high-performance data exchange will remain a major challenge for data center networks, which makes high-bandwidth, low-latency InfiniBand networks a competitive alternative.

Application Case 2: Virtual I/O in the virtual data center

To meet the availability and performance requirements of critical applications in a virtual environment, the entire I/O data path, including shared storage and the connecting network, must be virtualized; in turn, the I/O data path must support multiple protocols and dynamic configuration.

To make virtual machines running critical applications more mobile, a virtual machine must be able to migrate, along with its entire network and storage, to another location seamlessly and quickly. This means the physical architecture underneath must offer the same connectivity to every host. Seamless virtual machine movement has been a challenge, however, because hosts generally have different physical and network connections. A converged, flat network architecture such as InfiniBand provides a huge pipe that can be allocated dynamically on demand, making it ideal for dense, highly mobile virtual environments.

Application Case 3: Scale-out web applications need tight interconnection

Web-based applications need an infrastructure that not only supports virtualized cloud computing resources but also provides the mobility and agility to support dynamic reconfiguration without disturbing existing data flows or existing connection requirements. As a result, InfiniBand's flat address space is a huge practical benefit for service providers and large web-scale business operations.

For applications with heavy data flows (such as random storage reads and writes, remote direct memory access, and message queues), cutting network latency in half can greatly improve basic application performance and throughput, which in turn greatly reduces infrastructure requirements and spending.

Application Case 4: High-density device sharing and aggregation

As devices in the data center grow denser, users are using the front-end network to match back-end bandwidth and maximize the utilization of their assets. In other words, if InfiniBand can be the choice for high-performance scale-out storage systems, it should also be a choice for the storage front-end connections.

The recent trend toward extended server-side storage, which goes beyond simple caching, either shares data directly between servers or tightly consolidates external shared storage devices. In either case the practical effect, for both storage and servers, is that the same network handles front-end and back-end storage I/O, a role in which InfiniBand stands out.

Reference Links:

1. InfiniBand

2. What storage managers must know: InfiniBand networks for storage and converged data centers
