In its purest form, a switching fabric is a network topology in which nodes connect to each other through multiple, effectively meshed switches. This contrasts with traditional broadcast media such as Ethernet, where only one valid path exists between any two nodes. The IEEE, the IETF, and other standards bodies are now improving Ethernet by adding support for multiple valid paths and link-state routing protocols to replace spanning tree, which is driving fabric deployment in the data center.
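The difference between the two models can be illustrated with a small sketch: under spanning tree, one path carries all traffic, while under equal-cost multipath a hash of each flow spreads traffic across every valid path yet keeps the packets of any one flow in order. This is a hedged, illustrative Python sketch of the idea, not any vendor's implementation; all names here are made up.

```python
import hashlib

def pick_path(flow, paths):
    # Hash the flow identifier so every packet of a given flow takes the
    # same path (preserving packet order), while different flows spread
    # across all valid paths.
    digest = hashlib.sha256(repr(flow).encode()).hexdigest()
    return paths[int(digest, 16) % len(paths)]

# Spanning tree: a single valid path for everything.
stp_paths = ["spine1"]

# Link-state multipath: several equal-cost valid paths.
ecmp_paths = ["spine1", "spine2", "spine3", "spine4"]

flow = ("10.0.0.1", "10.0.1.9", "tcp", 49152, 443)
print(pick_path(flow, stp_paths))   # always spine1
print(pick_path(flow, ecmp_paths))  # one of the four spines, chosen consistently
```

The hashing step is why multipath forwarding can use all links without reordering packets within a flow.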
Before deploying a fabric, organizations need to consider these questions:
Is the enterprise's data center new, or an expansion of an existing one? What kind of network fabric does the enterprise want: Ethernet? InfiniBand? Fibre Channel? What is the purpose of the data center? How should the network fabric be designed? Is the environment homogeneous or mixed? In a mixed environment, should traffic between different fabrics be routed or switched? What factors should be weighed when converging data and storage traffic?
PayPal uses Mellanox InfiniBand interconnect technology, deploying more than 300 servers and 12 storage arrays across three InfiniBand fabrics, with storage and network traffic converged. PayPal has run this fabric since 2008 and is now migrating from its double data rate (DDR) environment to a fourteen data rate (FDR) infrastructure.
"PayPal was supposed to be considering 10G Ethernet, but we know we have more storage than that," said Ryan Quick, PayPal's chief architect. "InfiniBand provides better bandwidth and latency than Ethernet and Fibre Channel.
Quick said: "InfiniBand gives us a lot of advantages, especially for storage." "It has a 64K packet size, a higher line speed, and a number of different learning and path computing functions, including dynamic routing at the architectural level, and Multipath functions" open the box.
"It has a big problem," said Quick, referring to the InfiniBand architecture, "No one has yet used it in the corporate environment, but it gives us a lot of advantages." ”
PayPal deployed the fabric from scratch in 2008, with convergence as the goal. The company runs a hybrid InfiniBand/Ethernet environment that relies on internally developed routers to connect the two.
"It's easier to check packets on a 3-tier network," says Quick, "but no provider has a 3-tier network router to connect InfiniBand to other environments, so we have to build this router ourselves." ”
The router has two to four 10G network interface cards (NICs) on the gateway side and a pair of quad data rate (QDR, 8/32Gbps) InfiniBand NICs on the other. The hypervisor is configured to create a virtual switched network that emulates Ethernet. "End users think they're using Ethernet, but it's actually InfiniBand," he says.
Storage connects directly to this "super cube" via the SCSI RDMA Protocol (SRP) and is configured with dual rails for failover.
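Dual-rail failover of this kind is typically handled by the host's multipath layer. As a hedged illustration only (assuming a Linux host running dm-multipath over two independent SRP paths; the values are examples, not PayPal's actual configuration):

```
# /etc/multipath.conf (illustrative fragment)
defaults {
    path_grouping_policy  failover   # keep I/O on one rail, fail over on loss
    path_checker          tur        # probe path health with TEST UNIT READY
    no_path_retry         5          # queue I/O briefly while a rail recovers
}
```

With a failover grouping policy, one rail carries all I/O and the second is held in reserve, which matches the dual-rail design described above.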
The only problem with deploying InfiniBand in the enterprise is the lack of hands-on experience and established best practices.
Reference documents such as white papers, and even vendors' professional-services consultants, were not entirely convincing for Eze Software. The company deployed Juniper Networks' QFabric M in the data centers that support its SaaS products for financial traders. After extensive testing, Eze chose QFabric for its single-tier, single-hop network design.
"After you read the white paper and the professional services are in place, you are still skeptical," said Bill Taney, vice president of network Engineering and operations at Eze software, "and you will be very firm only if you see it as a seamless failover, just like we did after the actual test." ”
In March of this year, Eze deployed QFabric M nodes, interconnects, and Directors in its two main data centers; four QFabric M interconnects form the core of the 150G backbone network.
The fabric supports 400 physical servers and storage servers, which connect to multiple QFabric nodes through link aggregation groups.
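Dual-homing a server to multiple fabric nodes through a link aggregation group is commonly configured with LACP bonding on the host. A minimal configuration sketch for a Linux server (the interface names and commands are illustrative assumptions, not Eze's actual setup):

```
# Illustrative iproute2 setup, run as root on the server
ip link add bond0 type bond mode 802.3ad      # LACP link aggregation group
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set bond0 up                          # both uplinks now act as one link
```

If the two member ports land on different fabric nodes, the server keeps connectivity even when one node fails, which is the point of multi-node link aggregation.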
Eze used the professional services offered by Juniper Networks and ran extensive failover tests to verify that the QFabric single-hop topology is as effective as claimed, and the results were pleasing.
"The results are much easier than I thought," he says. "It's a fairly simple deployment, and it's a great way to failover." ”
Eze is currently waiting for Juniper Networks to add Virtual Chassis capability to its QFX top-of-rack switches, which would essentially let users build a fabric without QFabric interconnects.
"If you can get qfabric in smaller deployments without interconnection, this is definitely the solution that I will choose for our small data centers," he says. ”
Another example of fabric deployment is market research firm NPD Group. In January of this year, NPD deployed HP's Intelligent Resilient Framework (IRF) technology in a new data center, an environment with 40 switches and HP 5900 switches at the 10G core.
The company deployed IRF more for converged management than for converged storage. NPD runs data and storage on two separate Ethernet and Fibre Channel networks, and is evaluating HP's Fibre Channel over Ethernet products.
One of the big challenges for NPD now is dealing with big data.
"IRF is really easy to manage and lowers TCO," said Gabe Abrew, senior vice president of NPD Global Enterprise systems, which is very appealing for virtual management of connections rather than physical connections. After all, our biggest problem is getting maximum throughput and processing power. ”
NPD's new data center is about 80 miles from New York and connects up to 5,000 devices, including servers, storage, and network nodes. IRF virtualizes connection management, while HP's IMC management system provides single-pane-of-glass monitoring of the IRF fabric.
Abreu says NPD also has another traditional data center running OSPF as its unified "fabric," which it plans to migrate to IRF in 2015.