A brief discussion on hyper-converged architecture

Why call it a "brief discussion"? Because this is my own opinion, and the limits of my knowledge inevitably make it biased. The real substance of the data center comes down to three parts: compute, storage, and networking. The evolution of these three components has run from dedicated hardware, to virtualization, to convergence.
In the era of dedicated hardware, compute meant servers, storage was a separate hardware device, and a network, whether TCP/IP or FC, provided the interconnect between them. As server hardware and storage hardware kept upgrading, that interconnect network became the bottleneck. So why not remove the interconnect altogether, fuse the two strongest parts, server and storage hardware, directly together, and flatten the architecture? In the hyper-converged era the data center is flattened in exactly this way, using a software-defined approach built on standard x86 servers. So-called hyper-convergence, in its form, redefines the data center rather than storage. But today's so-called hyper-convergence, although it advertises the integration of compute, storage, and networking, in my view only fuses the compute and storage parts.
The most famous player in this field is of course Nutanix, which launched the NX family of integrated compute-and-storage appliances. The hardware platform is Supermicro's 2U density-optimized server, supporting one to four two-socket Intel Xeon nodes. On the software side, Nutanix adopts Google's well-known MapReduce distributed computing framework, together with NDFS (Nutanix Distributed File System), a GFS-like file system, to build a distributed cluster of compute and storage.
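As an aside, for readers who have not used MapReduce, the sketch below shows the programming model being referred to (map, shuffle, reduce) in a few lines of single-process Python, using word count as the canonical example. It only illustrates the model itself; it says nothing about Nutanix's actual engine or NDFS.

```python
# Minimal single-process sketch of the MapReduce programming model
# (map -> shuffle -> reduce), using word count as the canonical example.
# Illustrative only; not Nutanix's engine or NDFS.

from collections import defaultdict

def map_phase(record):
    # Emit (key, value) pairs from one input record.
    for word in record.split():
        yield word, 1

def shuffle(pairs):
    # Group intermediate values by key, as the framework would do
    # across nodes before the reduce phase.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Combine all values for one key into the final result.
    return key, sum(values)

records = ["storage compute network", "compute storage storage"]
intermediate = [pair for r in records for pair in map_phase(r)]
results = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(results)   # {'storage': 3, 'compute': 2, 'network': 1}
```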
But in the network part, it still needs a separate gigabit switch configured to interconnect the nodes, including the multiple nodes inside its own 2U chassis. Are you kidding me? And this dares to call itself network convergence? I do not see any great convergence improvement in the hardware of so-called hyper-convergence. All I see is modular customization: placing motherboards carrying different types of compute resources into a standard x86 server, plus the trays that a standard x86 server chassis already comes with. Flat it is, yes, and the distributed scale-out is genuinely impressive. But sorry, the network part has not been converged.
Seen this way, the difference between so-called hyper-convergence and blade servers is that blade servers converge compute resources with the network, while hyper-convergence converges compute resources with the storage system.
The current hyper-converged system is a concept that stacks existing technologies into a technology stack. The compute part adopts a distributed framework plus a virtualization architecture, the storage part adopts a distributed file system, and the hardware piles on IO with 10-gigabit NICs plus SSDs. This architecture today only suits distributed applications such as virtual desktops. Truly high-concurrency, high-throughput IO applications such as OLTP are not well served; I think both the hardware and the software systems still need optimization, and as the technology advances, hyper-convergence may well replace today's traditional storage. The shortcoming of the hyper-convergence concept at this stage is also obvious: only when the network is genuinely integrated into the standard server and flattened, so that compute, storage, and networking are truly converged, do we get real hyper-convergence. Among today's domestic hyper-converged systems, most vendors simply do deep optimization on top of OpenStack or VMware plus Ceph; very little of the code is their own, let alone any deeper integration and customization of the hardware.
Let us make a hypothesis here: if we need to integrate the network into the x86 motherboard, is it just a matter of putting the NIC on the board and adding a small switch for the internal interconnect, the way blade servers do? I think the best approach is to use InfiniBand for the internal interconnect. In InfiniBand's original design, it is integrated directly onto the system board and interacts directly with the CPU and the memory subsystem. InfiniBand is a unified interconnect fabric that can handle storage I/O and network I/O as well as interprocess communication (IPC). Each node would use a PCIe flash card as Flashcache.

Using InfiniBand also lets us exploit its technical features, such as RDMA (Remote Direct Memory Access). InfiniBand was developed to turn the bus inside the server into a network. Why a network? We know that the bus inside a computer is shared, and its throughput is determined by the bus clock (e.g. 33.3 MHz, 66.6 MHz, or 133.3 MHz) and the bus width (e.g. 32-bit or 64-bit); a 64-bit bus at 133.3 MHz, for example, peaks at roughly 1 GB/s. So InfiniBand, besides strong network performance, directly inherits the bus's high bandwidth and low latency. The well-known DMA (Direct Memory Access) technique from bus technology is inherited by InfiniBand in the form of RDMA (Remote Direct Memory Access). This gives InfiniBand a natural advantage over Gigabit Ethernet and Fibre Channel when interacting with the CPU, memory, and storage devices. You can imagine that with InfiniBand in the server and storage network, the CPU of any server can easily use RDMA to move data blocks in its own memory or in the memory of other servers at high speed, which is something Fibre Channel and Gigabit Ethernet cannot do.

With RDMA we can also build a unified, tiered cache architecture on top of all the servers' memory, with the PCIe flash cards used as Flashcache as the next level, a scheme better than caching on local SSDs alone. In this architecture memory is the highest-priority, first-level cache, PCIe flash is the second-level cache, and the local SSD is the third-level cache, together forming a unified, tiered cache pool. This cache pool can offload, reorder, and optimize network IO and storage IO. At the same time, converging the network in hardware also needs software to drive it: on the software side we need real SDN, implementing not just software-defined compute and software-defined storage but also software-defined networking. Only then do we get true hyper-convergence.
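To make the tiered cache pool concrete, here is a minimal Python sketch of the read path it implies: RAM as the first level, a PCIe flash card as the second, a local SSD as the third, with hot blocks promoted upward on a hit and evicted blocks demoted downward. The class names, tiny capacities, and LRU policy are my own illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch of a three-level tiered cache pool: RAM -> PCIe flash -> SSD.
# Capacities and the LRU/promote/demote policy are illustrative assumptions.

from collections import OrderedDict

class Tier:
    """One cache level with a fixed capacity and simple LRU eviction."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.entries = OrderedDict()   # key -> block payload

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)            # refresh LRU position
            return self.entries[key]
        return None

    def put(self, key, block):
        self.entries[key] = block
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            return self.entries.popitem(last=False)  # evict the coldest block
        return None

class TieredCachePool:
    """Hits are promoted to RAM; evictions cascade down to the next tier."""
    def __init__(self):
        self.tiers = [Tier("ram", 4), Tier("pcie_flash", 16), Tier("ssd", 64)]

    def read(self, key, backend_read):
        for i, tier in enumerate(self.tiers):
            block = tier.get(key)
            if block is not None:
                if i > 0:
                    del tier.entries[key]            # remove from lower tier...
                    self._insert(0, key, block)      # ...and promote to RAM
                return block
        block = backend_read(key)                    # miss: fetch from backing store
        self._insert(0, key, block)
        return block

    def _insert(self, level, key, block):
        evicted = self.tiers[level].put(key, block)
        if evicted and level + 1 < len(self.tiers):
            self._insert(level + 1, *evicted)        # demote eviction downward

# Usage: simulate block reads against a slow backing store.
pool = TieredCachePool()
for lba in [1, 2, 3, 1, 1, 42, 2]:
    pool.read(lba, backend_read=lambda k: f"block-{k}")
print([(t.name, len(t.entries)) for t in pool.tiers])
```

In a real converged node, the RAM tier would span the memory of all servers reachable over RDMA rather than a single process, which is exactly why the low latency of InfiniBand matters for this design.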
This article is from "I take fleeting chaos" blog, please be sure to keep this source http://tasnrh.blog.51cto.com/4141731/1787755