A new IT wave has arrived, and hyper-convergence, one of its hottest fronts, has become a focus of attention in the IT industry in recent years. Hyper-converged architecture is now the main direction many vendors are pushing, and market competition is increasingly fierce. Although the concept has been hyped for at least two years, there are still many vague definitions and understandings on the market, and hyper-convergence differs from the traditional IT architecture in both performance and cost-effectiveness. This article takes a detailed inventory of those differences and analyzes them in depth through vendor examples.

1. What Is the Core Feature of Hyper-Convergence?
Nutanix was the first vendor to promote the concept of hyper-convergence (HCI: Hyper-Converged Infrastructure). Nutanix's core technology is distributed storage, but it went a step further in deployment architecture by running that distributed storage on the same servers as the compute workloads. This innovation is not a great technical difficulty in itself, but it did a great deal to help distributed storage land in the market.
This, then, is the core of hyper-convergence 1.0. Yet many in the market are confused by the word "converged", and some vendors even play down the storage part, which is the most challenging and technically difficult piece, and instead play up "convergence" with some innocuous integrations. That is not real hyper-convergence.
The hyper-convergence 1.0 that really pried open the traditional IT market is characterized by two points: first, distributed storage built on an x86 server architecture; second, distributed storage and compute virtualization deployed on the same server hardware.
As the comparison below shows, roughly 70% of the important, value-delivering features come from distributed storage, and 30% are benefits of the hyper-converged architecture itself (such as simplified management and lower cost of use); yet it is that 30% that makes users more willing to switch from the traditional architecture to a distributed one.

2. Is Hyper-Convergence a Revolutionary Architecture, and Whom Does It Disrupt?
Anything worthy of being called a revolutionary architecture should have at least three features: it changes user habits substantially; it delivers great value to users; and it lets new vendors pry open the market of traditional, established vendors.
Hyper-converged products meet all three points. The first two are discussed separately below; the third can be illustrated by the cases of Nutanix and VMware (vSAN) abroad and SmartX in China, whose products have replaced a large number of storage products from established vendors such as EMC and HDS. Some will say that Nutanix has also launched its own server virtualization product as a replacement for VMware. That, however, is a later story: the core of the original Nutanix product was distributed storage in a hyper-converged architecture. A user purchasing Nutanix did not need to replace the existing server brand (such as Dell) or virtualization brand (such as VMware); only the storage (such as EMC) had to be replaced. This was an important starting point and point of entry. If Nutanix had initially tried to use the hyper-convergence idea to replace storage, virtualization, and even server hardware all at once, it would have had little chance of success.

3. Why Has the Hyper-Converged Architecture Only Appeared in Recent Years?
A revolutionary technology architecture emerges and matures through at least two core factors: strong customer demand and the maturity of the related technologies.
Customer demand for the hyper-converged architecture comes from business digitization and the Internet, which have dramatically raised the speed and volume at which IT resources must be delivered; the related technologies below have much to do with the architecture's landing in the market.
1. Distributed Storage Architecture
Distributed storage has in fact been practiced in Internet companies for many years. It builds a scalable, highly reliable storage resource pool on x86 servers and is the foundation of hyper-convergence.
2. SSD
There is no doubt that the impact of SSDs on storage architecture is enormous. A traditional mechanical hard disk delivers only about 300 4K random IOPS, while an SSD such as the Intel S3710 can exceed 70,000 IOPS, more than two orders of magnitude higher. At the same time, the dual-controller architecture becomes the bottleneck: EMC's Unity 650F, for example, can support up to 1,000 hard drives or SSDs, but with around 31 SSDs it already hits the controller bottleneck, at which point an 8:2 read/write mix tops out at about 270,000 IOPS.
Likewise, SSDs greatly reduce the rack space of transactional storage systems and bring the number of storage nodes in line with the number of compute nodes, an important precondition for the hyper-converged architecture.
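To make the bottleneck concrete, here is a minimal back-of-the-envelope sketch in Python. The per-device figures are the ones quoted above, taken as given rather than vendor-verified:

    # Back-of-the-envelope IOPS math using the figures quoted above.
    HDD_4K_RANDOM_IOPS = 300        # typical mechanical disk
    SSD_4K_RANDOM_IOPS = 70_000     # Intel S3710-class SSD
    CONTROLLER_CEILING = 270_000    # dual-controller cap cited for an 8:2 mix

    # Raw capability of 31 SSDs vs. what the dual controller can actually serve.
    raw_ssd_iops = 31 * SSD_4K_RANDOM_IOPS
    print(f"raw capability of 31 SSDs : {raw_ssd_iops:,} IOPS")   # 2,170,000
    print(f"dual-controller ceiling   : {CONTROLLER_CEILING:,} IOPS")
    print(f"fraction actually usable  : {CONTROLLER_CEILING / raw_ssd_iops:.0%}")

In other words, under these figures the controllers expose barely an eighth of what the SSDs can physically deliver, which is exactly the mismatch the distributed architecture removes.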
3. Virtualization
Another important precondition of the hyper-converged architecture is that virtualization is now widely accepted; otherwise distributed storage could hardly share a single physical node with the workloads it serves, except in a vendor's all-in-one appliance.
4. CPU
Finally, as is often said, CPUs have become powerful and cheap enough that a single server can satisfy both the compute and the storage requirements.

4. Is Hyper-Convergence Simply Distributed Storage and Compute Virtualization Installed Together?
The answer is no. Storage that serves virtualization in a hyper-converged form should also have the following characteristics (see the sketch after this list). First, its own resource consumption should be small and controllable; a reasonable solution should not exceed 10% of system resources. Second, it should support localization of VM data access, which is another great advantage of the hyper-converged architecture. Third, since its focus is storage for virtualization platforms, it should support different virtualization platforms well, such as VMware and KVM.
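As an illustration of what "data access localization" means, here is a minimal, hypothetical Python sketch; the node and replica model is invented for illustration and is not any vendor's actual placement logic. A read is served from a replica on the same host as the VM whenever one exists, and crosses the network only otherwise:

    import random

    # Hypothetical replica map: block -> hosts holding a copy (2 replicas).
    replica_map = {
        "blk-001": {"host-a", "host-b"},
        "blk-002": {"host-b", "host-c"},
    }

    def read_block(block_id, vm_host):
        """Serve a read locally when a replica lives on the VM's own host."""
        replicas = replica_map[block_id]
        if vm_host in replicas:
            return f"{block_id}: local read on {vm_host} (no network hop)"
        # Otherwise the read must cross the network to a remote replica.
        remote = random.choice(sorted(replicas))
        return f"{block_id}: remote read from {remote} over the network"

    print(read_block("blk-001", vm_host="host-a"))  # served locally
    print(read_block("blk-002", vm_host="host-a"))  # forced over the network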
So the products on the Chinese market today that truly have a hyper-converged architecture are Nutanix, VMware, and SmartX. Many domestic vendors package Ceph into a "hyper-converged" product, which does none of the three points above well. The main problems: the hash-based placement algorithm cannot precisely control where data is stored, so I/O localization cannot be implemented; the software I/O path is relatively long; and CPU resource consumption is high, which further increases latency.
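To see why purely hash-based placement conflicts with localization, here is a simplified, hypothetical sketch; it is a toy model, not Ceph's actual CRUSH algorithm. A pure hash scatters a VM's blocks across all hosts regardless of where the VM runs, while a locality-aware policy pins the first replica to the VM's own host:

    import hashlib

    HOSTS = ["host-a", "host-b", "host-c", "host-d"]

    def hash_placement(block_id):
        """Pure hashing: the block lands wherever the hash says."""
        digest = int(hashlib.md5(block_id.encode()).hexdigest(), 16)
        return HOSTS[digest % len(HOSTS)]

    def locality_placement(block_id, vm_host):
        """Locality-aware: first replica on the VM's host, second by hash."""
        second = hash_placement(block_id)
        if second == vm_host:  # keep the two copies on different hosts
            second = HOSTS[(HOSTS.index(second) + 1) % len(HOSTS)]
        return [vm_host, second]

    for blk in ["blk-001", "blk-002", "blk-003"]:
        print(blk, "hash ->", hash_placement(blk),
              "| locality ->", locality_placement(blk, vm_host="host-a"))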
5. What Is the Difference Between a Hyper-Converged Architecture and a Traditional Architecture?

Here we use the SmartX architecture diagram for the comparison. For true hyper-converged products the following features are consistent; where there are minor feature differences, they are described specifically.
6. Why Is the Reliability of the Hyper-Converged Architecture Better Than the Traditional Dual-Controller Architecture?
When building storage out of servers, what customers care about first is reliability. To measure reliability, three dimensions matter:
First, system redundancy: in plain terms, how much hardware is allowed to fail. Second, whether recovery after a failure is fully automatic. Third, recovery speed and time: a system in a degraded state is in a more dangerous state, so the smaller the failure window, the lower the likelihood of an overall failure. A small worked example of the failure-window argument follows.
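Here is that worked example as a short Python sketch, with illustrative numbers only; the annual failure rate is assumed, not measured. If each disk fails independently at some annual rate, the chance that a second replica-holding disk dies before recovery completes grows roughly linearly with the recovery window:

    # Illustrative only: assumed annual disk failure rate, with two other
    # replica-holding disks at risk while the system is degraded.
    ANNUAL_FAILURE_RATE = 0.03      # assumed 3% AFR per disk
    HOURS_PER_YEAR = 24 * 365

    def p_second_failure(recovery_hours, disks_at_risk=2):
        """Approximate chance another relevant disk dies during recovery."""
        per_disk = ANNUAL_FAILURE_RATE * recovery_hours / HOURS_PER_YEAR
        return 1 - (1 - per_disk) ** disks_at_risk

    for window in [0.5, 4, 24]:     # fast parallel rebuild vs. slow rebuild
        print(f"{window:>4} h window -> {p_second_failure(window):.6%}")

The absolute numbers are invented; the point is the shape: cut the recovery window by an order of magnitude and the exposure drops by roughly the same factor.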
A detailed comparison of system redundancy and recovery mechanisms is given below.
A few points in the table above need further explanation:
Three replicas bring better redundancy than dual controllers, at the cost of losing more capacity to the extra copies.
The granularity of management differs from vendor to vendor: some set two or three replicas per resource pool, while others, such as SmartX, set the policy per volume. The advantage is that, within one resource pool, volumes with different replica policies can be assigned to VMs with different security-level requirements. A quick capacity calculation follows.
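As a quick illustration of the capacity cost of replicas, here is a small sketch with an assumed raw pool size (the 120 TB figure is invented for the example):

    # Usable capacity under different replica policies, hypothetical pool.
    RAW_TB = 120                    # assumed raw pool capacity

    for replicas in (2, 3):
        usable = RAW_TB / replicas
        print(f"{replicas} replicas: {usable:.0f} TB usable "
              f"({usable / RAW_TB:.0%} of raw)")

Per-volume policy lets the two trade-offs coexist in one pool: three replicas for critical VMs, two for the rest.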
Hot spares: most distributed storage in fact does not use hot-spare disks, but recovers into existing free space instead. It should be noted, though, that GlusterFS still uses the hot-spare mechanism.
As you can see, the hyper-converged architecture holds a large advantage in reliability, though the advantage here is really what distributed storage itself should provide.

7. Why Is the Random I/O Performance of the Hyper-Converged Architecture Higher Than the Traditional Architecture?
The architectural advantage of hyper-convergence in performance is also very obvious. The cost, of course, is consuming compute resources, so how little compute a product consumes is an important test of how professionally a hyper-converged product is built.
Detailed performance mechanism comparisons are given below:
Among the performance-related features, multi-node concurrency and performance scaling are still brought by distributed storage, while data localization is unique to the hyper-converged architecture and cannot be achieved by a separated compute/storage architecture; the cost, again, is compute resource consumption. Traditional-architecture storage also uses SSDs, but dual controllers cannot bring out the SSDs' performance.
For example, the EMC Unity 650F tops out at about 270,000 IOPS for 8K mixed random read/write at an 8:2 ratio, while for the better-performing hyper-converged products such as Nutanix and SmartX, a single node can easily exceed 30,000 IOPS on the same 8K 8:2 mixed random workload. Through linear scaling, about 10 nodes can reach the maximum performance of the EMC Unity 650F, and 10 nodes is still only a very small deployment size.
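The scale-out arithmetic, using only the figures quoted above:

    # Linear scale-out math using the article's figures.
    PER_NODE_IOPS = 30_000      # 8K 8:2 mixed random, per node
    UNITY_650F_MAX = 270_000    # dual-controller ceiling cited above

    nodes_needed = -(-UNITY_650F_MAX // PER_NODE_IOPS)  # ceiling division
    print(f"nodes to match the ceiling: {nodes_needed}")             # 9
    print(f"10-node cluster estimate  : {10 * PER_NODE_IOPS:,} IOPS")

Nine to ten nodes match the dual-controller ceiling, and unlike the ceiling, the cluster keeps growing from there.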
8. Why Is the Hyper-Converged Architecture More Scalable Than the Traditional Architecture?

Scalability is one of the biggest advantages of the distributed architecture. Of course, automatic data rebalancing after scale-out is not a given; it depends on how mature the vendor's product is.
A detailed comparison of system scalability is given below. The scalability benefits are, again, brought by the distributed storage architecture; a toy sketch of rebalancing follows the comparison.
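To show what rebalancing after scale-out means, here is a deliberately toy Python sketch; real systems move data in the background with throttling and consistency guarantees, none of which is modeled here:

    # Toy rebalance: even out block counts when a new node joins the pool.
    nodes = {"node-1": 90, "node-2": 88, "node-3": 92}  # blocks per node
    nodes["node-4"] = 0                                 # newly added node

    target = sum(nodes.values()) // len(nodes)
    for name in sorted(nodes):
        surplus = nodes[name] - target
        if surplus > 0:                 # shed surplus blocks to the new node
            nodes[name] -= surplus
            nodes["node-4"] += surplus

    print(nodes)  # block counts end up roughly even across all four nodes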
9. Why Is Deployment and Operation of the Hyper-Converged Architecture Simpler?
The simplification that the hyper-converged architecture brings to operations is very significant. Below, deployment and operations are compared from several angles. Some of the advantages come from the distributed architecture itself, such as having to maintain only standard commodity server hardware; adopting hyper-convergence further reduces the amount of hardware needed.
10. How Does Hyper-Convergence Save Users Money?
There is a lot of talk about this on the web, but the following example may be more accurate and quantifiable. The comparison covers cost of use only, excluding costs such as maintenance and renewal. On labor cost, removing specialized storage operations saves a large share of staff cost; by practical experience, hyper-convergence cuts personnel investment by at least 50%. On procurement cost, hyper-converged products can generally come in at about 70% of the traditional architecture.
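A rough, illustrative calculation; the baseline amounts are invented, and only the 70% and 50% ratios come from the text above:

    # Illustrative cost comparison with assumed baseline figures.
    TRADITIONAL_CAPEX = 1_000_000      # assumed traditional purchase cost
    TRADITIONAL_OPS_STAFF = 200_000    # assumed annual storage-ops staff cost

    hci_capex = TRADITIONAL_CAPEX * 0.70      # ~70% of traditional, per text
    hci_staff = TRADITIONAL_OPS_STAFF * 0.50  # >=50% staff reduction, per text

    print(f"one-time capex saving : {TRADITIONAL_CAPEX - hci_capex:,.0f}")
    print(f"annual staff saving   : {TRADITIONAL_OPS_STAFF - hci_staff:,.0f}")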
Below is an example comparing cost of use and the resulting benefit.
11. Which Scenarios Is Hyper-Convergence Not Suitable For?
Hyper-convergence in the traditional sense suits all kinds of virtualization scenarios but not bare-metal servers; for some heavy-load applications where virtualization is unnecessary, hyper-convergence will naturally not be considered. In addition, the hyper-converged architecture model assumes that compute and storage resources grow in balance; where they do not, as with massive unstructured data, it is not a good fit.
A recent trend, however, is that hyper-convergence vendors are starting to provide storage interfaces that can be accessed much like a server SAN or NAS, so bare-metal scenarios can also be served.