Recently, the software-defined storage field has seen a wave of acquisitions, and announcements of new products keep arriving. One cannot help but ask: how will "software-defined storage", the IT buzzword so frequently mentioned in 2013, affect us?
Although the major vendors promote the idea that software can free data-center storage from hardware-dominated constraints, users ultimately still need to build a hardware platform on which the software runs and does its storage-defining work. Even if the trend suggests that software-defined storage is moving toward commodity standard hardware, the differences between such platforms and traditional ones deserve our attention: workloads, underlying devices, and hardware configurations all affect real-world results differently.
More and more products now offer a shared DAS storage architecture, intended to remove the SAN from the infrastructure and reduce architectural complexity. Whether it actually reduces complexity remains debatable; it is more of a transformation. In fact, it does not truly remove the SAN, it merely improves upon it.
Storage vendors have delivered shared-nothing cluster architectures in various forms. Generally, such products are integrated software-plus-hardware "appliance" solutions. However, as end users increasingly demand to deploy their own hardware, an unprecedented new model has emerged.
Once users step outside the vendor's strict control of the underlying hardware, the storage software itself must become more intelligent, and the end user's operations team needs to learn to look at devices and solutions the way a vendor's hardware team does.
For example, the data center's east-west traffic pattern becomes critical. IT operations staff may find that they need to deploy a low-latency storage network: the new SAN no longer follows a north-south pattern but switches to a server-to-server (east-west) pattern. Traditionally, these were internal matters that only virtualization specialists needed to care about.
More importantly, we need to understand performance and failure issues. When handling local DAS, should we still use RAID, or move to a distributed RAIN pattern (Redundant Array of Independent Nodes)? If all the storage resources in the data center are combined into one large resource pool, what impact does a single node's failure have on the overall system? Will the performance of the entire resource pool be affected?
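To get a feel for the pooled-failure question, here is a toy sketch of replica placement and a single-node failure, using rendezvous (HRW) hashing as one plausible placement scheme; all node and object names, and the three-copy setting, are made up for illustration:

```python
import hashlib

def place_replicas(obj_id, nodes, copies=3):
    """Pick `copies` distinct nodes for an object via rendezvous (HRW) hashing."""
    scored = sorted(nodes,
                    key=lambda n: hashlib.md5(f"{obj_id}:{n}".encode()).hexdigest(),
                    reverse=True)
    return scored[:copies]

nodes = [f"node{i}" for i in range(10)]
objects = [f"obj{i}" for i in range(10000)]
placement = {o: place_replicas(o, nodes) for o in objects}

# Simulate the failure of a single node and count objects that lose a copy.
failed = "node3"
degraded = [o for o, reps in placement.items() if failed in reps]
print(f"{len(degraded)} of {len(objects)} objects "
      f"({100 * len(degraded) / len(objects):.1f}%) must re-replicate")
```

With three copies spread over ten nodes, roughly 30% of all objects hold a replica on any given node, so one node's failure triggers re-replication traffic that touches a large share of the pool, not just the data "on" that node.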
Anyone who has worked with distributed storage patterns has had this experience: the impact of a single node performing poorly, or failing outright, far exceeds our imagination. Sometimes the behavior closely resembles the Token Ring networks of the past: if a single interface is misconfigured, the performance of the entire system falls into a trap. By comparison, the impact of a duplicate IP address is almost negligible.
What is the impact of a single compute or storage node failing? And what happens when multiple compute or storage nodes fail at once?
In the past, the storage hardware vendor was responsible for these issues, and the local storage team did not need to be involved in implementation. Today, however, software-defined storage requires us to make a series of decisions ourselves and to think hard about how data is handled and what impact the replication mechanism has.
In theory, we all want our data to sit as close to the processing as possible. But data is persistent and occupies space over long periods. Unless we can build dynamic infrastructure that helps compute find the data's location, workload migration will be hard to avoid.
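The "bring compute to the data" idea can be sketched as a toy locality-aware scheduler, assuming a replica catalog is available; every name here (`replica_map`, `node_load`, `schedule`) is hypothetical:

```python
# Hypothetical replica catalog: object -> nodes holding a copy.
replica_map = {
    "dataset-a": {"node1", "node4", "node7"},
}
# Toy per-node load metric (lower is better).
node_load = {"node1": 2, "node2": 3, "node4": 0, "node7": 1}

def schedule(obj_id):
    """Prefer the least-loaded node that already stores the data;
    otherwise pick any node, accepting that the data must first be moved."""
    local = replica_map.get(obj_id, set())
    candidates = local or node_load.keys()
    return min(candidates, key=node_load.__getitem__)

print(schedule("dataset-a"))  # picks from the nodes that already hold a copy
```

The design choice is that the catalog, not the workload, decides placement: when no local copy exists, the scheduler falls back to migration, which is exactly the case the article argues dynamic infrastructure should minimize.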
Vendors also need to further improve their hardware. In this complex context, the inner workings of a storage system can be mysterious and obscure. To keep pace, software replication and large-scale commodity infrastructure must advance beyond where they stand today; otherwise there is no way to support a wide range of practical applications.
Meanwhile, a number of vendors keep making new moves, launching open-source and closed-source solutions alike, and large storage customers have expressed great interest. Where is the storage revolution heading? Let's wait and see.
Storage revolution? A look at the mysterious software-defined storage