"At present, it is getting softer", don't think, this sentence is used to describe the development trend of enterprise IT architecture, to summarize the cloud computing era software definition of all the characteristics of the times is not too much.
The imperative of storage virtualization
This is, of course, what many enterprises demand: to make better use of a limited budget by consolidating the physical architecture and running more virtual servers to reduce server hardware costs. Yet although many software and hardware vendors have put forward very mature server virtualization solutions, traditional virtualization rarely goes beyond the server layer.
In the era of the software-defined data center, storage virtualization is an important piece that must not be overlooked, and it is becoming the next obstacle the cloud computing era has to overcome. Even server vendors are beginning to shift their positions and carry the banner of storage virtualization.
Because storage virtualization centralizes the management of fragmented storage resources, it can help enterprises greatly improve the efficiency of the overall architecture and greatly reduce cost, both at the management level and in storage resource utilization. For example, thin provisioning can reduce storage investment by up to 75%, WAN-optimized replication can cut replication costs by 90%, and eliminating redundant data can save 95% of backup storage space.
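As a rough illustration of where a figure like 75% could come from, the short sketch below compares thick provisioning (capacity reserved up front) with thin provisioning (capacity drawn only as data is written). The volume sizes and the 25% utilization rate are hypothetical assumptions chosen to match the article's number, not measurements.

# Hypothetical illustration of thin-provisioning savings; the volume
# sizes and 25% utilization rate are assumptions, not the article's data.

volumes_gb = [500, 500, 1000]    # provisioned (logical) volume sizes
utilization = 0.25               # fraction of each volume actually written

thick = sum(volumes_gb)                           # thick: reserve full size up front
thin = sum(v * utilization for v in volumes_gb)   # thin: allocate only on write

print(f"thick provisioning: {thick} GB reserved")
print(f"thin provisioning:  {thin:.0f} GB consumed")
print(f"savings: {1 - thin / thick:.0%}")         # -> savings: 75%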
What storage virtualization does and does not do
While storage virtualization is not a new term in the storage world, and its functionality does meet users' needs, why has the technology not been fully adopted? The answer starts with the difficulties that virtual storage technology faces.
Storage-system performance has long been a major development bottleneck. A business system consists of three parts: the host layer, the network layer, and the storage layer. Of the three, the storage system has developed the slowest, and its performance bottleneck is the most restrictive factor.
Have to speed up
Improved performance, with faster reads and writes, seems especially important for storage systems, and a technology has emerged to deliver it: disk-based caching. In essence, the software builds a buffer pool. One guard in the pool, called SafeCache, puts the incoming data in order and then commits the write operations; the other guard, HotZone, regularly walks the pool and gathers the active data together so that it can be read quickly, letting the whole pool respond to commands as fast as possible.
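The description above maps onto two familiar caching ideas: an ordered write-back buffer and a hot-data read cache. The following is a minimal in-memory sketch of those two ideas in Python; it is illustrative only, not the actual SafeCache or HotZone implementation, and all names and sizes are assumptions.

from collections import OrderedDict

class BufferPool:
    """Toy buffer pool: an ordered write buffer plus a hot-data read cache.
    Illustrative only; not the vendor's SafeCache/HotZone implementation."""

    def __init__(self, backing, hot_capacity=4):
        self.backing = backing              # dict standing in for the slow disk
        self.write_buffer = OrderedDict()   # SafeCache-like: preserves write order
        self.hot = OrderedDict()            # HotZone-like: most active blocks
        self.hot_capacity = hot_capacity

    def write(self, block, data):
        self.write_buffer[block] = data     # acknowledge fast, flush to disk later

    def flush(self):
        # Commit buffered writes to disk in the order they arrived.
        while self.write_buffer:
            block, data = self.write_buffer.popitem(last=False)
            self.backing[block] = data

    def read(self, block):
        if block in self.write_buffer:      # newest buffered data wins
            return self.write_buffer[block]
        if block in self.hot:               # fast path: active data
            self.hot.move_to_end(block)     # mark as recently used
            return self.hot[block]
        data = self.backing[block]          # slow path: go to disk
        self.hot[block] = data              # promote into the hot zone
        if len(self.hot) > self.hot_capacity:
            self.hot.popitem(last=False)    # evict the coldest block
        return data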
Have to prepare for disaster tolerance
Of course, thanks to SafeCache's and HotZone's management of the data, such a buffer pool can also prevent data loss to some extent. But what about the safety of historical data? This is another worrying problem with storage virtualization. Because virtual storage puts all the data into one system environment, it is equivalent to putting all the eggs in one basket: once the basket overturns, every egg is lost. Data security therefore calls for a variety of data protection functions.
When it comes to data security, we have to mention disaster tolerance systems. For example, as your production data grows, can the virtualized disaster-tolerant system expand seamlessly, quickly taking the new data into your virtual pool while the production center's business continues with no changes, no downtime, and no wasted investment? This is essential for every business.
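To make "seamless expansion" concrete, here is a toy sketch of a virtual pool that grows by adding backing devices while existing allocations keep working. The class, method, and device names are hypothetical, for illustration only.

class VirtualPool:
    """Toy virtual storage pool that can grow online.
    Names and structure are hypothetical, for illustration only."""

    def __init__(self):
        self.devices = []          # backing devices contributing capacity
        self.used_gb = 0

    @property
    def capacity_gb(self):
        return sum(d["size_gb"] for d in self.devices)

    def add_device(self, name, size_gb):
        # Expansion is just appending capacity; existing volumes and
        # in-flight I/O are untouched, so no downtime is required.
        self.devices.append({"name": name, "size_gb": size_gb})

    def allocate(self, gb):
        if self.used_gb + gb > self.capacity_gb:
            raise RuntimeError("pool exhausted; add a device first")
        self.used_gb += gb

pool = VirtualPool()
pool.add_device("array-A", 1000)
pool.allocate(900)
pool.add_device("array-B", 1000)   # grow the pool online as data increases
pool.allocate(900)                 # succeeds without disturbing existing data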
Have to think about the future expansion
Having solved these challenges, a company that has just deployed storage virtualization products may feel like raising a toast, but without foresight there will be worries later. When you chose a product for its performance, did you ever think about its scalability? A typical storage virtualization product can only manage a limited amount of storage; the limit may not show at the initial stage of deployment, but it can become a worry for you in the future, so assessing scalability in advance is the right thing to do.
This requires building a flexible storage architecture. For existing systems, the storage devices are simply accessed through the host. For systems that are migrated or repartitioned on FC switches, a software layer needs to sit between the production hosts and the storage devices, so that host access continues without any modification to the original storage device access and day-to-day business is not delayed; you will also realize that you need to be able to perform data migration and hardware upgrades online.
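Online migration is usually achieved by copying blocks in the background while writes continue, tracking which blocks get dirtied during the copy, and recopying them before cutover. The sketch below is a minimal single-threaded illustration of that dirty-block technique; the block contents and the writes that arrive mid-copy are hypothetical.

def migrate_online(src, dst, incoming_writes):
    """Copy all blocks from src to dst while serving writes.
    Minimal illustration of dirty-block tracking; a real migrator
    loops until the dirty set is small, then pauses I/O and cuts over."""
    dirty = set()

    def handle_write(block, data):
        src[block] = data          # writes still land on the source...
        dirty.add(block)           # ...and are remembered for recopy

    # Phase 1: bulk copy, with writes arriving mid-copy.
    for block in list(src):
        dst[block] = src[block]
        for w_block, w_data in incoming_writes.get(block, []):
            handle_write(w_block, w_data)

    # Phase 2: catch-up pass recopies blocks dirtied during phase 1.
    for block in dirty:
        dst[block] = src[block]
    return dst                     # now consistent; hosts cut over to dst

src = {0: "a", 1: "b", 2: "c"}
# Hypothetical writes that arrive while each keyed block is being copied.
writes_during_copy = {0: [(1, "b2")], 1: [(0, "a2")]}
dst = migrate_online(src, dict(), writes_during_copy)
assert dst == src   # destination matches source after the catch-up pass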
Have to consider data recovery
Of course, even when a logical disaster cannot be avoided, what about recovery? It, too, is an important guarantee when deploying storage virtualization. A snapshot agent function is needed to protect 100% of the data integrity of databases and messaging systems, ensuring transactional consistency during recovery. Application-aware snapshot agents can be used with all major enterprise applications, including Microsoft Exchange, Microsoft SQL Server, Oracle, Informix, and Sybase databases, as well as the VMware virtualization platform. Finally, the snapshot should be mountable on a virtual volume for immediate recovery of a single file or a volume-level raw-device recovery.
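The usual sequence behind an application-aware snapshot is to quiesce the application so in-flight transactions are flushed, take the point-in-time snapshot, resume I/O, and later mount the snapshot for recovery. The sketch below walks through that flow with stub classes; every name in it is a hypothetical stand-in, not a real vendor agent API.

import time

# Hypothetical application-aware snapshot flow; the classes and method
# names are illustrative stand-ins, not a real vendor API.

class App:
    def __init__(self, name):
        self.name = name
    def quiesce(self):
        print(f"{self.name}: flushing transactions, pausing writes")
    def resume(self):
        print(f"{self.name}: resuming writes")

class Volume:
    def __init__(self):
        self.snaps = {}
    def snapshot(self, label):
        self.snaps[label] = "point-in-time image"   # stand-in for a real snapshot
        return label
    def mount_snapshot(self, snap_id, at):
        # Mounting the snapshot enables single-file or volume-level recovery.
        print(f"mounted snapshot {snap_id} at {at}")

def take_consistent_snapshot(app, volume):
    app.quiesce()                    # ensure transactional consistency first
    try:
        return volume.snapshot(label=f"{app.name}-{int(time.time())}")
    finally:
        app.resume()                 # keep the quiesce window as short as possible

vol = Volume()
snap = take_consistent_snapshot(App("sqlserver"), vol)
vol.mount_snapshot(snap, at="/mnt/recovery")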
These five points help enterprises achieve an efficient, optimized IT infrastructure, lower costs, and strong data services, and server virtualization alone clearly cannot deliver them. Deploying storage virtualization is a must for enterprises, but deploying it intelligently requires CIOs to gradually change their thinking, look more closely at storage virtualization, and find the right way to put it into practice; only then does the enterprise that is getting "softer" have a reason to stay "hard."