Today, most businesses are computer-managed and their data lives in computer systems. The continuous, reliable, and secure operation of those systems is therefore vital to the entire enterprise. If a system stops running properly for even a short period, the impact is rarely local; it can cause significant losses across the whole business. Yet at present only a few enterprises have established a disaster-recovery backup system. Why do most companies lack one? There are three reasons:
"It's not going to happen in our company!" For most it executives, it is assumed that such occasional events will occur only in other companies, not in the companies under their own jurisdiction. There is therefore no consideration in this regard.
"We can't afford a disaster-tolerant system!" Due to the tightening of it investment, there is a large gap in the establishment of a truly reliable disaster recovery system.
"Inability to coordinate downtime for various departments to implement disaster-tolerant systems" due to the implementation of some disaster recovery programs, it requires a long period of system installation and commissioning, which makes it difficult for IT managers to coordinate the downtime of various departments.
The first factor is human and subjective; it can be addressed by non-technical means. The latter two can be overcome through the development and use of new technology. Perhaps you have heard of one such technology: storage virtualization, which can reduce both the investment and the complexity involved in implementing a traditional disaster-recovery program.
I. The traditional disaster-tolerant scheme
Figure 1: A typical array-to-array disaster-tolerance scenario
Let us first analyze the traditional disaster-tolerance scheme shown in Figure 1. As is well known, the traditional scheme has its advantages, but for most enterprises it is simply too expensive to implement. Where does the cost come from? Two places: hardware investment and management investment.
Hardware investment
If you want to back up your data to another storage system (in the same building or at a remote site), there are two ways to do it: connect the two disk groups directly, or have the host issue duplicate write commands through a mediation layer. The disadvantage of the second approach is that it adds load to the application server and increases the investment in managing a complex heterogeneous system. It is not hard to see that, given the drawbacks of the host-based approach, the only effective disaster-tolerance solution is array-to-array, and building it is very expensive. Today, mirroring between disk groups requires specialized equipment, and the scheme demands two identical sets of hardware (devices of different brands generally cannot exchange data directly): one set holds the production data, the other the backup. This is not just an increased investment; it is arguably a waste.
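The host-based "duplicate write" approach can be sketched as follows. The class and names here are hypothetical, but the pattern, with the application server itself issuing every write twice, shows where the extra load on the host comes from:

```python
class HostMirror:
    """Hypothetical host-based mirror: the application server
    duplicates every write to a primary and a backup device."""

    def __init__(self, primary, backup):
        self.primary = primary  # e.g. the local disk group
        self.backup = backup    # e.g. the remote disk group

    def write(self, block_addr, data):
        # Two writes per application write -- this duplication is the
        # extra CPU/IO load the host-based approach imposes.
        self.primary[block_addr] = data
        self.backup[block_addr] = data


# Usage: two plain dicts stand in for the two disk groups.
primary, backup = {}, {}
mirror = HostMirror(primary, backup)
mirror.write(0, b"payroll record")
assert primary[0] == backup[0] == b"payroll record"
```

In an array-to-array scheme, by contrast, the second write is issued by the storage controller itself, which is why the host is relieved but two matched (and expensive) arrays are required.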
Beyond the disk groups themselves, the investment in external accessories is also very high. A traditional implementation can use dedicated point-to-point fiber links, or it can convert the disk data stream into network traffic that conforms to a network protocol (for example, TCP/IP) and send it over a high-speed public network. Both methods require additional investment in dedicated network equipment. Needless to say, point-to-point optical connections are also limited in transmission distance.
Management investment
Depending on the network bandwidth chosen between the data center and the disaster-recovery center, the operating cost of the scheme can be very high. Most existing disaster-tolerance schemes require a dedicated broadband channel between the two centers to reduce latency and keep the mirrored data synchronized. Compared with the ongoing operating and maintenance costs, the high initial hardware investment can seem insignificant.
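The latency pressure behind that dedicated link is physical, not just economic. A back-of-envelope sketch (assuming light travels through fiber at roughly two-thirds the speed of light, about 200 km per millisecond) shows the minimum delay a synchronous write must pay for each round trip between the two centers:

```python
# Back-of-envelope: why synchronous mirroring needs short, fast links.
# Every synchronous write must wait for at least one round trip to the
# remote center before it can be acknowledged.

FIBER_SPEED_KM_PER_MS = 200.0  # ~2/3 the speed of light in fiber

def min_round_trip_ms(distance_km):
    """Lower bound on the latency added to each synchronous write."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

for d in (10, 100, 1000):
    print(f"{d:>5} km -> at least {min_round_trip_ms(d):.1f} ms per write")
```

At 1,000 km that is at least 10 ms per write before any queuing or protocol overhead, which is why long-distance synchronous mirroring is so hard to operate cheaply.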
For a large conglomerate that wants to establish a unified disaster-recovery center, the traditional solution is also very difficult. The enterprise must configure disaster-recovery equipment to match the storage devices used by each branch and each department, pair different software with different equipment, and staff different administrators for each. This imposes very strict consistency requirements on the hardware and software configuration of the data center and the disaster-recovery center. Clearly, this is not an economical and practical solution.
In addition, a company's existing data volume is usually very large, so the data migration and initialization after the disaster-recovery system is built take a very long time, during which users cannot use their computer systems normally. This makes it even harder for IT managers to schedule the implementation.
II. The birth of a new scheme
With such an expensive, resource-hungry scheme, a company's disaster-recovery budget balloons. Enterprises urgently need a new scheme that is both safe and reliable and economical. The disaster-tolerance scheme based on storage virtualization technology answers that need. The new virtual disaster-recovery solution uses the network's existing transmission capabilities to replicate data to remotely connected storage systems, and all of the problems mentioned above are solved.
By moving the data-replication function from the disk groups into a network-based, universal central storage service, replication no longer depends on two expensive, identical disk groups.
This generic service lays the foundation for a unified, managed storage network: users can choose the best disk array for production data and inexpensive JBOD or other low-cost hardware for the replica. There is no need to purchase a second expensive disk array, which solves the problem of high investment in building an effective disaster-tolerance system.
Another benefit is the ability to use the networking features of a general-purpose operating system. A dedicated storage array, with its own embedded operating system, is limited in how it communicates over general-purpose networks: it can talk only to disks, or to similar systems, through disk-oriented protocols such as Fibre Channel and ESCON. Unfortunately, long-distance transmission over public networks requires more robust network protocol support, namely TCP/IP, and disk arrays themselves lack intelligent network transport. By contrast, a general-purpose operating system and hardware platform have advanced network transport capabilities: the conversion between storage protocols (FC, SCSI) and network (LAN/WAN) protocols can be performed without an external conversion device. Discarding the storage-conversion device greatly reduces the hardware investment in implementing a disaster-tolerance system.
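What "conversion in software" means can be illustrated with a toy sketch. Real storage-over-IP protocols such as iSCSI are far richer than this; the point is only that a general-purpose OS can reframe a block-level write for a TCP byte stream without any external converter. All names below are illustrative:

```python
import json
import struct

def block_write_to_network_frame(lba, data):
    """Toy conversion: wrap a block-level write (SCSI-style logical
    block address + payload) into a length-prefixed message suitable
    for sending over a TCP stream."""
    header = json.dumps({"op": "WRITE", "lba": lba, "len": len(data)}).encode()
    # Length-prefix the header so the receiver can reframe the byte
    # stream that TCP delivers.
    return struct.pack(">I", len(header)) + header + data

def network_frame_to_block_write(frame):
    """Inverse conversion on the receiving side."""
    (hlen,) = struct.unpack(">I", frame[:4])
    header = json.loads(frame[4:4 + hlen])
    payload = frame[4 + hlen:4 + hlen + header["len"]]
    return header["lba"], payload


frame = block_write_to_network_frame(2048, b"backup block")
assert network_frame_to_block_write(frame) == (2048, b"backup block")
```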
III. Asynchronous mirroring technology
The final difficulty of the storage-virtualization disaster-recovery scheme, the bandwidth requirement of the link and its running cost, is solved by asynchronous mirroring. The essential difference between synchronous and asynchronous replication lies in how the response times of the sending and receiving ends are guaranteed. Intuitively, keeping the transmission synchronized seems very important, but from a physical and economic point of view it is often hard to achieve: distance, performance, and budget are all important factors in the decision. Obviously, with asynchronous backup, the disaster-recovery center does not reflect the very latest data. In fact, the remote copy lags in time by an amount that depends on the rate of change of the original data and the bandwidth between the two sites. Different applications and different users tolerate different lags, but for most disaster-tolerance applications the lag caused by asynchronous backup is acceptable. If a disaster strikes and the backup data is ten minutes behind the original, that is still far better than having no data at all.
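The mechanism described above can be sketched as a write queue: the primary acknowledges writes immediately and a background process drains them to the replica as bandwidth allows. The class below is a minimal illustration, not any product's actual implementation:

```python
from collections import deque

class AsyncMirror:
    """Minimal sketch of asynchronous mirroring: writes are acknowledged
    immediately and queued; a drain step ships them to the replica as
    bandwidth allows, so the replica lags the primary."""

    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.pending = deque()  # writes not yet shipped to the replica

    def write(self, addr, data):
        self.primary[addr] = data      # acknowledged right away
        self.pending.append((addr, data))

    def drain(self, max_writes):
        """Ship up to max_writes queued updates (models limited bandwidth)."""
        for _ in range(min(max_writes, len(self.pending))):
            addr, data = self.pending.popleft()
            self.replica[addr] = data

    def lag(self):
        """Number of writes the replica is behind the primary."""
        return len(self.pending)


m = AsyncMirror()
for i in range(5):
    m.write(i, f"v{i}".encode())
m.drain(max_writes=3)   # a narrow link only ships 3 of the 5 writes
print(m.lag())          # the replica is 2 writes behind
```

Note that the primary never waits on `drain`: that decoupling is exactly why asynchronous mirroring tolerates a cheap, slow link at the cost of a bounded lag.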
The virtualization-based disaster-tolerance solution proposed above works as follows:
1. Virtual drives provide the mirroring and the disk-to-network protocol conversion, replacing storage-conversion components and high-end disk controllers.
2. Apart from capacity, there are no special requirements for the disk arrays at either end, allowing users to select storage devices according to actual needs and budget.
3. The asynchronous backup function can run over a cheap T1 connection or an OC-768 high-speed network connection. The only difference between the two is the lag time between the two copies of the data.
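The lag difference in point 3 is easy to estimate. Taking the nominal line rates (T1 at 1.544 Mbit/s, OC-768 at about 39.8 Gbit/s, protocol overhead ignored) and a hypothetical backlog of 100 MB of changed blocks:

```python
# Hypothetical lag estimate: time to ship a backlog of changed data over
# the two link types mentioned above.  Line rates are nominal; protocol
# overhead is ignored.

T1_BPS = 1.544e6       # T1 line rate, bits per second
OC768_BPS = 39.813e9   # OC-768 line rate, bits per second

def backlog_seconds(changed_bytes, link_bps):
    """Seconds needed to transmit a backlog of changed data."""
    return changed_bytes * 8 / link_bps

backlog = 100 * 1024 ** 2   # 100 MB of changed blocks (assumed workload)
print(f"T1:     {backlog_seconds(backlog, T1_BPS):8.1f} s")
print(f"OC-768: {backlog_seconds(backlog, OC768_BPS):8.4f} s")
```

The same backlog that takes roughly nine minutes to drain over a T1 clears in a fraction of a second over OC-768; whether that lag is acceptable is the budget-versus-recovery-point trade-off the scheme leaves to the user.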
Figure 2: A disaster-tolerance scenario using virtualization technology