Following a series of business-focused technologies such as CRM, ERP, and collaboration, cloud computing has pushed its sphere of influence into disaster recovery. This is good news for end users: a sharp fall in prices has brought richer options to business users and made it feasible for small businesses, which never dared attempt it before, to put a disaster recovery mechanism in place.
However, the cloud is by no means omnipotent. For many enterprises, cloud-based disaster recovery falls short of their needs, or misses them entirely.
To help you develop a cloud strategy that matches your business goals, consider the following five questions:
1. What data needs to be restored first after a disaster?
As CIOs pay more attention to disaster recovery, such plans inevitably collide with big data. Employees today manipulate and store data at their fingertips on an unprecedented scale, hoarding information on PC disks, mobile devices, cheap USB drives, and online storage services such as Dropbox, and the unit cost of storage keeps falling to match. In 2000 storage still cost close to $10 per gigabyte; today it is less than 10 cents. With costs that low, everyone naturally relaxes: useful or not, save it anyway.
In the big data age, it is clear that we cannot back up and restore every byte, at least not in the first hours after a disaster strikes. Data varies in importance, and full recovery is both wasteful and needless. For the construction enterprise Graniterock, the urgent matter after a failure is getting its enterprise resource planning (ERP) software back to normal as soon as possible: ERP dispatches capacity and assigns trucks to the highest-priority construction sites, such as airports, to ensure drivers can deliver concrete to the designated location before it solidifies.
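To make that prioritization concrete, here is a minimal sketch of a tiered recovery manifest. The tier contents and system names are hypothetical, not Graniterock's actual plan; the point is simply that the restore order is decided before the disaster, not during it.

    # A minimal sketch of a tiered recovery manifest (hypothetical names).
    # Tier 0 is restored first; lower-priority tiers follow once it is up.
    RECOVERY_TIERS = {
        0: ["erp"],                      # dispatching stops without it
        1: ["exchange", "databases"],    # core communication and records
        2: ["file-shares", "archives"],  # can wait until operations resume
    }

    def recovery_order(tiers):
        """Yield systems in the order they should be brought back online."""
        for tier in sorted(tiers):
            for system in tiers[tier]:
                yield system

    print(list(recovery_order(RECOVERY_TIERS)))
    # ['erp', 'exchange', 'databases', 'file-shares', 'archives']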
The company used to run an on-premises ERP solution from Oracle, but the maintenance and management burden of the local deployment was hard to digest for Graniterock's very lean IT team. With that in mind, Graniterock turned decisively to a managed ERP offering from Velocity. The biggest advantage of a cloud-based ERP system is that disaster recovery becomes a built-in feature rather than, as before, a lengthy, cumbersome, and resource-intensive special project.
"ERP is the focus of our business, but we also want to be able to use more supporting classes in our business in the future," says Steve Snodgrass, a graniterock company CIO. The company has recently replaced the previously used multi-vendor storage environment (from EMC, NetApp, Data domain, and buffalo four) as a SAN solution provided by the nimble storage.
For now, the Nimble SAN remains a local backup mechanism, but operations manager Ken Schipper wants the company to have its own online disaster recovery plan as soon as possible. Only a scenario that covers Exchange, virtual machines, and databases can be called a true disaster recovery mechanism.
2. What disasters could strike where the business is located?
When companies turn to cloud-based storage solutions, geography often does not get the attention it deserves. If you do not take the specifics of location into account when choosing a data recovery service, a major incident like Katrina or Fukushima can render the disaster recovery mechanism useless.
"Many technical people are not aware of the ' disaster ' attributes in disaster recovery scenarios, and they overlook the need for corporate employees to evacuate infrastructure and even the location of the company in many cases." People are always pinning their hopes on the cloud service provider, but in fact no one will come to the door to solve the problem if they don't pay, "Ginnie Stouffer, a continuing U.S. registered business expert from IDC Analytics, a business continuity firm in the city of Prussia, said.
"Katrina is a great example to learn from," she added. "Many companies are aware of the importance of offsite backups, but have chosen a remote data center for New Orleans." A lot of stupid banks have even approved such projects, but we know the way things are and we know the dangers of doing so. ”
Graniterock, for example, is headquartered in Watsonville, Calif., about 45 minutes' drive south of San Jose. The area is earthquake-prone, as anyone who follows the news should know. In fact, both the local IT infrastructure in Watsonville and the Velocity data center in Seattle face frequent earthquakes. A single earthquake may not destroy both regions at once, but a sustained series of seismic and volcanic events could keep striking until the company's off-site backup program is undone.
Yes, the probability of this happening is extremely low. But weren't Hurricane Katrina and the Fukushima nuclear leak also low-probability events? Disaster recovery exists precisely for low-probability events; you must keep that in mind.
"Now we are transferring the information in the Velocity Seattle Data Center to Denver in real time," Snodgrass points out. "Storing data in low-risk areas can effectively improve the security of critical information." ”
Could a natural disaster with an extremely wide reach sweep through San Jose, Seattle, and Denver all at once? Of course it's possible. But a catastrophe of that magnitude would have to be a world-class event on the order of an alien invasion or a zombie outbreak, and by then Snodgrass would have long since fled; nobody would still care about data recovery.
3. Are you deploying a disaster recovery mechanism or just data replication?
Many of the services and vendor offerings people have in mind are not real disaster recovery mechanisms but simply data replication services. Data replication does have a role to play, but it cannot give end users the ability to regenerate a complete mirror of their infrastructure. All users get back is the replicated data; there is no guarantee that the systems the data belongs to are still intact.
Operating systems, applications, and user settings are not included in the replicated content. Once disaster occurs, the data cannot be restored until servers and databases are rebuilt from scratch, which means the enterprise must endure longer downtime and greater public pressure.
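The gap is easy to see in the shape of what each approach captures. The following sketch (field names are illustrative, not any vendor's schema) contrasts a bare data replica with the full image a disaster recovery mechanism needs:

    from dataclasses import dataclass, field

    @dataclass
    class DataReplica:
        """What plain data replication gives you: the bytes, nothing else."""
        database_dump: str
        file_snapshots: list

    @dataclass
    class DisasterRecoveryImage:
        """What full disaster recovery requires on top of the data."""
        data: DataReplica
        os_image: str             # operating system to rebuild servers from
        application_builds: list  # installed applications and versions
        user_settings: dict = field(default_factory=dict)  # accounts, permissions

    # With only a DataReplica, servers and databases must be rebuilt by hand
    # before the data is even usable; a DisasterRecoveryImage can be restored
    # directly, which is exactly what shortens downtime.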
Data replication is of course important to the normal operation of an enterprise, but technicians must be acutely aware that it is by no means the whole of a disaster recovery mechanism. With the eventual maturation of HTML5 and browser-based solutions, data replication may one day truly take up the banner of disaster recovery.
Until then, do not be blindly optimistic: putting existing disaster recovery mechanisms properly in place is the right path.
4. Are all the complementary tools needed for smooth implementation of the plan ready?
Cloud-based data replication is far more popular than cloud-based disaster recovery, and one major reason is that moving the large volumes of data disaster recovery requires across the public Internet is too expensive, forcing companies to buy costly MPLS connections. It is for cost reasons that most companies still implement disaster recovery at the physical level with less effective but cheaper media, such as tape.
For a cloud-based disaster recovery mechanism that can truly serve the enterprise, storage and mirroring capabilities alone are far from enough. Even if you host your application with a cloud service provider, there is still no guarantee it can efficiently mirror data and transfer it across regions.
How can such massive amounts of data be transmitted over the Internet without exhausting infrastructure resources? Many companies have found that complementary technologies, represented by CDNs and WAN optimization, are essential to the success of disaster recovery efforts.
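A back-of-the-envelope calculation shows why. Assuming, purely for illustration, a 10 TB dataset and a 100 Mbps Internet link:

    def transfer_days(data_tb, link_mbps, reduction=1.0):
        """Days to move data_tb terabytes over a link_mbps link.
        reduction models WAN optimization (e.g. 5.0 = 5x dedup/compression)."""
        bits = data_tb * 1e12 * 8                     # terabytes -> bits
        seconds = bits / (link_mbps * 1e6 * reduction)
        return seconds / 86400

    print(f"{transfer_days(10, 100):.1f} days")       # ~9.3 days on the raw link
    print(f"{transfer_days(10, 100, 5.0):.1f} days")  # ~1.9 days with 5x optimization

At more than a week for a raw transfer, it is easy to see why deduplication, compression, and CDN-style distribution can make or break a recovery window.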
"We've been providing WAN optimization services to our customers over the years, but to be honest, that's still a high cost," said Jon Beck, senior vice president of global channel and cooperation at Opsource, a company that specializes in cloud computing and managed management services. The cost of installing hardware devices in each of the infrastructure, most office locations, and other remote sites is undoubtedly unacceptable because of traditional WAN optimization scenarios.
To help customers cut costs and bring data replication and disaster recovery services to a broader customer base, OpSource moved Aryaka's WAN optimization solution into its cloud environment. "At OpSource we have confidence in the SaaS model, and Aryaka is just one of many vendors following the SaaS model to provide WAN optimization services," Beck says. After careful tuning, all of the disaster recovery scenarios for OpSource and its customers are now billed on actual usage, with no expensive upfront equipment purchases.
5. Is a Plan B for offline operations ready?
Even the most thorough disaster plan cannot be guaranteed foolproof in the face of a real disaster. For Graniterock, the company's substantial construction capacity can help managers clear sites and rebuild the organization promptly after a disaster. But how would they organize that work?
"Offsite data backup is simply not going to work if disaster strikes and the company's local WAN is not working properly," Snodgrass.
Although Graniterock long ago moved billing and payment online, it still keeps plenty of paper on hand for a rainy day: stacks of dispatch orders and verification slips, so that engineers can plan and assign complex concrete deliveries by hand when network services are unavailable.