Maintenance windows for IT infrastructure have been shrinking steadily, because enterprises operating across time zones need services to be available around the clock. Although service level agreements (SLAs) are generally expressed in terms of availability, they are difficult to measure from an operational standpoint, because a composite SLA that spans multiple architectures overlaps the SLAs of its components.
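To make the measurement problem concrete, consider how availability composes: when a service depends on several layers in series, its composite availability is at best the product of the per-layer figures. The sketch below shows this arithmetic; the layer names and numbers are hypothetical.

```python
# Composite availability of serially dependent layers (hypothetical figures).
# If every layer must be up for the service to be up, availabilities multiply.
layers = {
    "storage": 0.999,   # 99.9%
    "compute": 0.9995,  # 99.95%
    "network": 0.9999,  # 99.99%
}

composite = 1.0
for name, availability in layers.items():
    composite *= availability

downtime_minutes = (1 - composite) * 365 * 24 * 60
print(f"Composite availability: {composite:.4%}")        # ~99.84%
print(f"Expected downtime: {downtime_minutes:.0f} minutes/year")
```

Even though each layer on its own looks highly available, the composite figure is worse than any single component, which is why per-component SLAs are hard to translate into an operational guarantee.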
On most cloud platforms available on the market, I/O performance is the first consideration. When the cloud platform cannot be trusted to the same degree as infrastructure managed by internal IT groups, building heavy redundancy into the infrastructure is the most practical way to reduce the risk of downtime. And even though cloud storage providers weigh cost against availability, the service level agreements on the market today do not meet the needs of large enterprises.
At the high end of enterprise-class cloud computing, storage systems draw on in-house enterprise storage solutions, including multipathing, redundant controllers, diverse fabrics, RAID, end-to-end fabric control and monitoring, and proven change management processes. At the low end of enterprise-class cloud computing, storage availability is comparable to today's mass-market cloud platforms. To provide the level of service an organization needs, enterprise-class cloud storage providers must combine sound architectural design with well-proven innovations.
Primary data protection
Primary data is the data that online systems run against. It can be protected by a single technology or by a combination of several. Common methods include RAID, multiple copies, remote replication, snapshots, and continuous data protection (CDP).
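As a small illustration of one of these methods, RAID parity protection comes down to XOR arithmetic: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. This is a minimal sketch of the idea, not a real RAID implementation.

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR all blocks together byte-by-byte to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving_blocks: list[bytes], parity_block: bytes) -> bytes:
    """Recover a single missing block: XOR the survivors with the parity."""
    return parity(surviving_blocks + [parity_block])

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks on three disks
p = parity(data)                     # parity block on a fourth disk

# Simulate losing the second disk and rebuilding its block.
recovered = rebuild([data[0], data[2]], p)
assert recovered == b"BBBB"
```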
On most cloud platforms on the market, primary data protection is left to the user. Because of the complexity and cost of the technologies above, it is rare to find them in mass-market cloud platforms today. Some popular cloud storage solutions hold costs down by maintaining multiple copies of the data to protect it, with the entire system running on RAID-protected storage.
Primary data protection for enterprise-class clouds should be modeled on internal enterprise-grade solutions. Where an application's Business Impact Analysis (BIA) calls for them, proven technologies such as snapshots and disaster recovery should be in place.
The main difference between internal enterprise solutions and enterprise cloud storage is how primary data protection is bundled into the offering. To preserve the on-demand experience of deploying a cloud environment, the various options must be packaged so that the service can be provisioned automatically. The result is a fixed range of bundled options designed to cover the bulk of requirements; no single bundle may match a given customer's exact needs for snapshots, remote replication, and so on. In practice, most users will find that in the enterprise cloud they trade some flexibility for other management benefits, as the sketch below illustrates.
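Here is a hypothetical sketch of how such bundles might be cataloged so a service can be provisioned automatically. The tier names, RPO figures, and features below are illustrative assumptions, not any vendor's actual offering.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtectionBundle:
    """One pre-packaged primary-data-protection option (hypothetical)."""
    name: str
    snapshots: bool
    snapshot_interval_hours: int   # 0 means no snapshots
    remote_replication: bool
    rpo_minutes: int               # worst-case data loss the bundle targets

# A fixed menu of bundles stands in for per-customer tailoring.
CATALOG = [
    ProtectionBundle("bronze", snapshots=False, snapshot_interval_hours=0,
                     remote_replication=False, rpo_minutes=24 * 60),
    ProtectionBundle("silver", snapshots=True, snapshot_interval_hours=4,
                     remote_replication=False, rpo_minutes=4 * 60),
    ProtectionBundle("gold", snapshots=True, snapshot_interval_hours=1,
                     remote_replication=True, rpo_minutes=60),
]

def select_bundle(required_rpo_minutes: int) -> ProtectionBundle:
    """Pick the cheapest bundle whose RPO meets the requirement."""
    for bundle in CATALOG:  # ordered cheapest-first
        if bundle.rpo_minutes <= required_rpo_minutes:
            return bundle
    raise ValueError("no bundle meets the requested RPO")

print(select_bundle(180).name)  # -> "gold": only gold meets a 3-hour RPO
```

The fixed menu is the tradeoff described above: a customer whose exact needs fall between tiers must round up, sacrificing flexibility in exchange for automated deployment.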
Secondary data protection
Secondary data is a historical copy of primary data, typically taking the form of backups. This kind of protection serves to mitigate data corruption, restore deleted or overwritten data, and retain data long-term for business or regulatory purposes. Typical solutions combine backup software with several types of storage media. Data deduplication may be used, but in multi-tenant environments it can create problems that affect data isolation.
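To see where the isolation concern comes from, consider content-addressed deduplication: identical chunks hash to the same key, so two tenants' backups can end up sharing physical blocks. A minimal sketch, assuming a single chunk store shared across tenants for illustration:

```python
import hashlib

CHUNK_SIZE = 4096
chunk_store: dict[str, bytes] = {}   # one shared store across all tenants

def backup(data: bytes) -> list[str]:
    """Split data into chunks, store each once, return the chunk keys."""
    keys = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        key = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(key, chunk)   # dedup: store only new chunks
        keys.append(key)
    return keys

tenant_a = backup(b"shared corporate template" * 400)
tenant_b = backup(b"shared corporate template" * 400)

# Identical content dedups to the same physical chunks: both tenants now
# reference the same entries in chunk_store, which is exactly the isolation
# question raised above.
assert tenant_a == tenant_b
```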
Some commercial and public-domain solutions can be layered on top of mass-market cloud storage to provide secondary data protection, but vendors of mass-market cloud platforms rarely package these together with online storage. And, for varying reasons, the service level (SLA) issues around recovery time and retention are often difficult to address.
Whether the solution is a private or a multi-tenant cloud platform, management tools, visibility, and the service level of recovery are the keys to secondary data protection. Once a recovery request is submitted, the restore should start immediately and automatically. Users should be able to rely on a predictable level of recovery performance (GB of data recovered per unit of time) and should be able to choose a retention period from a short list of options. Finally, users should be able to check the status of their online backups. Because backup frequency and retention dictate the resources required to store backups, and therefore the cost, customers should be able to observe resource usage and charges online rather than be surprised when the bill arrives.
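Below is a hypothetical sketch of the arithmetic a user-facing tool could expose: predicted restore duration from a quoted recovery rate, and monthly storage charges from backup frequency and retention. All rates and prices here are illustrative assumptions.

```python
def restore_minutes(backup_gb: float, rate_gb_per_hour: float) -> float:
    """Predicted restore duration at a quoted recovery rate."""
    return backup_gb / rate_gb_per_hour * 60

def retained_capacity_gb(full_backup_gb: float, backups_per_day: float,
                         retention_days: int) -> float:
    """Retained capacity if every backup were a full copy (worst case)."""
    return full_backup_gb * backups_per_day * retention_days

# Illustrative numbers only: 500 GB dataset, daily backups kept 30 days,
# a quoted 200 GB/hour restore rate, and $0.02 per GB-month of storage.
capacity = retained_capacity_gb(500, backups_per_day=1, retention_days=30)
print(f"Retained capacity: {capacity:,.0f} GB")                      # 15,000 GB
print(f"Monthly charge:    ${capacity * 0.02:,.2f}")                 # $300.00
print(f"Restore estimate:  {restore_minutes(500, 200):.0f} minutes") # 150
```

Surfacing these figures before the restore or the bill, rather than after, is what keeps customers from being surprised at checkout.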