9 Essentials for cloud storage

Among the many recent discussions of cloud computing, storage is often treated as a mere underlying platform. Today, many cloud computing offerings provide only a collection of CPU cores, fixed memory allocations, low-speed storage, and Internet-facing IP addresses. Recently, however, there have been interesting advances in cloud storage, especially web-based access, which means storage access is no longer restricted to device files or NFS mount points.


  


The "enterprise-class features" of typical data storage and management are constantly being pushed into new IT architecture innovations. Storage architects know these features are important for business-critical production applications, but current cloud computing platforms lack them. The goal of this white paper is to describe the nine essential elements of storage in enterprise cloud computing.


  


Element 1: Performance


  


Performance comes at a cost. In a well-architected application, performance and cost are in balance. The key is using appropriate technology to match the performance needs of business applications, which first requires translating the enterprise's business language into an IT model. Because this translation is difficult, businesses are often stuck with static IT architectures, unable to cope with changing business performance requirements. Enterprise cloud computing provides a platform that responds more readily to changing performance requirements.


  


On early cloud computing platforms, storage I/O generally had high latency. Vendors emphasized that data in the cloud was easier to access, but paid little attention to the service levels associated with performance, bandwidth, and IOPS. There are two reasons for the higher latency: the mode and type of access, and the configuration of the storage media.


  

The access path involves a stack of protocols (such as SOAP, NFS, TCP/IP, and FCP) layered above the physical layer of the OSI model. Data access over a shared physical layer (such as Ethernet) plus several protocol tiers, such as SOAP or NFS, typically incurs more latency than a specialized physical layer such as FC. Most cloud computing platforms on the market route data access over the Internet, adding still more latency.
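The layered access path described above can be sketched as a latency budget, where each protocol tier adds its own delay. The figures below are illustrative assumptions, not measurements from any real deployment:

```python
# A minimal sketch of summing per-layer latency to see why a deep protocol
# stack (web-service access over the Internet) costs more than direct FC
# access. All latency values are assumed, for illustration only.

FC_STACK = {"FC physical + FCP": 0.05}                      # ms, assumed
WEB_STACK = {                                               # ms, assumed
    "Ethernet physical": 0.05,
    "TCP/IP": 0.20,
    "HTTP": 0.50,
    "SOAP envelope processing": 1.00,
}

def stack_latency_ms(stack):
    """Total one-way latency of a protocol stack, in milliseconds."""
    return sum(stack.values())

print(stack_latency_ms(FC_STACK))   # direct block access
print(stack_latency_ms(WEB_STACK))  # layered web-service access
```

Whatever the exact numbers, the point stands: every tier between the application and the physical medium contributes latency, and a web-service path stacks several tiers that a dedicated FC path avoids.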


  


For storage media, most cloud computing offerings use SATA disks in RAID or JBOD configurations. Because SATA (long regarded as a near-line disk class) generally performs worse than enterprise disks (usually FC disks), the storage hardware often falls short of application requirements.


  


When relatively low-bandwidth, high-latency access paths are combined with low-performance storage media, the resulting storage subsystem cannot support more critical business applications. As a result, such platforms are typically suited only to testing and development.


  


By contrast, an enterprise cloud computing platform needs to offer choices across different performance tiers. The storage platform should accommodate change when performance requirements change, for example when an application moves from test to production. The ideal storage for enterprise cloud computing has multiple performance tiers that can be tuned to deliver the level of I/O performance the business requires.
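The idea of matching an application to a performance tier can be sketched as a simple policy lookup. The tier names, IOPS figures, and latency figures below are illustrative assumptions, not vendor specifications:

```python
# A minimal sketch of matching an application's I/O requirements to a storage
# performance tier, as when an application is promoted from test to production.
# Tiers are listed cheapest first; all capability numbers are assumed.

TIERS = [
    # (name, max sustained IOPS, typical latency in ms) - illustrative values
    ("sata-jbod", 200, 12.0),
    ("sata-raid10", 1_000, 8.0),
    ("fc-san", 20_000, 2.0),
    ("ssd", 100_000, 0.2),
]

def pick_tier(required_iops, max_latency_ms):
    """Return the first (cheapest) tier that meets both requirements."""
    for name, iops, latency in TIERS:
        if iops >= required_iops and latency <= max_latency_ms:
            return name
    raise ValueError("no tier satisfies the requirement")

print(pick_tier(150, 15.0))      # a test/dev workload
print(pick_tier(5_000, 5.0))     # a production OLTP workload
```

The same application can be re-run through the policy with new requirements when it moves from test to production, which is exactly the tier migration the paragraph above describes.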


  


Finally, to meet high-end enterprise performance requirements, cloud computing solutions must adopt enterprise-class technologies at or above those currently in use, typically FC SAN. How the technology is applied matters as much as the technology itself: in a hypervisor-managed environment, virtual machine configurations must deliver consistent performance at enterprise levels.


  


Element 2: Security


  


Security and virtualization are often seen as conflicting goals. After all, virtualization frees applications from physical hardware and network boundaries, while security is largely about establishing boundaries. Enterprises need to consider security from the initial stages of virtualization design.


  


In most cloud offerings, public and private alike, data security rests on trust, and that trust usually resides in the hypervisor. When virtual machines share physical LUNs, CPUs, and memory, the hypervisor must ensure that data is not corrupted or accessed by the wrong virtual machine. This is the same basic challenge clustered servers have faced for years: any physical server that may need to take over a process must have access to the data, application, and operating system. For example, for off-host backup, a LUN may need to be mapped to a shared backup server.


  


In enterprise cloud computing, there are two areas in which to protect business data. The first is hypervisor security: the main goal is to keep the hypervisor's exposure as small as possible so that no virtual machine can adversely affect another. Organizations also need to protect LUNs that are accessible to other servers, including external backup servers.


  


The second area is the data path. Enterprises must ensure that access paths are granted only to the physical servers that need them to perform necessary functions. This can be done with NPIV (N_Port ID Virtualization), SAN zoning, LUN masking, access lists, and permission configuration.
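The core idea behind LUN masking can be sketched in a few lines. This is not any real array's API; the table, WWPNs, and function name are all illustrative assumptions:

```python
# A minimal sketch of LUN masking: a storage array exports each LUN only to an
# explicit list of initiator WWPNs, so hosts outside the list never even
# discover the device. All identifiers below are made up for illustration.

MASKING_TABLE = {
    # LUN id -> set of initiator WWPNs allowed to see it (assumed values)
    "lun-001": {"10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"},
    "lun-002": {"10:00:00:00:c9:aa:bb:03"},
}

def visible_luns(initiator_wwpn):
    """Return the LUNs this initiator is allowed to discover."""
    return sorted(lun for lun, allowed in MASKING_TABLE.items()
                  if initiator_wwpn in allowed)

print(visible_luns("10:00:00:00:c9:aa:bb:01"))  # sees only its own LUN
print(visible_luns("10:00:00:00:c9:aa:bb:99"))  # unmapped host sees nothing
```

Zoning and access lists apply the same allow-list principle at the fabric and management layers respectively; defense comes from stacking these controls along the whole data path.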


  


Element 3: Automated ILM storage


  


Information Lifecycle Management (ILM) has been the focus of very effective marketing and is beloved by vendors selling tiered storage. While ILM is simple in principle, matching the cost of storage to the business value of the data, the real challenge is execution: many so-called ILM solutions are not fine-grained enough to achieve the goal.


  


Today, traditional ILM is not deployed on most cloud computing platforms on the market, for two reasons. First, in many clouds most of the disk media already sits at what would be the lowest tier of a typical ILM scheme, so there is no lower tier to migrate data to and ILM cannot be deployed. Second, many organizations do not keep data in the cloud long enough to need lifecycle management, because traditional cloud computing is typically used for functional testing and development, proof-of-concept (POC) work, web server testing, and the like. Taken together, the fine-grained complexity and cost of implementing an ILM strategy do not fit the cost-saving economics of such clouds.


  


According to some industry reports, roughly 70% of data is static. Businesses can cut costs by putting the right data on the right media, and having already saved money by deploying cloud platforms, they stand to gain significant further benefit from implementing ILM in the cloud. The premise, however, is that it must not interrupt applications or add unnecessary operational complexity.


  


To do this, organizations must use a policy-based, block-level ILM approach that is independent of access method and application type. By tracking the properties of data at the block level, there is no need for archiving or migration at the operating-system level. This approach is independent of both the operating system and the access method used to reach the data. It optimizes storage cost while maintaining performance (all writes land on the high-speed tier), and it also cuts power consumption by settling unused blocks onto the low-speed tier; this matters because near-line storage consumes only about 20% of the energy of enterprise storage. For truly enterprise-class automated tiered storage, volume-level or file-level migration is not enough: the granularity must be refined to the block level. Only block-level migration is independent of operating system type and data access method, and can therefore support applications appropriately.
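The demotion half of such a policy can be sketched very simply: track when each block was last touched and demote blocks that exceed a coldness threshold. The block IDs, day numbers, and 30-day policy window below are illustrative assumptions:

```python
# A minimal sketch of policy-based block-level tiering: blocks not accessed
# within a policy window are demoted to the low-speed tier, while writes always
# land on the fast tier. All names and thresholds are assumed for illustration.

COLD_AFTER = 30  # days without access before a block is demoted (assumed)

def plan_migrations(block_last_access, today):
    """Given block id -> day of last access, return blocks to demote."""
    return sorted(block for block, last in block_last_access.items()
                  if today - last > COLD_AFTER)

# Hypothetical per-block access metadata (day numbers since an epoch).
last_access = {"blk-17": 95, "blk-18": 10, "blk-19": 60}
print(plan_migrations(last_access, today=100))  # blk-17 stays hot
```

Because the policy operates on block metadata rather than files or volumes, it works identically whatever operating system or access method sits above the storage, which is the property the paragraph above argues for.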


  


Element 4: Storage access modes


  

There are three main ways to access storage: block-based (FC SAN or iSCSI), file-based (CIFS/NFS), and via web services. Block- and file-based access are the most common in enterprise applications and give better control over performance, availability, and security. At present, most cloud computing platforms on the market use web-service interfaces such as SOAP and REST (Representational State Transfer) to access data. Although this is the most flexible approach, it comes at a cost in performance. Ideally, an enterprise cloud provides all three access methods to support different application architectures.


  


Element 5: Availability


  

The maintenance window for IT architectures has shrunk dramatically, as organizations need to support users in different time zones and ensure 24x7 availability. While service-level agreements (SLAs) are generally tied closely to availability, they are difficult to measure from a business standpoint because composite SLAs overlap across multiple parts of the architecture.
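What an availability SLA means in practice, and why composite SLAs are awkward, can be made concrete with a little arithmetic. This is a generic sketch of the standard calculation, not taken from the article:

```python
# A minimal sketch of what an availability SLA implies in allowed downtime, and
# how a composite SLA across serially dependent components ends up lower than
# any single component's SLA - one reason composite SLAs are hard to reason about.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability):
    """Allowed downtime per year for a given availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

def composite_availability(*parts):
    """Availability when every component must be up (serial dependency)."""
    total = 1.0
    for a in parts:
        total *= a
    return total

print(round(downtime_minutes_per_year(0.999), 1))     # "three nines"
print(round(composite_availability(0.999, 0.999), 6)) # two stacked 99.9% parts
```

Two components each promising 99.9% yield roughly 99.8% end to end; every extra layer in the path, including layers outside the internal IT group's control, erodes the effective SLA further.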


  


As with the I/O performance issues discussed earlier, availability has not been a primary design consideration on most cloud computing platforms on the market. If the cloud platform relies on parts of the architecture not managed by the internal IT group, then redundant architecture components and paths are the best way to reduce the risk of downtime. While cloud storage service providers continue to increase availability while weighing cost, the service-level agreements currently offered in the market do not meet the needs of business-critical applications.


  


In high-end enterprise-class cloud computing, storage systems should match internal enterprise-grade storage solutions, including multipathing, redundant controllers, separate fibre networks, RAID technologies, end-to-end architecture control and monitoring, and rigorous change-management processes. In low-end enterprise cloud computing, storage availability is comparable to the service level of today's market cloud platforms. To provide the level of service the enterprise requires, enterprise cloud storage vendors must combine sound architectural design with fully validated innovative technologies.


  


Element 6: Primary data protection


  


Primary data is the data in active, online use. It can be protected by a single technology or by a combination of several. Common methods include RAID protection, multiple copies, remote replication, snapshots, and continuous data protection (CDP).
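Of the protection methods just listed, RAID parity is the easiest to show in miniature. This sketch reduces it to the underlying XOR arithmetic; real RAID operates on disks and stripes, and the block contents here are made up:

```python
# A minimal sketch of the idea behind RAID parity protection: the parity block
# is the XOR of the data blocks, so any single lost block can be rebuilt from
# the survivors. This is just the math, not a storage implementation.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks on three disks
parity = xor_blocks(data)            # stored on the parity disk

# Simulate losing the second disk; rebuild its block from the rest plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```

Multiple copies, replication, and snapshots trade more capacity for simpler recovery; parity spends the least extra capacity but must recompute lost data, which is why the methods are often combined.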


  


On most cloud computing platforms on the market, primary data protection is left to the user. The full enterprise approach is rarely found in popular cloud platforms because of its complexity and cost. Some popular cloud storage schemes protect primary data by maintaining multiple copies of it, while running the whole system on non-RAID-protected storage to reduce cost.


  


Enterprise-class cloud protection of primary data should be modeled on internal enterprise-grade practice. Proven technologies such as snapshots and disaster recovery should be available wherever the scenario's business impact analysis (BIA) calls for them.


  

The main difference between internal enterprise deployments and enterprise cloud storage is how primary data protection is bundled into the offering. To preserve the on-demand experience of cloud deployment, the various protection options must be packaged so that services can be provisioned automatically. A set of bundled options can thus be assembled to meet the bulk of requirements, but there may be no bundle that combines snapshots, remote replication, and so on in exactly the way a particular customer needs. In any case, most users will find that in the enterprise cloud it is often necessary to trade some flexibility for other management benefits.


  


Element 7: Secondary data protection


  


Secondary data consists of historical copies of primary data, that is, backups. This form of protection mitigates data corruption, restores deleted or overwritten data, and retains data for long periods for business or regulatory needs. Typical schemes involve backup software and several types of storage media. Data deduplication may be used, but in a multi-tenant environment it can raise data-isolation problems.


  


Some commercial and public-domain backup schemes have been layered on top of mass-market cloud storage to provide secondary data protection, but vendors of popular cloud computing platforms rarely bundle these with online storage. Although the reasons vary, in many cases the service-level (SLA) issues around recovery time and retention periods are hard to handle.


  


Whether the cloud platform is private or multi-tenant, management tools, visibility, and the service level of recovery are the keys to secondary data protection. Once a recovery request is submitted, the recovery should start directly and automatically. Users should be able to choose a predictable level of recovery performance (GB of data recovered per unit time) and to select the retention period from a short list of options. Finally, users should be able to check the status of their online backups. Because backup frequency and retention periods determine the resources required to store backups, and therefore the cost, customers should be able to observe resource usage and charges online rather than be surprised at billing time.
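The link between frequency, retention, and cost described above can be sketched as a simple capacity model. The workload size, change rate, schedule, and price below are all illustrative assumptions:

```python
# A minimal sketch of how backup frequency and retention period translate into
# stored capacity, and hence cost - the relationship the article says customers
# should be able to see online. All figures are assumed for illustration.

def backup_footprint_gb(full_gb, change_rate, backups_per_week, retention_weeks):
    """One weekly full backup plus incrementals for each extra backup slot."""
    incrementals_per_week = backups_per_week - 1
    weekly_gb = full_gb + incrementals_per_week * full_gb * change_rate
    return weekly_gb * retention_weeks

# Hypothetical workload: 500 GB of primary data, 2% daily change,
# daily backups (7 per week), retained for 4 weeks.
footprint = backup_footprint_gb(500, 0.02, 7, 4)
cost = footprint * 0.05          # assumed $0.05 per GB-month
print(footprint)                 # stored GB
print(round(cost, 2))            # monthly charge
```

Doubling the retention period doubles the footprint in this model, which is exactly the kind of cause-and-effect a usage dashboard should expose before the invoice arrives.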


  


Element 8: Storage flexibility


  

Storage flexibility is the ability to respond to changing business needs by changing storage resources. Ultimately, it depends on the operating system's ability to detect changes in the storage and in how it is accessed.


  


Of the nine elements discussed here, this is the one today's popular cloud computing platforms do best. Most offerings can grow storage incrementally in predetermined increments; releasing space is also typically an option, usually at the volume or mount-point level. As noted above, however, the operating system's ability to respond to storage changes is often the limiting factor.


  

Storage that supports enterprise cloud requirements must scale flexibly and be billed in a way customers understand. Although adding and removing storage space is important, users prefer to pay only for the space they actually use. They also want the ability to make adjustments and generate usage reports through a web-based management interface. This functionality helps them control costs and provides intelligence for business planning.
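The pay-only-for-what-you-use billing described above can be sketched by charging against sampled usage rather than provisioned size. The rate and usage figures are illustrative assumptions:

```python
# A minimal sketch of usage-based billing: charges follow actual daily usage
# samples rather than the size of the provisioned volume, so shrinking usage
# lowers the next bill. Rate and samples are assumed for illustration.

RATE_PER_GB_DAY = 0.002  # assumed $/GB/day

def monthly_charge(daily_usage_gb):
    """Bill on sampled usage; provisioned-but-unused space costs nothing."""
    return sum(gb * RATE_PER_GB_DAY for gb in daily_usage_gb)

# Hypothetical month: the volume is provisioned at 1000 GB, but actual
# usage varies across thirty daily samples.
usage = [200] * 10 + [800] * 10 + [300] * 10
print(round(monthly_charge(usage), 2))   # vs. 1000 GB x 30 days flat-rate
```

The same per-sample records that drive the bill can feed the usage reports the paragraph above calls for, so billing and reporting come from one data source.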


  


Element 9: Storage reporting


  


When companies consider outsourcing all or part of their IT architecture, visibility into the underlying technology is a common concern. Customers need to understand the state of the running environment from both a capacity and a performance perspective. To that end, rich storage reporting through the user management interface becomes necessary, giving customers confidence that their storage is operating efficiently.


  


On today's market cloud platforms, storage-related reporting is the most basic of tools. Many providers offer standard reports, and in some cases basic performance-assessment tools as well, whether built by the provider, shared from a vendor, or sourced from a third party.


  


Here the enterprise cloud has an advantage over traditional enterprise storage: a cloud often standardizes on a single storage vendor's solution, which keeps reporting simple, because the data does not have to be translated, as it would from a multi-vendor platform, to produce a report with a consolidated look and feel. Detailed historical and real-time usage information, together with key performance metrics, should be viewable 24x7 through the user management interface. Ultimately, to ease corporate fears about losing control, cloud providers should offer more comprehensive and accurate reporting, particularly of storage-system usage.


  


Conclusion


  


A robust enterprise cloud should not focus solely on CPU, memory, disk, and IP address allocation; an enterprise-class cloud computing platform strategy should include the nine elements described in this article. In this way, enterprises gain a more complete cloud computing platform to support business operations.


  

