In most recent discussions of cloud computing, storage is treated as little more than an underlying platform. Today, many cloud offerings provide only a collection of CPU cores, fixed memory allocations, low-speed storage, and an Internet-facing IP address. Recently, however, there have been interesting advances at the intersection of cloud computing and storage, particularly Web services access, which means that storage is no longer reachable only through device files or NFS mount points.
The "Enterprise-class features" of typical data storage and management are constantly being pushed into new IT architecture innovations. Storage architects are aware that these features are important for critical business and production applications, but the current cloud computing lacks these features. The goal of this white paper is to describe the 9 essential elements of storage in enterprise cloud computing.
Element 1: Performance
Performance comes at a cost. In a well-architected application, performance and cost are in balance. The key is using appropriate technology to match the performance needs of business applications, which first requires translating the enterprise's business requirements into IT terms. Because this translation is difficult, businesses often find themselves stuck with static IT architectures, unable to cope with changing business performance requirements. Enterprise cloud computing provides a platform that is more responsive to changing performance requirements.
On early cloud computing platforms, storage I/O generally suffers from high latency. Vendors emphasize that data in the cloud is easier to access, but pay little attention to the service levels associated with performance, bandwidth, and IOPS. There are two reasons for the higher latency: the mode and type of access, and the configuration of the underlying storage.
Access patterns involve a stack of protocols (such as SOAP, NFS, TCP, IP, and FCP) layered above the physical layer of the OSI model. Data access over a shared physical layer (such as Ethernet) with several protocol tiers on top, such as SOAP or NFS, typically incurs more latency than a specialized physical layer such as FC. Most cloud computing platforms on the market route data access over the Internet, adding further latency.
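As a rough, self-contained illustration of this layering cost, the Python sketch below times small HTTP GETs against a placeholder object URL (storage.example.com is hypothetical, not a real endpoint); the point is only that every Web-service read pays for TCP setup, HTTP framing, and network round trips that a local block-device read would not.

```python
import time
import urllib.request

# Hypothetical object-storage endpoint; substitute a real URL to experiment.
OBJECT_URL = "http://storage.example.com/bucket/small-object"

def timed_get(url: str) -> float:
    """Return the wall-clock seconds for one HTTP GET of the object."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()  # drain the body so the full transfer is measured
    return time.perf_counter() - start

if __name__ == "__main__":
    samples = [timed_get(OBJECT_URL) for _ in range(20)]
    # Each sample includes connection setup, protocol framing, and network
    # round trips, overhead that a direct block read would not incur.
    print(f"min {min(samples)*1e3:.1f} ms, "
          f"avg {sum(samples)/len(samples)*1e3:.1f} ms")
```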
As for storage media, most cloud computing offerings on the market use SATA disks in a RAID or JBOD configuration. Because SATA (long regarded as a near-line disk class) generally performs below enterprise-class disks (commonly referred to as FC disks), the storage device often falls short of application requirements.
When a relatively low-bandwidth, high-latency access mode is combined with low-performance storage media, the resulting storage subsystem cannot support the needs of more critical business applications. As a result, such platforms are typically used only for testing and development.
In contrast, an enterprise cloud computing platform needs to offer a choice of performance tiers. The storage platform should be able to respond when performance requirements change, for example when an application migrates from test to production. Ideal storage for enterprise cloud computing has multiple performance tiers that can be tuned to deliver the level of I/O performance the business requires, as the sketch below suggests.
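A minimal sketch of what such tier selection might look like; the tier names and SLA figures are invented for illustration, not vendor specifications:

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    max_latency_ms: float  # worst-case latency the tier can promise
    max_iops: int          # sustained IOPS ceiling

# Hypothetical tiers, listed cheapest-first; figures are illustrative only.
TIERS = [
    Tier("sata-jbod", max_latency_ms=20.0, max_iops=2_000),
    Tier("fc-raid10", max_latency_ms=5.0,  max_iops=20_000),
    Tier("ssd-cache", max_latency_ms=1.0,  max_iops=100_000),
]

def select_tier(required_iops: int, required_latency_ms: float) -> Tier:
    """Pick the cheapest tier that meets the requested service level."""
    for tier in TIERS:
        if tier.max_iops >= required_iops and tier.max_latency_ms <= required_latency_ms:
            return tier
    raise ValueError("no tier satisfies the requested service level")

# Test/dev tolerates slow storage; production demands an enterprise tier.
print(select_tier(required_iops=500, required_latency_ms=50).name)    # sata-jbod
print(select_tier(required_iops=15_000, required_latency_ms=5).name)  # fc-raid10
```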
Finally, to meet the high-end storage performance requirements of the enterprise, cloud computing solutions must adopt enterprise-class technologies at or above those currently in use; FC SANs are the common choice. In addition, how the technology is deployed matters as much as the technology itself: in a hypervisor-managed environment, virtual machine configurations must be able to deliver sustained performance at enterprise service levels.
Element 2: Security
Security and virtualization are often seen as conflicting. After all, virtualization frees applications from physical hardware and network boundaries, while security is, in essence, about establishing boundaries. The enterprise therefore needs to consider security from the initial architecture of the virtualization design.
In most cloud offerings, public and private alike, data security rests on trust, and that trust is usually placed in the hypervisor. When multiple virtual machines share physical LUNs, CPUs, and memory, the hypervisor must ensure that data is neither corrupted nor accessed by the wrong virtual machine. This is the same basic challenge that clustered servers have faced for years: any physical server that may need to take over a process must have access to the data, the application, and the operating system. For example, to back up a host externally, a LUN may need to be mapped to a shared backup server.
In enterprise cloud computing, there are two ways to protect business data. The first concerns the security of the hypervisor itself: the goal is to keep its footprint and privileges as small as possible, so that no virtual machine can adversely affect another. Organizations also need to protect LUNs that are accessible to other servers, including external backup servers.
The second area to watch is the data path. Enterprises must ensure that access paths are granted only to the physical servers that need them to maintain the required functionality. This can be enforced with NPIV (N_Port ID Virtualization) zoning on the SAN, LUN masking, access lists, and permission configuration.
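As a minimal sketch of the LUN-masking idea (the WWPNs and LUN IDs below are invented for illustration), the array-side check conceptually reduces to an explicit allow list:

```python
# Hypothetical masking table: initiator WWPN -> set of LUNs it may see.
# Real arrays keep this in controller firmware; this only models the check.
MASKING_TABLE: dict[str, set[int]] = {
    "10:00:00:90:fa:12:34:56": {0, 1, 2},  # production host
    "10:00:00:90:fa:ab:cd:ef": {7},        # backup server: backup LUN only
}

def may_access(initiator_wwpn: str, lun: int) -> bool:
    """Allow I/O only if the initiator is explicitly masked to the LUN."""
    return lun in MASKING_TABLE.get(initiator_wwpn, set())

assert may_access("10:00:00:90:fa:12:34:56", 1)      # mapped -> allowed
assert not may_access("10:00:00:90:fa:ab:cd:ef", 1)  # unmapped -> denied
assert not may_access("10:00:00:90:fa:00:00:00", 0)  # unknown initiator
```

Deny-by-default is the design point here: an initiator absent from the table sees nothing, which mirrors how zoning and masking limit the data path to exactly the servers that need it.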
Element 3: Automated ILM storage
Information Lifecycle Management (ILM) has been the subject of intense marketing and is heavily promoted by vendors who sell tiered storage. While the idea behind ILM is simple (match the cost of storage to the business value of the data), the real challenge lies in execution, and many so-called ILM solutions are not fine-grained enough to achieve this goal.
Today, traditional ILM is not deployed on most cloud computing platforms on the market, for two reasons. First, in many clouds the disk media already sits at the lowest tier of a typical ILM hierarchy, so there is no lower tier to migrate data to, and ILM cannot be deployed. Second, many organizations do not need long-term data management in the cloud, because traditional cloud computing is typically used for functional testing and development, proof-of-concept (POC) work, Web server testing, and the like. Taken together, the fine-grained complexity and cost of implementing an ILM strategy do not fit the cost-saving economics of such clouds.
According to some industry reports, 70% of data is static. Businesses can cut costs by storing the right data on the right media, and for enterprises that deploy cloud platforms to save money, the economic benefits of implementing ILM in the cloud are significant. The precondition is that it must not interrupt applications or add unnecessary operational complexity.
To achieve this, organizations should use a policy-based, block-level ILM approach that is independent of access method and application type. By tracking data properties at the block level, there is no need to archive or migrate data at the operating system level, and the approach is independent of both the operating system and the method used to access the stored data. It optimizes storage cost while preserving performance (all writes land on the high-speed tier), and it reduces power consumption by settling unused blocks onto the low-speed tier; this matters because near-line storage consumes only about 20% of the energy of enterprise storage. Truly enterprise-class automated tiered storage therefore cannot be achieved with volume-level or file-level migration: the granularity must be refined to the block level. Only block-level migration is independent of operating system type and of the way stored data is accessed, providing the support applications actually need. A minimal sketch of such a policy appears below.
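The sketch below is a toy model of block-level tiering; the block IDs and the 30-day coldness threshold are invented for illustration. A real implementation lives inside the array controller and tracks access per extent in metadata, but the policy logic is conceptually similar:

```python
import time

DAY = 86_400
COLD_AFTER = 30 * DAY  # illustrative threshold: demote blocks idle 30+ days

# Toy metadata: block ID -> tier and last access time. Real systems keep
# this per-extent inside the storage controller, not in application code.
blocks = {
    "blk-001": {"tier": "fast", "last_access": time.time()},             # hot
    "blk-002": {"tier": "fast", "last_access": time.time() - 90 * DAY},  # cold
}

def write(block_id: str) -> None:
    """All writes land on the fast tier, as the text describes."""
    blocks[block_id] = {"tier": "fast", "last_access": time.time()}

def demote_cold_blocks(now=None) -> list:
    """Settle blocks unreferenced for COLD_AFTER seconds onto the slow tier."""
    now = now if now is not None else time.time()
    demoted = []
    for block_id, meta in blocks.items():
        if meta["tier"] == "fast" and now - meta["last_access"] > COLD_AFTER:
            meta["tier"] = "slow"
            demoted.append(block_id)
    return demoted

print(demote_cold_blocks())  # ['blk-002']: only the idle block moves down
```

Because the policy keys on block metadata alone, it never needs to know which operating system or application wrote the data, which is exactly why block-level granularity stays access-method independent.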
Element 4: Storage access methods
There are three main ways to access storage: block-based (FC SAN or iSCSI), file-based (CIFS/NFS), and Web services. Block- and file-based access are the most common in enterprise applications and give better control over performance, availability, and security. At present, most cloud computing platforms on the market rely on Web service interfaces such as SOAP and REST (Representational State Transfer) to access data. While this is the most flexible approach, it comes at a cost in performance. Ideally, the enterprise cloud provides all three access methods to support different application architectures, as the sketch below illustrates.
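As a minimal illustration, assuming hypothetical device, mount-point, and URL names, the same bytes might be read through each of the three interfaces like this:

```python
import urllib.request

# 1. Block access: read raw sectors from a SAN-attached device.
#    The device path is hypothetical and typically requires root privileges.
with open("/dev/sdb", "rb") as dev:
    first_sector = dev.read(512)

# 2. File access: read a file over an NFS/CIFS mount (hypothetical path).
#    The mount hides the protocol; the application sees an ordinary file.
with open("/mnt/nfs/share/report.dat", "rb") as f:
    data = f.read()

# 3. Web services access: fetch an object over REST (hypothetical URL).
#    Most flexible, but every read traverses HTTP/TCP and the network.
with urllib.request.urlopen("http://storage.example.com/bucket/report.dat") as resp:
    data = resp.read()
```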