Direct-Attached Storage: Key to Cloud Infrastructure

Is direct-attached storage obsolete?

Direct-attached storage (DAS) usually refers to storage media installed inside a server, or in an expansion enclosure connected directly to the server. DAS is bound to a specific server through a fixed, direct connection: there is no network between them, and data is read and written directly.

DAS has long been considered an inefficient way to connect servers and storage, and it complicates data protection. Because direct-attached storage cannot be shared, one server often runs out of space while other servers have large amounts of capacity sitting idle and unusable. Without shared storage, there is no way to balance capacity allocation against actual usage requirements.

Data protection under the DAS structure is also relatively complex. With network backup, each server must be backed up separately and all of the data flows across the network. Without network backup, each server needs its own backup software and tape device, and the complexity of the backup process increases dramatically.

Shared storage architectures such as SAN (storage-area network) or NAS (network-attached storage) solve these problems better than direct-attached storage, so DAS appeared to be on its way out. Yet to this day, DAS remains a common pattern for connecting servers and storage. In fact, not only has DAS not been eliminated, it has seen something of a resurgence in recent years. This year, when EMC announced a PCI Express (PCIe) solid-state storage product designed to hold some server-local data, the DAS revival reached a new peak.

The actual performance of SAN and NAS falls short of expectations

One reason the DAS storage structure has persisted is the disappointing performance of SAN and NAS, with a huge gap between expectations and reality. The SAN architecture promised a global pool of storage resources from which capacity could be dynamically allocated to front-end servers in exactly the amounts needed. Yet roughly eight years after SAN technology arrived, we are still far from that goal. To this day, SAN storage must carve out a separate partition for each server. When a server needs more space, a new partition must be created and assigned to it, and the new partition sits alongside the existing ones on the server side. Unfortunately, new and existing partitions are managed independently of one another, so in practice the process of adding storage to a server in a SAN environment is very similar to the old DAS approach.

It was also expected that data protection would be simpler under the SAN architecture. The goal is to back up directly from the SAN environment without having to handle each server separately. At present, however, only a handful of applications deliver this capability; in most cases we can only back up data blindly, without knowing what the backed-up data actually contains. Users quickly realize that they need a technology called "application awareness" that can back up online applications and perform intelligent recovery operations. The precondition is that users must install specific backup software on the server side.
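
To make "application awareness" concrete, the sketch below shows the usual quiesce-snapshot-resume cycle such backup software performs. The freeze_writes/thaw_writes hooks and the storage-cli snapshot command are hypothetical placeholders for whatever the actual application and storage product expose; this is a minimal illustration, not any vendor's implementation.

    import subprocess

    def application_aware_backup(db, volume):
        """Quiesce the application, snapshot its volume, then resume writes."""
        db.freeze_writes()            # hypothetical hook: flush buffers, pause new writes
        try:
            snapshot_volume(volume)   # turns a crash-consistent copy into an
                                      # application-consistent one
        finally:
            db.thaw_writes()          # always resume, even if the snapshot fails

    def snapshot_volume(volume):
        # Placeholder for the array's or volume manager's real snapshot command.
        subprocess.run(["storage-cli", "snapshot", "create", volume], check=True)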

Finally, there is price: SAN and NAS products still cost far more than DAS. Many users choose inefficient direct-attached storage on price grounds rather than efficient shared storage.

Objectively speaking, today's SAN and NAS systems can compensate for the inflexibility of early storage allocation with technologies such as thin provisioning. But they took so long to solve the allocation problem that DAS had plenty of time to gain a firm footing in the data center. In addition, SAN and NAS still have problems that remain unsolved.
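
To illustrate what thin provisioning changes, here is a minimal sketch in which physical blocks are drawn from a shared pool only when a virtual block is first written, rather than being reserved up front. The class, block granularity, and sizes are illustrative assumptions, not a real array's implementation.

    class ThinVolume:
        """Toy thin-provisioned volume: physical blocks are allocated on first write."""

        def __init__(self, virtual_blocks, pool):
            self.virtual_blocks = virtual_blocks  # capacity advertised to the server
            self.pool = pool                      # shared free-block pool (a list)
            self.mapping = {}                     # virtual block -> physical block

        def write(self, vblock, data):
            if vblock not in self.mapping:        # allocate lazily, on first write
                if not self.pool:
                    raise RuntimeError("shared pool exhausted (over-committed)")
                self.mapping[vblock] = self.pool.pop()
            # ... write `data` to the mapped physical block here ...

        def used(self):
            return len(self.mapping)              # actual consumption, not advertised size

    # A 1000-block pool can back several volumes that each *advertise* 1000 blocks;
    # physical space is consumed only as servers actually write.
    pool = list(range(1000))
    vol_a = ThinVolume(1000, pool)
    vol_b = ThinVolume(1000, pool)
    vol_a.write(0, b"data")
    print(vol_a.used(), len(pool))  # 1 block used, 999 left in the shared pool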

Today, the main driving force behind shared storage architectures such as NAS is the rapid growth of server and desktop virtualization: if virtual machine images are to move flexibly between physical hosts, the back end needs shared storage. In a virtualized environment a virtual machine is essentially one large file, so both application-aware backup and off-host backup are feasible, and the backup process does not have to involve the physical host at all. Yet even though shared storage wins many new projects and important use cases, the DAS architecture persists in data center applications, and its value keeps growing.

Booting the system requires DAS

One important reason the DAS architecture is still prevalent in the data center is that systems need a local boot disk. Many SAN environments do support some form of boot-from-SAN, but it requires dedicated host bus adapters (HBAs), and the SAN storage system itself must support the capability. So most physical servers still boot from local DAS storage.

In addition, DAS has benefited from the proliferation of SSDs: booting from local solid-state storage has a clear advantage over booting from the SAN. First, starting or restarting a system from a local SSD takes only seconds. The SSD can also serve as virtual-memory swap space, which is extremely important in virtualized environments. When we load virtual machines onto a host, memory runs out quickly, and the host starts using swap space on local storage. If that local storage is an ordinary hard disk, swap performance suffers badly; with flash SSD, the slowdown is almost negligible. Using an SSD as the boot and swap disk lets a host run more virtual machines without buying expensive memory.
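
Some back-of-the-envelope arithmetic shows why the swap medium matters so much. The latency figures below are rough order-of-magnitude assumptions, not measurements:

    # Rough page-fault service rates for swap on different media.
    # Latencies are order-of-magnitude assumptions, not benchmarks.
    PAGE = 4096  # bytes

    media_latency_s = {
        "HDD (random seek)": 10e-3,    # ~10 ms per random 4 KB read
        "SATA SSD":          0.1e-3,   # ~100 us
        "PCIe SSD":          0.02e-3,  # ~20 us
    }

    for name, lat in media_latency_s.items():
        pages_per_s = 1 / lat
        mb_per_s = pages_per_s * PAGE / 1e6
        print(f"{name:20s} ~{pages_per_s:8.0f} page-ins/s (~{mb_per_s:6.1f} MB/s of 4 KB pages)")

    # HDD: ~100 page-ins/s; PCIe SSD: ~50,000 -- which is why swapping to flash
    # lets a host overcommit memory without its virtual machines grinding to a halt.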

Extending the SAN with DAS

Solid-state storage has also played another important role in the DAS revival: acting as an extension in front of the SAN. With the ultra-high performance of PCIe-based solid-state storage, IT storage architectures are moving toward tiered storage, or toward directly caching the data a server needs on local media. A PCIe SSD can communicate directly with the CPU, unlike a traditional SSD whose performance is constrained by the SAS or SATA protocol. For systems with limited memory, a PCIe SSD is also an ideal virtual-memory swap device, so storage tiering and data caching built on this technology are attracting more and more attention.

With this architecture, the storage system can intelligently place the most active data on the PCIe SSD. When an application or user then requests that hot data, the system can answer the request from the PCIe SSD at the fastest possible speed. Applications and users no longer have to wait while an access request crosses the storage network, is received and processed by the storage system's controller, waits for a disk head to find the right track, and finally returns the requested data or a write acknowledgment back along the same path.
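
A minimal sketch of such a hot-data cache appears below: blocks are promoted into a fixed-size local flash tier once they are accessed often enough, and served from it on later reads. The promotion rule and tier size are illustrative assumptions, not any product's actual algorithm.

    from collections import Counter

    class HotDataCache:
        """Toy server-side flash cache in front of a SAN: serve hot blocks locally."""

        def __init__(self, capacity, promote_after=3):
            self.capacity = capacity            # blocks that fit on the local PCIe SSD
            self.promote_after = promote_after  # accesses before a block is promoted
            self.cache = {}                     # block id -> data held on local flash
            self.heat = Counter()               # per-block access counts

        def read(self, block, read_from_san):
            self.heat[block] += 1
            if block in self.cache:             # hit: no trip across the storage network
                return self.cache[block]
            data = read_from_san(block)         # miss: full round trip to the SAN
            if self.heat[block] >= self.promote_after:
                if len(self.cache) >= self.capacity:
                    # evict the coldest cached block to make room
                    coldest = min(self.cache, key=lambda b: self.heat[b])
                    del self.cache[coldest]
                self.cache[block] = data        # future reads are served locally
            return data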

If all goes as expected, this pattern of extending DAS in front of the SAN will upend the traditional SAN world. SAN storage will become a central repository for less active data, while the server's local PCIe SSD handles the hottest data. SAN storage will be used for long-term retention and backup, and the server will handle the active working set. As a result, SAN designs will become more capacity-oriented, with performance mattering less. However, current PCIe SSD technology has one shortcoming: it cannot serve as the system boot disk, so a SAS hard disk or an ordinary SSD still has to be installed in the server.

DAS, the key to cloud infrastructure

Another key driver of the DAS revival is the design of large-scale data storage environments, including those built by Facebook, Google, and others. Their systems consolidate compute and storage resources on each server, interconnect many servers over a high-speed network, and let each server access data directly from local storage. They even boot the system from a combination of PCIe SSD and hard drive. These online service providers and Internet technology companies chose this design because it is extremely cost-effective, and the system is easy to scale simply by adding servers.
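
Scale-out designs like this need a placement rule that maps each piece of data to one server's local storage while letting servers be added cheaply. The article does not name the mechanism these companies use; one widely used technique is consistent hashing, sketched here.

    import bisect
    import hashlib

    def _h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class ConsistentHashRing:
        """Map data keys onto servers' local storage; adding a server moves little data."""

        def __init__(self, servers, vnodes=100):
            # each server gets many virtual points on the ring for even spread
            self._ring = sorted((_h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))
            self._points = [p for p, _ in self._ring]

        def owner(self, key):
            # the first ring point clockwise from the key's hash owns the data
            i = bisect.bisect(self._points, _h(key)) % len(self._ring)
            return self._ring[i][1]

    ring = ConsistentHashRing(["server-1", "server-2", "server-3"])
    print(ring.owner("vm-image-42"))  # e.g. "server-2": read it from that host's DAS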

In the past, the integrated pattern of DAS storage plus compute was thought to have limited appeal: only companies running huge numbers of online applications would deploy it. Things are different now. Here we have to mention server virtualization again, because virtualized workloads need an infrastructure in which compute and storage capacity can scale together. Some vendors, such as Nutanix, offer server-cluster products with built-in storage that can quickly build a cloud-computing infrastructure and are well suited to traditional data centers.

A server virtualization environment still requires shared storage for features such as virtual machine migration and virtual machine high availability. Under this shared-storage model, data can migrate automatically between nodes within the cluster, so a virtual machine image can run on any node. This "shared DAS" model keeps local storage simple and cost-effective while providing many of the advantages of a SAN architecture.

If DAS represents the future, will the SAN die?

DAS has not merely survived; it is thriving. Many storage industry experts believe the data center storage environment is heading in the direction of the DAS architecture. As described earlier, the future SAN in the data center is a repository for long-term data, while truly active data resides on server-local storage. Data migration management software is now mature enough to keep active data local to the server. In addition, such software can detect local write operations and then synchronize newly written data to the back-end SAN storage.
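
What this describes is essentially write-back behavior: writes land on local DAS first and drain to the SAN in the background. Below is a minimal sketch, with the SAN and local store modeled as plain dictionaries rather than real devices.

    import queue
    import threading

    class WriteBackTier:
        """Toy write-back tier: writes hit local DAS first, then drain to the SAN."""

        def __init__(self, san_store):
            self.local = {}                 # server-local (DAS) copy of hot data
            self.san = san_store            # back-end SAN, modeled as a dict
            self.dirty = queue.Queue()      # blocks written locally, not yet on the SAN
            threading.Thread(target=self._drain, daemon=True).start()

        def write(self, block, data):
            self.local[block] = data        # acknowledged at local-flash speed
            self.dirty.put(block)           # schedule background sync to the SAN

        def read(self, block):
            if block in self.local:         # active data is served locally
                return self.local[block]
            return self.san[block]          # cold data comes from the SAN

        def _drain(self):
            while True:                     # background thread pushes dirty blocks out
                block = self.dirty.get()
                self.san[block] = self.local[block]
                self.dirty.task_done()

    san = {}
    tier = WriteBackTier(san)
    tier.write("blk-7", b"hot data")
    tier.dirty.join()                       # wait until the SAN copy is consistent
    assert san["blk-7"] == b"hot data"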

Experts point to two further reasons for the rise of DAS-based storage architectures: the higher performance requirements of virtualized application environments, and the high performance of SSD solid-state storage. The former pushes applications to keep their data stored locally, while the latter exploits the speed of local data access to avoid the latency introduced by a storage network.

Making good use of a combined approach

As always, storage administrators have plenty of options when facing storage challenges. But the first requirement is an analysis tool that helps you understand and tune the current environment. Before deciding on the next step, do the preparatory work so that the decision is well informed.

If budget or time constraints rule out upgrading the network or storage systems, a shortcut is a strategy that mixes SAN storage with SSD-based DAS. By eliminating the storage-network bottleneck, this scheme maximizes the advantages of SSD and improves overall performance.

If budget is not a problem, then invest in the storage network and shared storage systems; for example, choosing solid-state storage devices heads off future performance concerns. And alongside optimizing the back-end storage system, optimizing the front end matters just as much: using DAS-attached SSDs as the boot disk and memory swap partition rounds out a complete high-performance storage solution.
