Three ways to deploy an SSD architecture


As this article explains, many users today struggle to sort out the types of solid-state drives (SSDs) on the market. This analysis of SSD architecture introduces the three main ways to deploy SSD: inside the array, inside the server, and as a standalone SSD appliance. Each has its pros and cons, including latency issues and performance levels.





Few new technologies can simultaneously improve performance and reduce costs, but SSD achieves exactly that, and most of the major storage vendors are now offering a full range of SSD products. Solid-state storage can be deployed in three forms: array-based SSD is generally deployed inside a storage area network (SAN); server-based SSD is typically deployed at the front end of the SAN; and SSD appliances can be deployed anywhere in between. The deployment option you choose also determines which drawbacks you accept and which problems you can solve. Understanding these nuances avoids over-investment and the overhead that comes with it.





Out of enthusiasm for the technology, or simply a lack of familiarity with the alternatives, IT managers may be inclined to choose the simplest deployment: array-based SSD. In many cases array-based SSD is the best solution, but if you do not understand the details of host-based SSD and SSD appliances, you may miss the option that best fits your existing environment.





The key factor in deciding on an SSD architecture is identifying the bottlenecks and delays that affect application performance. For any SSD technology, raw data-access speed approaches that of memory. Although I/O throughput varies greatly from device to device, that is a function of the device's design, not of its location in the SAN. SSD latency is typically measured in microseconds, while network and hard-drive latency is measured in milliseconds, so where you place the SSD is the key to performance optimization.
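As a rough illustration of why placement matters, the sketch below compares effective read latency for the same flash media in different locations. All figures are assumed order-of-magnitude values chosen for illustration, not measurements from any product.

# Illustrative latency-budget sketch; every figure is an assumed
# order-of-magnitude value, not a measurement from any product.
LATENCIES_US = {
    "flash_media_read": 100,      # ~100 microseconds for a flash read
    "hdd_read": 8_000,            # ~8 ms for a random hard-drive read
    "san_round_trip": 500,        # ~0.5 ms of fabric/controller overhead
    "wan_round_trip": 30_000,     # ~30 ms to a remote or cloud data center
}

def effective_read_us(*components: str) -> int:
    """Sum the latency components a single read has to traverse."""
    return sum(LATENCIES_US[c] for c in components)

print("Local PCIe SSD read:", effective_read_us("flash_media_read"), "us")
print("SSD behind the SAN :", effective_read_us("san_round_trip", "flash_media_read"), "us")
print("HDD behind the SAN :", effective_read_us("san_round_trip", "hdd_read"), "us")
print("HDD in a cloud DC  :", effective_read_us("wan_round_trip", "hdd_read"), "us")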





Below is a summary of the three types of SSD deployments:





Array-based SSD deployment approach





Array-based SSD is usually deployed as a separate logical tier within the array, called tier 0. Because it sits inside the array, it connects directly to the storage backplane. Data migration between tiers depends on hard-drive latency, drive throughput, and backplane latency; of these, the hard drive's I/O throughput is the most significant. Many factors determine the final I/O throughput, but for this discussion we care less about the terminology than about the end result, which shows up as latency. In the vast majority of enterprise arrays the backplane itself does not constrain data-access latency, because most vendors' architectures are built to keep up with the drives' performance requirements.





Automated storage tiering (AST) software applies sophisticated algorithms to determine when data becomes active and moves it from the lower tiers to SSD. The migration itself adds load on all of the hard drives involved, but it is a one-time cost. After that, frequently accessed data is read from SSD with microsecond-level latency.
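To make the idea concrete, here is a minimal, hypothetical sketch of the kind of promotion rule AST software applies. Real products use far richer heuristics; the names and threshold below are assumptions, not any vendor's actual behavior.

# Minimal, hypothetical sketch of an automated-tiering promotion rule.
# Real AST software is far more sophisticated; names/threshold are assumed.
from collections import Counter

PROMOTE_THRESHOLD = 100   # reads per sampling window (assumed value)
access_counts = Counter() # extent_id -> reads observed in this window
tier0 = set()             # extents currently promoted to the SSD tier

def record_read(extent_id: str) -> None:
    access_counts[extent_id] += 1

def rebalance() -> None:
    """At the end of a window, promote hot extents and demote cold ones."""
    for extent_id, reads in access_counts.items():
        if reads >= PROMOTE_THRESHOLD:
            tier0.add(extent_id)          # one-time migration cost paid here
        else:
            tier0.discard(extent_id)      # cold data falls back to hard drives
    access_counts.clear()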





However, even when media read latency at the back end of the SAN architecture drops to the microsecond level, it is difficult to achieve microsecond-level latency across a SAN or WAN. Many factors affect end-to-end latency, but for reads the network is currently the most important bottleneck. Roughly speaking, this deployment solves only about half of the millisecond-level latency problem.




The best application of array-based SSD is probably general-purpose performance improvement. Automated tiering software, however, selects data largely on the basis of I/O activity and cannot be optimized for specific applications. What this approach offers is an improvement in overall data-access speed with simple deployment and management.





Server-based SSD deployment approach





Server-based SSD deployments are becoming increasingly popular. This approach typically places a PCI Express (PCIe) flash card inside the server, and both server vendors and storage vendors now offer such products. In principle it is comparable to giving the processor direct access to a very large cache, but it is provisioned and managed more like storage.





Moving data onto a server-based SSD is no more complex than in the other SSD deployments: data is placed on the SSD based on access patterns or location. If the data originally lives on a SAN device, the first read is still bounded by SAN and hard-drive latency; as before, this is a one-time cost. After that, the data is fetched directly inside the server and no longer has to cross the SAN, so the millisecond-level problem is eliminated entirely.
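The behavior just described is essentially a read-through cache. The sketch below is a hypothetical illustration under that assumption; san_read() merely stands in for whatever path actually reaches the back-end array and is not a real API.

# Hypothetical read-through cache illustrating server-side PCIe SSD behavior.
# san_read() is a stand-in for the slow path through the SAN to the array.
import time

local_ssd_cache: dict[str, bytes] = {}   # stands in for the PCIe flash card

def san_read(block_id: str) -> bytes:
    time.sleep(0.008)                    # simulate ~8 ms of SAN + disk latency
    return b"data-for-" + block_id.encode()

def read_block(block_id: str) -> bytes:
    if block_id in local_ssd_cache:      # later reads: served locally
        return local_ssd_cache[block_id]
    data = san_read(block_id)            # first read: pays the SAN/disk cost once
    local_ssd_cache[block_id] = data
    return data

The first call to read_block() for a given block pays the simulated SAN penalty; repeated calls return immediately from the local cache, which mirrors the one-time cost described above.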




The best case for deploying SSD at the front end of the SAN is a large body of fairly static data that is called on repeatedly, such as a database index or even the entire database. This type of deployment can reduce data-access latency by up to 90%. Although some automated storage tiering software can migrate data from the array to the PCIe SSD, frequently shuttling data between the tiers is likely to reintroduce severe millisecond-level latency; in those cases an array- or appliance-based solution may be more appropriate.





SSD appliance deployment approach




An SSD appliance is an SSD array in an expansion cabinet. Its primary advantage is that it can be deployed on either the server side or the array side, depending on where the latency lies. Deployed near the servers, an appliance can serve boot traffic in network-boot environments and largely resolve concurrent-boot problems, and SSD appliances are also well suited to file services in clustered or virtualized environments. Placing the appliance close to the servers eliminates most of the network latency; some network latency may remain, but it is very small because of the short distance. Data that still has to come from the traditional arrays, however, continues to incur millisecond-level latency in the SAN and at the hard drives.




The second way to use an SSD appliance is to deploy it at the other end of the SAN, near the traditional arrays. Deployed this way, the appliance acts as a consolidated SSD tier for virtualized storage. Unlike a separate SSD tier in each array, an SSD appliance can serve as tier 0 for an entire cluster of arrays. This improves the performance of the whole virtualized storage environment, in which logical volumes (LUNs) are spread across different physical arrays and data can migrate dynamically between systems, so back-end data management does not affect data-access performance on tier 0.





The third way to use the appliance is in a hybrid cloud deployment at the data center. Because of metro-network distances and hard-drive latency, accessing data from a cloud data center carries very high latency, and cloud deployments often use high-capacity, high-latency hard drives to minimize cost. By placing an SSD appliance in the local data center, frequently accessed data can be kept close to the users, with far lower latency than leaving all of the data at the cloud provider. Latency still occurs whenever data must be fetched from the cloud arrays, but this deployment can nonetheless significantly improve overall performance.
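A back-of-the-envelope estimate shows why keeping hot data on a local appliance pays off even though cloud misses stay expensive. The figures below are assumptions chosen only to show the shape of the trade-off, not measurements.

# Illustrative effective-latency estimate for a local SSD appliance in front
# of cloud storage. All numbers are assumptions for illustration only.
LOCAL_APPLIANCE_MS = 0.2    # hit served by the on-premises SSD appliance
CLOUD_ARRAY_MS = 40.0       # miss served across the WAN from cloud hard drives

for hit_ratio in (0.0, 0.5, 0.8, 0.95):
    effective = hit_ratio * LOCAL_APPLIANCE_MS + (1 - hit_ratio) * CLOUD_ARRAY_MS
    print(f"hit ratio {hit_ratio:.0%}: effective read latency {effective:.1f} ms")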




The fourth use is to improve the overall throughput of aging storage. Adding SSD inside an old array is rarely cost-effective, but by deploying an SSD appliance at the front end of the old array an organization can significantly improve data access and extend the life of its existing equipment, at a cost far below that of buying all-new hardware.





Taken as a whole, the large number of SSD solutions on the market today makes it possible to optimize I/O performance in the way that best fits the application's needs without over-investing. Deployment characteristics vary from product to product, but most vendors provide guidance on the best applications. Starting with a latency analysis will help you deploy an SSD storage architecture that matches your actual demand.
