SSDs deliver a huge performance boost to storage systems, offering roughly ten times the throughput of the fastest mechanical hard disks. Because SSD write performance suffers from block-erase latency, however, SSDs are usually deployed as read devices. That does not mean SSDs fail to improve write performance; they do, but write performance is typically only about half of read performance, and it also degrades gradually over the life of the device, which is a performance problem in itself.
SSD technology limits the number of write operations a device can sustain (read operations do not affect the lifespan or reliability of an SSD). The flash chips record data bits in cells, and each time a cell is written, erased, or rewritten, it degrades slightly. After a certain number of program/erase cycles, a cell wears out and can no longer be used. The controller tracks these worn-out cells so the bad blocks are never used again, much as controllers do for traditional hard drives. As cells wear out, the usable SSD capacity shrinks until the drive must be replaced. IT managers invest heavily in SSDs, so they look for ways to maximize their service life. SSD manufacturers provide warranties, and some even over-provision an extra 20% of storage capacity to offset worn-out cells.
SSDs are built with three main cell technologies: multi-level cell (MLC), enhanced multi-level cell (eMLC), and single-level cell (SLC). MLC is the consumer product: it has the lowest manufacturing cost and price, but also the shortest service life, sustaining the fewest write cycles of the three. Enterprise-class SSDs typically use SLC instead, which can sustain about 100,000 write cycles per cell. The third option, eMLC, draws on the strengths of both: it sustains far more write cycles than MLC (on the order of 20,000) at a lower price than SLC. In short, you get what you are willing to pay for.
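The endurance figures above translate directly into an expected drive lifetime. A minimal back-of-the-envelope sketch, assuming a simple model in which lifetime is total writable data (capacity × P/E cycles, reduced by write amplification) divided by the daily write volume; the drive sizes, write rates, and the write-amplification factor are illustrative assumptions, not vendor specifications:

```python
# Rough SSD lifetime estimate from program/erase (P/E) cycle endurance.
# All inputs are hypothetical example values, not vendor data.
def years_of_life(capacity_gb, pe_cycles, daily_writes_gb, write_amplification=2.0):
    """Estimate drive lifetime in years from cell endurance.

    Total writable data = capacity * P/E cycles, discounted by write
    amplification (the controller writes more than the host requests).
    """
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return total_writable_gb / daily_writes_gb / 365

# Example: a 400 GB SLC drive rated for 100,000 P/E cycles,
# absorbing 500 GB of host writes per day.
print(round(years_of_life(400, 100_000, 500), 1))
```

The same function with an MLC-class cycle count shows why write-heavy workloads exhaust consumer drives so much faster.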
Although manufacturers try to mitigate these problems with write caching and sequential-write optimizations, IT managers who deploy SSDs intelligently can maximize their service life. The following are some good practices:
Step 1: Understand the data characteristics of your applications.
Most organizations do not understand the characteristics of the data their applications use. A common approach is to simply deploy the most expensive disks and over-provision capacity for the applications. This is simple and works well enough, but it leads to inefficient capacity management and unnecessary expense. Most storage vendors provide performance-monitoring tools that reveal actual I/O usage. System-level performance metrics may be good enough to size the SSD tier for performance requirements, but they say nothing useful about the service life of SSD devices. Service life is an important part of an SSD's total cost of ownership (TCO), and measuring actual application performance requirements is the only way to get sufficient performance at the lowest TCO.
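Turning raw monitoring output into a usable profile can be as simple as computing a read ratio from the I/O counters. A minimal sketch; the counter values and field names are made up for illustration, and a real tool would sample over time rather than use two totals:

```python
# Sketch: derive a read/write profile from monitored I/O counters.
# The sample numbers below are illustrative, not from a real system.
def io_profile(reads, writes):
    """Summarize sampled I/O counters into a simple workload profile."""
    total = reads + writes
    return {
        "total_ios_sampled": total,
        "read_ratio": reads / total if total else 0.0,
    }

# Example: 90,000 reads vs 10,000 writes over the sampling window.
sample = io_profile(reads=90_000, writes=10_000)
print(sample["read_ratio"])  # 0.9
```

The read ratio computed here is exactly the classification input that the next step uses.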
Step 2: Classify applications by read I/O density.
Once the I/O characteristics of a given application are understood, the next step is to match read-intensive applications with SSDs. The numbers may seem tedious, but using them is the scientific way to make the decision. Only a few applications are read-only, and SSD is undoubtedly the best choice for them. Most applications mix reads and writes, and the read/write ratio is the standard for classification: applications with a high proportion of reads can benefit from SSDs with few side effects.
The complication is that not all read I/O is equal (in fact, when we talk about read operations, from the storage device's perspective they are more about the O than the I). I/O can be random, sequential, or recurring, and performance-monitoring tools cannot tell you which category applies. Random reads are not a problem as long as the SSD can hold all the data. Sequential reads will not benefit from SSD unless all the data is stored on it; moreover, sequential-I/O applications are usually batch jobs that run only occasionally, and unless they generate large I/O bursts under tight time requirements, placing them on SSD is not cost-effective. Deploying SSD for recurring read I/O, where the same data is read over and over, is the golden rule, and that decision needs little further analysis.
Step 3: Allocate SSD storage sensibly.
With the data and projections above, storage managers need to make the most of an expensive and limited resource: SSD capacity should be provisioned precisely where it delivers the most performance benefit. Mission-critical applications do not necessarily have high performance requirements, so giving them the most expensive storage simply because they are important is a misuse of resources.
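One way to make "allocate precisely" concrete is a greedy pass that ranks workloads by benefit per gigabyte and fills the SSD tier in that order. This is a hypothetical sketch, not a method from the article; the app names, sizes, and benefit scores are invented, and the benefit score stands in for whatever measured value Step 2 produced:

```python
# Hypothetical greedy allocation: give limited SSD capacity to the
# workloads with the highest measured benefit per gigabyte.
def allocate_ssd(apps, capacity_gb):
    """apps: list of (name, size_gb, benefit_score) tuples.

    Returns the names of workloads placed on the SSD tier.
    """
    chosen = []
    # Rank by benefit density (score per GB), highest first.
    for name, size, benefit in sorted(apps, key=lambda a: a[2] / a[1], reverse=True):
        if size <= capacity_gb:
            chosen.append(name)
            capacity_gb -= size
    return chosen

# Invented example workloads against a 400 GB SSD tier.
apps = [("oltp-db", 200, 90), ("reports", 500, 20), ("web-cache", 100, 60)]
print(allocate_ssd(apps, 400))  # ['web-cache', 'oltp-db']
```

Note that the important (but low-benefit) "reports" workload is left on cheaper storage, which is exactly the point of this step.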
Step 4: Do not weigh cost more heavily than business value.
Because the IT department is accountable for storage TCO, it is easy to make deployment decisions on cost alone. However, if an application genuinely requires SSD-level performance, it should get SSD even if it does not fit the golden rule for SSD service life. Counting business value as part of TCO quickly changes the calculation.