With advances in storage technology, solid-state drives (SSDs) play an increasingly important role in enterprise applications. Compared with traditional hard disks, the performance advantages of SSDs are obvious, in both sequential read/write and random access speed. But traditional hard drives are mature products that have served as the mainstream storage medium for decades, backed by a series of proven technologies. So we have to ask: is it really necessary to switch a SQL Server database to solid-state storage? Given the high cost, what return on investment can SSDs actually bring to the enterprise?
Compared with traditional hard drives, SSDs have many attractive features that make them competitive: lower energy consumption, fast random-access reads, and standard disk interfaces (such as SATA). One could say the emergence of SSDs has pushed traditional hard drives toward the end of their road. From a DBA's point of view, the SSD's high read speed is its biggest advantage, because it plays a crucial role in solving I/O bottlenecks.
On the other hand, SSDs are not perfect, especially for SQL Server databases, and the following concerns tend to discourage DBAs. The first is cost: do SSDs deliver good value per unit of data throughput? When dealing with a storage system built from many drives, an enterprise manager cares not only about raw performance but also about how much performance each extra dollar buys. If a cheap commodity hard drive can solve the bandwidth problem, and its performance is acceptable, why choose an SSD? An SSD may cost ten times as much, so you have to ask yourself: will performance also improve tenfold? The answer is often no, in which case a regular hard drive is the better choice.
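The "ten times the price for ten times the performance?" question can be made concrete by comparing cost per unit of capacity and cost per unit of random I/O. The sketch below uses purely illustrative prices and IOPS figures (not measured benchmarks) to show why the answer depends on which metric your workload is bound by:

```python
# Back-of-envelope price/performance check.
# All figures below are illustrative assumptions, not real prices or benchmarks.
hdd_price, hdd_gb, hdd_iops = 100.0, 500, 150    # one enterprise HDD
ssd_price, ssd_gb, ssd_iops = 1000.0, 64, 5000   # one early-generation SSD (10x price)

print(f"HDD: ${hdd_price / hdd_gb:.2f}/GB, ${hdd_price / hdd_iops:.2f}/IOPS")
print(f"SSD: ${ssd_price / ssd_gb:.2f}/GB, ${ssd_price / ssd_iops:.2f}/IOPS")

# With these numbers the SSD is cheaper per random IOPS but far more
# expensive per gigabyte -- so a capacity-bound workload favors HDDs,
# while only a strongly IOPS-bound workload can justify the SSD premium.
```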
In a 2009 paper from Microsoft Research, "Migrating Server Storage to SSDs: Analysis of Tradeoffs," the analysts concluded that solid-state drives were not the best choice for any of the server scenarios they tested in the near term. Only a 3- to 3,000-fold improvement in SSD capacity per dollar, they wrote, would make SSDs a real substitute for traditional hard drives; the value of SSDs as an intermediate cache layer was also very limited, with fewer than 10% of the tested workloads showing a payoff from SSDs. SQL Server databases did not fall within that 10%. An SSD test of a 5,000-user Microsoft Exchange Server (which uses an embedded database) likewise showed that investing in solid-state drives was not worthwhile.
Another problem with SSDs is reliability, specifically write endurance: can the flash memory cells withstand repeated write operations? The same debate arose when USB flash drives entered the market, but here we are concerned with enterprise-class applications, which differ from personal consumer electronics: total enterprise I/O volumes are far larger than an individual consumer's, especially for I/O-intensive applications such as databases. The importance of data to an enterprise needs no elaboration, and stability matters most; nobody wants the price of adopting a new technology to be strapping their data to a time bomb.
Of course, if we compare reality with theory, the long-term wear problem turns out to be less serious than it sounds, and good design can alleviate it to a large extent. Zsolt Kerekes, an SSD market analyst, studied the problem personally and concluded that on a well-designed flash SSD you might need to fill the entire drive before the problem appears. So even write-heavy applications such as databases do not pose a real threat to SSD longevity.
In view of the above, long-term write activity will not cause much trouble within the service life of current SSD technology; by the time wear could become an issue, newer, faster, higher-capacity, more energy-efficient models will already have taken over the market.
The SSD market is certainly evolving, but prices will not fall much in the short term. So if you are planning to spend tens of millions of dollars on solid-state drives for your database system, it may be better to spend the same money on other database hardware, such as adding memory to reduce the I/O load; that is often more cost-effective than buying solid-state drives. If real-world I/O is still excessive after that, it is not too late to decide to buy SSDs.
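The "add memory first" argument comes down to simple arithmetic: every logical read that hits the buffer cache is a physical read that never reaches the disk. A minimal sketch, with hit rates and read volumes that are assumptions chosen only to illustrate the arithmetic:

```python
# Illustrative sketch: how extra buffer-pool memory cuts physical I/O.
# The workload figure and hit rates are assumptions, not measurements.
logical_reads_per_sec = 10_000

def physical_reads(hit_rate):
    """Reads that miss the buffer cache and fall through to disk."""
    return logical_reads_per_sec * (1.0 - hit_rate)

before = physical_reads(0.90)   # current buffer pool
after = physical_reads(0.99)    # after adding RAM

print(f"physical reads/s: {before:.0f} -> {after:.0f}")
# Raising the cache hit rate from 90% to 99% cuts disk reads tenfold,
# which may remove the I/O bottleneck without touching the storage layer.
```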
James Hamilton, an engineer from Microsoft, published a series of formulas to help users calculate the cost of buying SSDs and so determine the return on investment of replacing a storage device. In these formulas (reference link), he uses a database server as the test case; his analysis showed that random I/O shuttling back and forth across the disks was the dominant I/O cost, which motivated replacing the original storage with SSDs. But, as before, return on investment proved to be the biggest problem: plugging his scenario into his own formulas shows that it was not a good candidate for an SSD replacement.
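A calculation in the spirit of Hamilton's formulas can be sketched as follows. The key idea is that a drive array must satisfy both a capacity requirement and an IOPS requirement, so you buy whichever drive count is larger, then compare total cost. All device figures and the workload below are placeholders, not Hamilton's published numbers:

```python
# Simplified sketch of a drive-replacement ROI calculation.
# Device specs, prices, and the workload are illustrative assumptions.
import math

def drives_needed(capacity_gb, iops, drive_gb, drive_iops):
    """An array must meet BOTH requirements; the tighter one wins."""
    return max(math.ceil(capacity_gb / drive_gb),
               math.ceil(iops / drive_iops))

required_gb, required_iops = 2000, 8000   # hypothetical database workload

hdd_count = drives_needed(required_gb, required_iops, drive_gb=500, drive_iops=150)
ssd_count = drives_needed(required_gb, required_iops, drive_gb=64, drive_iops=5000)

hdd_cost = hdd_count * 100     # assumed $100 per HDD
ssd_cost = ssd_count * 1000    # assumed $1000 per SSD

print(f"HDD array: {hdd_count} drives, ${hdd_cost}")
print(f"SSD array: {ssd_count} drives, ${ssd_cost}")
# Here the HDD count is driven by IOPS (8000/150 -> 54 drives) and the
# SSD count by capacity (2000/64 -> 32 drives), yet the HDD array is
# still far cheaper -- the ROI flips only as the workload mix changes.
```

With these assumed numbers the spinning-disk array wins on total cost despite needing more drives, which mirrors the article's point that Hamilton's scenario did not justify the switch.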
Although SSDs have developed rapidly and are a strong technical replacement for traditional hard drives, their high cost in enterprise applications, and especially in database environments such as SQL Server, is still far from negligible. They become a qualified substitute only when the workload justifies them or the price comes down. Before spending a large sum, it is better to run the numbers with a formula: the money might be better used elsewhere.