So far, I have been successfully running my master Informix development database server on SSD for more than a year. The development server is equipped with 12 CPU cores, 24 GB of RAM, 5 TB of disk space, and a 512 GB SSD. I used this server to develop the Informix Data Warehouse project and to run benchmark tests on the OLTP version. After purchasing the server last year, I began experimenting with SSD: I migrated the primary Informix dbspaces from a mirrored disk drive to an SSD and tested two Informix servers, one running version 11.50 and the other version 11.70. Both versions ran faster on SSD. Because I used symbolic links to the dbspaces, I could copy the spaces back and forth between the regular disk drive and the SSD, reset the links, and restart the server to compare performance. SSD performance was so much better that I no longer want to use the old disk drives. The SSD has also been very stable, so I now recommend SSD even for key production systems.
For many years, disk drives have been a bottleneck in database performance. Disks keep getting bigger and people keep storing more and more data on them, but drive speed has not improved at the same pace. As the volume of data per disk grows, throughput becomes a serious problem. Traditionally, the best way to accelerate disk I/O has been to distribute data across multiple disk drives and store less data on each one. SSD changes everything. I believe the disk drive will eventually fade away the same way the tape drive did: everyone will run their systems on SSD, and disk drives will survive only as backup media. Across laptops, desktops, and servers, SSD is becoming more and more common.
A hybrid disk system is an interesting invention that combines SSD with traditional hard disks. Arrays offered by some vendors use SSD as a smart cache in front of a set of traditional disk drives: all data pages live on the traditional disks, while the most frequently used pages are cached on SSD. One of my clients has had great success with such an array, achieving significant performance improvements. Another hybrid trend pairs an SSD directly with a large disk drive and presents the pair as a single cached disk; both Seagate and OCZ have shipped hybrid disk drives or cards. Although the hybrid approach improves performance, it is not as fast as pure SSD, and I see it only as a stopgap. As SSD production increases and costs fall, cost-effectiveness will improve and the hybrid approach will no longer be necessary.
Setting Up the Benchmark Tests
To write this article, I decided to start from scratch and run benchmarks measuring the performance improvement SSD can provide. I wanted to stress-test sequential reads, batch sequential reads, and random reads and writes. I reinstalled the latest version of Informix, 11.70.FC4, and recreated my Informix environment from a brand-new installation using scripts. The same ONCONFIG file was used for every test. I switched between the traditional disk drive and the SSD by changing the symbolic links to the Informix dbspaces. The traditional disk drive I used was a common high-speed mirrored pair of the kind found in most production systems today. The results are shown in the table in Figure 1.
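The link-swap technique can be sketched as follows. This demo uses throwaway paths in a scratch directory; the directory layout and chunk names are illustrative, not from the original setup:

```shell
# Demo of swapping an Informix chunk between disk and SSD via a symbolic link.
# All paths below are stand-ins created in a temporary directory.
BASE=$(mktemp -d)
mkdir -p "$BASE/disk" "$BASE/ssd" "$BASE/ifmxdev"
echo "chunk data" > "$BASE/disk/rootdbs.chunk"      # chunk on the mirrored disk

# Informix is configured with the link path as its chunk:
ln -s "$BASE/disk/rootdbs.chunk" "$BASE/ifmxdev/rootdbs"

# To test the SSD: stop the server, copy the chunk, repoint the link, restart.
cp "$BASE/disk/rootdbs.chunk" "$BASE/ssd/rootdbs.chunk"
ln -sf "$BASE/ssd/rootdbs.chunk" "$BASE/ifmxdev/rootdbs"
readlink "$BASE/ifmxdev/rootdbs"                    # now points at the SSD copy
```

Because the chunk path in the Informix configuration never changes, the same server can be timed against either device simply by resetting the link.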
| Benchmark test | Mirrored disk device | SSD device | SSD result as % of disk |
| --- | --- | --- | --- |
| 1. Configure dbspaces and logs | 47 min 24.986 sec | 30 min 38.419 sec | 64.60 |
| 2. Import the databases | 82 min 18.984 sec | 67 min 40.300 sec | 82.20 |
| 3. Benchmark 2: batch compute jobs | 3 min 21.258 sec | 2 min 35.227 sec | 77.11 |
| 4. Benchmark 4: OLTP with 1,000 users | 5,272 tpmC | 116,849 tpmC | 2,216.41 |
| 5. Data warehouse query benchmark | 4 hr 19 min | 3 hr 14 min | 74.90 |
| Average overall | | | 503.05 |

Figure 1. Comparison of benchmark results on SSD versus a mirrored disk drive
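As a sanity check, the percentage column in Figure 1 can be reproduced from the raw numbers; this small script is mine, not part of the original benchmark, and it matches the published figures to within rounding:

```python
# Recompute the "SSD as % of disk" column of Figure 1 from the raw results.
# For timed tests the ratio is ssd_time / disk_time (lower time is better);
# for the OLTP test it is ssd_tpmC / disk_tpmC (higher throughput is better).
results = [
    ("Configure dbspaces and logs", 47 * 60 + 24.986,   30 * 60 + 38.419),
    ("Import the databases",        82 * 60 + 18.984,   67 * 60 + 40.300),
    ("Batch compute jobs",           3 * 60 + 21.258,    2 * 60 + 35.227),
    ("OLTP with 1,000 users",        5272,              116849),
    ("Data warehouse queries",       4 * 3600 + 19 * 60, 3 * 3600 + 14 * 60),
]

percentages = []
for name, disk, ssd in results:
    pct = 100.0 * ssd / disk
    percentages.append(pct)
    print(f"{name}: {pct:.2f}")

print(f"Average: {sum(percentages) / len(percentages):.2f}")  # ~503.05
```

The 503.05 "average overall" figure is simply the mean of the five per-test percentages.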
The tasks in each benchmark are described below.
1. Configure dbspaces and logs. The first test measures the time it takes to create all dbspaces and logical logs, move the physical log to its own dbspace, and rebuild the benchmark environment. Informix was set to use direct I/O on cooked files in both tests. In the past I had simply copied dbspaces between the SSD and the traditional disk drive; for this test I rebuilt the entire environment on both systems.
2. Import the databases. The next task runs dbimport for every database used, on each configuration. This task stress-tests the write performance of the drive.
3. Benchmark 2: batch compute jobs. This benchmark was used in the Fastest DBA contest at the IIUG Informix User Conference on April 9, 2009, and I have described it in earlier articles. Its target is disk I/O, so I did not optimize the SQL to reduce the number of reads and writes.
4. Benchmark 4: OLTP with 1,000 users. This benchmark was used in the Fastest Informix DBA contest at the 2011 IIUG Informix User Conference. Advanced DataTools sponsors the contest to find and reward the fastest Informix DBA, and it has been one of the most interesting events of the past three Informix conferences. The test uses the open source BenchmarkSQL Java program to generate 1,000 sessions that insert, update, and delete rows in an Informix database; BenchmarkSQL is a JDBC benchmark closely modeled on the TPC-C OLTP standard. In the contest, each contestant takes an Informix server running 1,000 OLTP users, has one hour to tune it, and the winner is whoever achieves the highest transactions per minute. Because this is a random I/O test with both reads and writes, it exercises the drive's random I/O capability, and this is where the SSD shone: it delivered 2,216% of the transactions per minute of the traditional disk drive.
5. Data warehouse query benchmark. This is a new benchmark I developed to test data warehouse performance. Because the SSD's capacity is only 512 GB, I used a small database and ran 18 complex data warehouse queries. Most of the queries require a sequential scan of a fact table to produce their results, so this test stresses read access to the drive.
Key considerations
When migrating Informix dbspaces to SSD, you also need to consider the other places where disk I/O can become a bottleneck. In the data warehouse benchmark, I did not see the performance I expected when I first moved the dbspaces to SSD. Two other factors turned out to be limited by disk I/O: the temporary sort space and the output space for query reports. After migrating the temporary space and report output space to SSD as well, the performance gains were dramatic. In fact, migrating the Informix temp dbspaces to SSD may be the fastest way to take advantage of this new technology without reconfiguring the whole server.
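Moving just the temp dbspaces is a small change. A sketch of what that might look like follows; the link path, chunk location, dbspace name, and size are assumptions for illustration, not values from my setup:

```shell
# Point a symbolic link at a chunk file on the SSD (paths are assumed):
ln -s /ssd/informix/tempdbs01.chunk /ifmxdev/tempdbs01

# Create a temporary dbspace on that chunk with onspaces
# (-t marks it as temp; offset 0, size 4 GB expressed in KB):
onspaces -c -d tempdbs01 -t -p /ifmxdev/tempdbs01 -o 0 -s 4194304
```

Then set `DBSPACETEMP tempdbs01` in the ONCONFIG file so that sorts and temporary tables land on the SSD, with no change to any other dbspace.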
I have discussed SSD technology with other administrators who manage large image-processing systems (such as video production), which need near-real-time access to large image files. These administrators have successfully built RAID-10 arrays of six to eight SSDs. They report that the mirroring and striping of RAID-10 adds both safety and additional SSD performance. I would very much like to try this configuration with an Informix database.
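On Linux, such an array could be assembled with mdadm. This is only a sketch; the device names, drive count (six here), filesystem, and mount point are all assumptions:

```shell
# Build a RAID-10 array (striping across mirrored pairs) from six SSDs:
mdadm --create /dev/md0 --level=10 --raid-devices=6 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Put a filesystem on it and mount it where the Informix chunks will live:
mkfs.xfs /dev/md0
mount /dev/md0 /ssd
```

RAID-10 yields half the raw capacity (three drives' worth here), but it survives a drive failure and stripes reads and writes across the mirrored pairs, which is the combination of safety and speed those administrators reported.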
One concern about SSD is its reliability as a new technology. In my case, I have been using an SSD for a full year and have never had a problem; in the same period, three of my conventional hard disks failed. Any media will eventually fail, so it is best to always plan for backups, and every disk drive must be backed up regularly. If I were designing a new production Informix database server on SSD, I would also create an Informix High-Availability Remote Secondary Server (RSS) on regular disk drives as a backup system. That is best practice for any production database server.
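Setting up an RSS node is a standard Informix high-availability procedure. In outline (the server names below are placeholders), you register the RSS node on the primary, restore a level-0 backup on the secondary host, and then attach it to the primary:

```shell
# On the primary server: register the new RSS node by its server name.
onmode -d add RSS rss_server1

# On the RSS host: perform a physical restore of a level-0 backup
# of the primary (for example, with ontape)...
ontape -p

# ...then connect the restored instance to the primary as an RSS secondary.
onmode -d RSS primary_server1
```

Once connected, the RSS node on regular disks continuously applies the primary's logical logs, giving a warm standby for the SSD-based primary.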
Conclusion
In every test, SSD completed the Informix database tasks faster. SSD was far ahead in sequential scans, random I/O, and both reads and writes, and it dramatically improved the random disk I/O OLTP benchmark; the numbers in Figure 1 speak for themselves. Simply put, the easiest way to speed up an Informix database server is to migrate its dbspaces to SSD. Informix runs very well on SSD.