In this business scenario, data is written by a backfill (data backflow) process, and queries are almost always by user_id. The table holds roughly 0.5 billion rows, so I ran the following tests:
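For context, the workload being modeled is essentially a single-row lookup by user_id. A hypothetical example of such a query is shown below (the table and column names are illustrative only, not from the actual system); the sysbench run further down plays the same role with primary-key lookups on sbtest.id:

-- Illustrative only: a point lookup by user_id, analogous to the
-- primary-key lookups sysbench issues against sbtest.id.
SELECT order_id, amount, gmt_created
  FROM user_order
 WHERE user_id = 1234567890;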
Performance testing machine environment:
Uptime       | 364 days, 4:02, 2 users, load average: 0.43, 0.19, 0.06
Platform     | Linux
Release      | Red Hat Enterprise Linux Server release 5.4 (Tikanga)
Kernel       | 2.6.18-164.el5
Architecture | CPU = 64-bit, OS = 64-bit
Threading    | NPTL 2.5
Compiler     | GNU CC version 4.1.2 20080704 (Red Hat 4.1.2-44)
SELinux      | Disabled
Virtualized  | No virtualization detected
# Processor ##################################################
Processors   | physical = 2, cores = 8, virtual = 16, hyperthreading = yes
Speeds       | 16x2261.063
Models       | 16xIntel(R) Xeon(R) CPU E5520 @ 2.27GHz
Caches       | 16x8192 KB
# Memory #####################################################
Total        | 23.53G
Free         | 106.74M
Used         | physical = 23.43G, swap = 5.02M, virtual = 23.43G
Buffers      | 165.41M
Caches       | 4.78G
Dirty        | 572 kB
UsedRSS      | 18.3G
Swappiness   | vm.swappiness = 60
DirtyPolicy  | vm.dirty_ratio = 40, vm.dirty_background_ratio = 10
MySQL version 5.1.48
Test with 0.25 billion rows:
Data preparation:
./sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=250000000 --mysql-user=lingluo --mysql-password=lingluo --mysql-socket=/u01/mysql/run/mysql.sock prepare
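For reference, the prepare step of sysbench 0.4's oltp test creates a single sbtest table and fills it with --oltp-table-size rows. The schema is roughly the following sketch (based on sysbench 0.4.12 defaults; exact column definitions may differ slightly by build):

-- Approximate table created by "sysbench --test=oltp ... prepare"
CREATE TABLE sbtest (
  id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
  k   INT UNSIGNED NOT NULL DEFAULT '0',
  c   CHAR(120)    NOT NULL DEFAULT '',
  pad CHAR(60)     NOT NULL DEFAULT '',
  PRIMARY KEY (id),
  KEY k (k)
) ENGINE = InnoDB;  -- engine taken from --mysql-table-engine=innodb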
Run:
$ ./sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=250000000 --oltp-read-only --mysql-user=root --mysql-socket=/u01/mysql/run/mysql.sock run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

No DB drivers specified, using mysql
Running the test with following options:
Number of threads: 1

Doing OLTP test.
Running mixed OLTP test
Doing read-only test
Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Maximum number of requests for OLTP test is limited to 10000
Threads started!
Done.

OLTP test statistics:
    queries performed:
        read:                            140000
        write:                           0
        other:                           20000
        total:                           160000
    transactions:                        10000  (61.46 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 140000 (860.42 per sec.)
    other operations:                    20000  (122.92 per sec.)

Test execution summary:
    total time:                          162.7105s
    total number of events:              10000
    total time taken by event execution: 162.6381
    per-request statistics:
         min:                                  2.86ms
         avg:                                 16.26ms
         max:                                280.82ms
         approx. 95 percentile:               33.86ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   162.6381/0.00
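The counts above (140000 reads and 20000 "other" operations for 10000 transactions) follow from the default oltp read-only mix: each transaction is a BEGIN, 14 read queries, and a COMMIT. The query mix is roughly the following sketch (default sysbench 0.4 oltp settings: 10 point selects plus 4 range queries per transaction; the "?" placeholders are random ids):

BEGIN;
-- 10 point selects by primary key
SELECT c FROM sbtest WHERE id = ?;                                 -- executed 10 times
-- 4 range queries over a window of consecutive ids
SELECT c FROM sbtest WHERE id BETWEEN ? AND ?;
SELECT SUM(k) FROM sbtest WHERE id BETWEEN ? AND ?;
SELECT c FROM sbtest WHERE id BETWEEN ? AND ? ORDER BY c;
SELECT DISTINCT c FROM sbtest WHERE id BETWEEN ? AND ? ORDER BY c;
COMMIT;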
Data cleanup:
$ ./sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=250000000 --mysql-user=root --mysql-socket=/u01/mysql/run/mysql.sock cleanup
sysbench 0.4.12:  multi-threaded system evaluation benchmark

No DB drivers specified, using mysql
Dropping table 'sbtest'...
Done.
Test with 0.5 billion rows:
Data preparation:
./sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=500000000 --mysql-user=lingluo --mysql-password=lingluo --mysql-socket=/u01/mysql/run/mysql.sock prepare
Run:
$ ./sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=500000000 --oltp-read-only --mysql-user=root --mysql-socket=/u01/mysql/run/mysql.sock run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

No DB drivers specified, using mysql
Running the test with following options:
Number of threads: 1

Doing OLTP test.
Running mixed OLTP test
Doing read-only test
Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Maximum number of requests for OLTP test is limited to 10000
Threads started!
Done.

OLTP test statistics:
    queries performed:
        read:                            140000
        write:                           0
        other:                           20000
        total:                           160000
    transactions:                        10000  (14.27 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 140000 (199.83 per sec.)
    other operations:                    20000  (28.55 per sec.)

Test execution summary:
    total time:                          700.6116s
    total number of events:              10000
    total time taken by event execution: 700.5176
    per-request statistics:
         min:                                 12.31ms
         avg:                                 70.05ms
         max:                                445.24ms
         approx. 95 percentile:              110.52ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   700.5176/0.00
Data cleanup:
Conclusion:
With 0.5 billion rows, the 95th-percentile query latency is already around 110 ms, which is a bit slow; with 0.25 billion rows the 95th percentile is about 34 ms, which is acceptable. For this scenario, the single-table size should therefore be kept to roughly 0.25 billion rows.