First, the preparation
We are going to run a ten-million-record MongoDB insert test. The operating system is CentOS 5.5 (64-bit), roughly simulating the actual production environment, on a single-machine deployment (to test the actual effect of multiple CPUs).
Benchmark environment:
Server configuration:
CPU: 2 × Intel Xeon E5506 (8 cores total)
Memory: 8 GB DDR3
Application:
Programming language: C++
Compiler: GCC 4.4.5
Boost version: 1.47
MongoDB version: 2.0.2
Test Purpose:
Measure the latency of large-volume data insertion and find an appropriate solution.
Target applications:
A non-partitioned MMOSLG, i.e. a single game world with no server sharding (target of 1,000,000 to 3,000,000 online users)
Second, test results
We tested raw insert behavior directly: 10,000,000 records were inserted in 7.6 minutes total, an average of about 0.0469 milliseconds per insert (7.6 minutes ≈ 456 seconds over 10,000,000 inserts).
The final on-disk footprint was 1.5 GB of storage in total: 1.2 GB of data and 0.3 GB of indexes.
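For reference, a minimal sketch of what such an insert loop might look like with the legacy MongoDB C++ driver of that era. The namespace game.players, the document fields, and the connection target are illustrative assumptions; the original test code is not shown in the post.

```cpp
#include <ctime>
#include <iostream>
#include "mongo/client/dbclient.h"  // legacy C++ driver header

int main() {
    mongo::DBClientConnection conn;
    conn.connect("localhost");  // assumes a local mongod on the default port

    const int total = 10000000;
    std::time_t start = std::time(0);
    for (int i = 0; i < total; ++i) {
        // hypothetical document layout; the original schema is not given
        conn.insert("game.players", BSON("uid" << i << "hp" << 100 << "gold" << 0));
    }
    double elapsed = std::difftime(std::time(0), start);
    std::cout << "total: " << elapsed / 60.0 << " min, avg: "
              << elapsed * 1000.0 / total << " ms/insert" << std::endl;
    return 0;
}
```

Note that in that driver and server generation, insert() did not wait for acknowledgment unless getLastError was called afterward, which likely contributes to per-insert averages in the tens of microseconds.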
Third, test conclusions
The test showed that for the volume of data the whole game world inserts each day, inserting up to 1,000,000 records takes very little time; beyond 3,000,000 records the inserts become noticeably more time-consuming.
In one test phase we also read data while writing. Reading and writing simultaneously did not cause a sharp drop in performance, only a slight slowdown.
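A hedged sketch of the kind of concurrent read loop that phase describes, again using the legacy driver; the query field and document layout are assumptions carried over from the insert sketch above.

```cpp
#include <memory>
#include "mongo/client/dbclient.h"

// Reader loop run alongside the insert loop. A separate connection is used
// because DBClientConnection in the legacy driver is not thread-safe.
void readLoop() {
    mongo::DBClientConnection conn;
    conn.connect("localhost");
    for (int uid = 0; uid < 100000; ++uid) {
        std::auto_ptr<mongo::DBClientCursor> cur =
            conn.query("game.players", QUERY("uid" << uid));
        while (cur->more()) {
            mongo::BSONObj doc = cur->next();
            // consume the document, e.g. doc.getIntField("hp")
        }
    }
}
```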
Fourth, related solutions
Later I used a thread pool to perform all the inserts and reads asynchronously, and found it significantly more efficient, with no degradation caused by MongoDB itself.
However, the thread count should not be too large: the number of CPU cores divided by 2 works best. Going beyond that number degrades the performance of MongoDB itself. A sketch of this setup follows below.
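As an illustration of the cores/2 rule, here is a minimal Boost 1.47-style worker-pool sketch that partitions the inserts across threads. The collection name, schema, and even partitioning of work are assumptions; the author's actual pool implementation is not shown in the post.

```cpp
#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include "mongo/client/dbclient.h"

// Each worker owns its own connection, since legacy-driver connections
// are not safe to share across threads.
void insertRange(int begin, int end) {
    mongo::DBClientConnection conn;
    conn.connect("localhost");
    for (int i = begin; i < end; ++i)
        conn.insert("game.players", BSON("uid" << i << "hp" << 100));
}

int main() {
    const int total = 10000000;
    // The article's rule of thumb: threads = CPU cores / 2
    // (4 threads on the 8-core test box).
    unsigned threads = boost::thread::hardware_concurrency() / 2;
    if (threads == 0) threads = 1;

    boost::thread_group pool;
    const int chunk = total / threads;  // remainder ignored for brevity
    for (unsigned t = 0; t < threads; ++t)
        pool.create_thread(boost::bind(insertRange, t * chunk, (t + 1) * chunk));
    pool.join_all();
    return 0;
}
```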
"Original test" MongoDB Tens Insert data test (MMO online gaming application)