Initial key-value comparison: MongoDB wins
Faster, always faster: that has been the goal for database systems all along. For the MySQL dragster, the speed of the disk is the biggest obstacle. Does that really make sense? Treat it as an obstacle, then: what is the solution? If an obstacle limits your dragster, you can either get past it faster or improve the machine. For example:
avoid using disks as much as possible, keeping data in memory instead, or use faster disks (such as SSDs)
This is not a perfect analogy, because the limits imposed by the disk are huge and improve surprisingly slowly. You might object: don't we have SSDs now? Yes, they do speed storage up, but don't forget that CPUs and RAM are getting faster at a much higher rate than disks are! But suppose our memory were large enough to replace the disk outright: would everything then run at the speed of light? Obviously not: as soon as the disk stops being your biggest limit, the next bottleneck shows its ugly face!
CPU cores kept getting faster and faster, until one day they suddenly stopped speeding up the way they used to. Multi-core technology was born to work around this, but with it came new limits on CPU performance that turned into the most vexing problems of all, such as thread mutexes, like the query cache mutex in MySQL!
Finally it was time to run the benchmark I developed back in May (see the English original). One reason it took this long is that loading the data into MySQL took a great deal of time. Along the way I created an open-source project to export data from JSON and import it into MySQL (a minimal sketch of such a loader follows the list below). Once that was done, I had the data organized the way it is in the real world. I also had to drop some columns so that MySQL Cluster could handle the data, because MySQL Cluster can only store fixed-length data on disk. That caused a lot of extra work:
there is far more raw data to write to disk, because fixed-length UTF-8 storage reserves three bytes per character, which means more than three times the data has to be written
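The export tool itself is not shown in this excerpt; a minimal sketch of a JSON-to-MySQL loader, assuming hypothetical table and field names (keyvalue, id, value1, value2), might look like this:

```python
# Sketch of a JSON-to-MySQL loader in the spirit of the export tool
# mentioned above; all names are assumptions for illustration.
import json

import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(user="bench", password="bench",
                               database="test")
cur = conn.cursor()

INSERT = ("INSERT INTO keyvalue (id, value1, value2) "
          "VALUES (%s, %s, %s)")

with open("data.json", encoding="utf-8") as f:
    batch = []
    for line in f:                      # one JSON document per line
        doc = json.loads(line)
        batch.append((doc["id"], doc["value1"], doc["value2"]))
        if len(batch) >= 1000:          # insert in batches for speed
            cur.executemany(INSERT, batch)
            conn.commit()
            batch.clear()
    if batch:                           # flush the final partial batch
        cur.executemany(INSERT, batch)
        conn.commit()
```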
This works fine for MySQL Cluster, but there are special cases depending on the value type: if a value is text or something similar, we have to fall back to VARCHAR or a comparable type, which really limits MySQL Cluster. To make this manageable, the benchmark uses simple tables:
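The original DDL is not reproduced in this excerpt; a minimal sketch of such a table, with hypothetical column names and sizes, might be:

```sql
-- Hypothetical key-value table; column names and sizes are assumptions.
CREATE TABLE keyvalue (
    id     BIGINT NOT NULL PRIMARY KEY,
    value1 INT,
    value2 VARCHAR(256)   -- stored fixed-length: 768 bytes in UTF-8
) ENGINE=NDBCLUSTER;
```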
Into tables of this shape, about 105 million rows were loaded. That should be a piece of cake for MySQL Cluster, right? Except that MySQL Cluster only supports 512 MB of hash data per partition (a really silly limit), so I had no choice but to split the data into 5 parts; with that, the loading work was done.
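One way to do such a split (a sketch, not necessarily the author's exact approach) is explicit KEY partitioning:

```sql
-- Hypothetical: spread the rows over 5 partitions so that no single
-- partition exceeds MySQL Cluster's 512 MB hash-data limit.
ALTER TABLE keyvalue
    PARTITION BY KEY (id)
    PARTITIONS 5;
```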
I have to say that with no data on disk, MySQL Cluster ran far more stably: the occasional lost data and other oddities that showed up when loading the VARCHAR tables did not occur. So it is not only on-disk data that holds you back; the VARCHAR handling apparently needs further refinement as well.
A few words on the setup: my server (an 8-core AMD CPU with 16 GB of RAM) was ready, and the contenders were MySQL with the InnoDB storage engine, MySQL Cluster, and MongoDB. The test reads 1 million rows, spread over 100 threads, repeated 10 times under identical conditions. To be fair, I made sure the data that was supposed to be served from memory really was in memory by doing a couple of warm-up runs first. In the NDB case the plain MySQL API is used (the NDBAPI is tested at the very end). The results are as follows:
MongoDB: 110,000 rows read/second
MySQL with InnoDB: 30,000 rows read/second
MySQL with NDB: 32,000 rows read/second
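The benchmark harness itself is not included in this excerpt; a minimal sketch of the multithreaded read loop, with hypothetical names (keyvalue, bench), might look like this:

```python
# Sketch of one pass of the read benchmark: 100 threads each doing
# 10,000 primary-key lookups (the article repeats this 10 times).
import random
import threading
import time

import mysql.connector  # pip install mysql-connector-python

THREADS = 100
READS_PER_THREAD = 10_000     # 100 x 10,000 = 1 million rows per pass
MAX_ID = 105_000_000          # rows loaded into the test table

def worker() -> None:
    conn = mysql.connector.connect(user="bench", password="bench",
                                   database="test")
    cur = conn.cursor()
    for _ in range(READS_PER_THREAD):
        cur.execute("SELECT value1, value2 FROM keyvalue WHERE id = %s",
                    (random.randint(1, MAX_ID),))
        cur.fetchall()
    cur.close()
    conn.close()

start = time.time()
threads = [threading.Thread(target=worker) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
print(f"{THREADS * READS_PER_THREAD / elapsed:.0f} rows read/second")
```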
For the NDB case, the following settings were used:
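The actual option values are not shown in this excerpt; a plausible my.cnf fragment for this kind of point-read workload (an assumption, not the author's verified configuration) would be:

```ini
# Hypothetical settings for the mysqld SQL node talking to the cluster.
[mysqld]
ndbcluster
ndb-use-exact-count = 0          # do not scan for exact row counts
ndb-force-send = 1               # send requests to data nodes immediately
ndb-cluster-connection-pool = 4  # more parallel cluster connections
```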
It is clear that this pattern makes a huge difference. With plain data the results are similar, but once JSON is loaded (JSON is MongoDB's native document format), the expected thing happens: MongoDB is 2.5 times faster than NDB and InnoDB, while NDB and InnoDB come out about equal.
Summary:
In times of ever cheaper RAM, please remove that damn 512 MB limit!
Correction and addition to the key-value comparison: MongoDB still wins
The test environment is identical to the one above: a single table, with MySQL using the InnoDB and NDB storage engines in turn. The test reads 1 million rows out of a table of 105 million in total, again spread over 100 threads and repeated 10 times, so 10 million rows are read in all.
Some checking revealed that the InnoDB data had not been fully cached; the corrected results are as follows:
MongoDB: 110,000 rows read/second
InnoDB: 39,000 rows read/second
NDB: 32,000 rows read/second
In this round MongoDB still holds an absolute advantage, and InnoDB is clearly faster than NDB.
Key-value comparison in a specific environment: dawn for MySQL
MySQL is far more mature than MongoDB, and once MongoDB has to go to the disk you will find its performance falls off sharply. But if we have enough memory (we ran this on Amazon, where plenty of memory was available) so that no disk I/O is generated at all, does that mean everything will perform well?
The data store again holds 105 million rows. Originally I intended to use all of the MongoDB data, but I had to exclude the VARCHAR-style columns: NDB stores its on-disk data at fixed length (so a UTF-8 VARCHAR(256) column always occupies 768 bytes), and putting that on disk would burn a lot of disk I/O. The table schema is as follows:
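The exact DDL is again not reproduced here; a sketch with hypothetical column names, mirroring the earlier table but with the VARCHAR column dropped so NDB keeps everything in memory:

```sql
-- Hypothetical trimmed key-value table; numeric columns only, so NDB
-- stores all data in memory rather than fixed-length on disk.
CREATE TABLE keyvalue (
    id     BIGINT NOT NULL PRIMARY KEY,
    value1 INT,
    value2 DOUBLE
) ENGINE=NDBCLUSTER;
```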
To complete the picture, here is the test platform:
CPU: AMD FX-8120, 8 cores
Memory: 16 GB
Motherboard: M5A88-V (with an LNE100TX network card in place of the motherboard's on-board Realtek chipset)
Disks: not relevant, since the test generates no disk I/O
Ubuntu 10.10
MySQL 5.6.5 64-bit
MySQL Cluster 7.2.6 64-bit
MongoDB 2.0.5 64-bit
As before, 1 million rows are read, spread over 100 threads and repeated 10 times, while making sure the results are not affected by disk I/O:
MongoDB: 110,000 rows read/second
MySQL Cluster: 32,000 rows read/second
MySQL with InnoDB: 39,000 rows read/second
MySQL with MEMORY/HEAP: 43,000 rows read/second
MySQL with MyISAM: 28,000 rows read/second
MySQL's performance in the last two rows is frankly disappointing! It turns out that MyISAM caches only its keys, not the data itself; to MyISAM's credit, no disk I/O was ever observed, since the data came from the operating system's file cache. With that sorted out, the result looks like this:
MySQL with MyISAM: 37,000 rows read/second
MySQL wins by a narrow margin
We then tested some other variations, such as NDB without CLIENT_COMPRESS. Against MongoDB's 110,000, though, MySQL still shows no real improvement. The best MySQL results from these ongoing attempts so far:
MySQL with MEMORY/HEAP: 43,000 rows read/second
MySQL with NDB (no CLIENT_COMPRESS): 46,000 rows read/second
Not every combination was tested, but from the two results above it is not hard to infer that the MEMORY storage engine without CLIENT_COMPRESS would certainly be faster than 43,000 rows per second.
It is easy to predict that in this setup MySQL puts a heavy load on the CPU: with everything in memory and no disk I/O, the only thing that can hold MySQL back is the CPU. So we bypassed the standard MySQL server and accessed MySQL Cluster directly through the NDBAPI. That yielded a much better 90,000 rows per second, yet it still lags behind MongoDB.
Putting the tests above together, we also find:
MySQL with NDB (no CLIENT_COMPRESS): 46,000 rows read/second
MySQL with NDB (CLIENT_COMPRESS): 32,000 rows read/second
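The compression flag can be toggled from any client; a sketch using mysql-connector-python's compress option (an illustration only, not the author's benchmark code):

```python
import mysql.connector

# Protocol compression costs CPU on both client and server; for small
# point reads it can easily cost more time than it saves in bandwidth.
conn_plain = mysql.connector.connect(
    user="bench", password="bench", database="test", compress=False)
conn_zipped = mysql.connector.connect(
    user="bench", password="bench", database="test", compress=True)
```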
So can we call CLIENT_COMPRESS a pest? Is it fair to speculate that CLIENT_COMPRESS costs 25%-30% of the speed?! To see how much overhead the client/server protocol itself adds, the easiest way is to use libmysqld, the MySQL Embedded Library. That meant modifying the benchmark program, and once again making sure the data was in memory before starting the test. When the test finally ran, the result was just as we speculated: 115,000! MySQL has finally won!
Summary: there are no winners, only continuous improvement
Later tests even pushed MySQL to a rapid 172,000 rows per second, but calling that a victory over MongoDB would be very far-fetched. Indeed, what we see here is not winners and losers, but the huge room for improvement that both MongoDB and MySQL still have.