KT:
http://fallabs.com/kyototycoon/
Note:
1) The client and the server are both 8-core machines with gigabit NICs.
2) The table columns are the value sizes.
3) The table cells are operations per second.
4) KT uses a KC (Kyoto Cabinet) hash database.
|                  | 100 B | 1 KB  | 10 KB | 100 KB | 1 MB  |
|------------------|-------|-------|-------|--------|-------|
| KT write         | 35599 | 35075 | 34518 | 33189  | 30562 |
| KT read          | 37939 | 40209 | 38095 | 38197  | 40518 |
| KT delete        | 39968 | 39541 | 39200 | 37091  | 37664 |
| Memcached write  | 28735 | 29394 | 28977 | 27382  | 27824 |
| Memcached read   | 30515 | 30931 | 30057 | 28968  | 30721 |
| Memcached delete | 32362 | 32278 | 31715 | 30609  | 32175 |
Note:
1) KT has GC enabled and the memcache protocol extension loaded; a single client machine is used.
2) The client test uses a parallel library with multi-threaded operations.
3) Values are all plain strings, so no serialization or deserialization is involved.
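The multi-threaded client test described above can be sketched as follows. This is a minimal illustration, not the original test harness: it assumes KT is reached through its memcached-compatible text protocol, and it uses an in-memory stub (`FakeConn`) in place of a real socket so the sketch is self-contained. To benchmark a live server, replace `FakeConn` with a socket connected to the ktserver memcache plugin.

```python
import threading
import time

def encode_set(key: str, value: bytes) -> bytes:
    # memcached text protocol: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
    return b"set %s 0 0 %d\r\n%s\r\n" % (key.encode(), len(value), value)

class FakeConn:
    """Stand-in for a socket; counts commands instead of sending them."""
    def __init__(self):
        self.count = 0
        self.lock = threading.Lock()

    def send(self, payload: bytes) -> None:
        with self.lock:
            self.count += 1

def worker(conn, n_ops, value):
    # Each thread issues n_ops "set" commands, mirroring the write test.
    for i in range(n_ops):
        conn.send(encode_set("k%d" % i, value))

def run_benchmark(n_threads=8, ops_per_thread=1000, value_size=100):
    conn = FakeConn()
    value = b"x" * value_size  # e.g. the 100-byte case from the first table
    threads = [threading.Thread(target=worker, args=(conn, ops_per_thread, value))
               for _ in range(n_threads)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    return conn.count, elapsed

total, elapsed = run_benchmark()
print(total)  # 8000 operations issued across 8 threads
```

Dividing `total` by `elapsed` gives the operations-per-second figure reported in the tables.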
Conclusion:
In this test, KT outperforms memcached. In addition, KT releases disk space immediately after data is deleted, unlike MongoDB and MSSQL, whose data files keep occupying disk space until a repair/compaction is performed.
But does performance change when the data volume grows very large? (The following test uses three machines, so each machine stores 1/3 of the data.)
Let's look at KT's read and write throughput for each additional 1 million records. The table cells are again operations per second:
| Records (millions) | 0     | 1     | 2     | 3     | 4     | 5     | 6     | 7     | 8     | 9     |
|--------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| KT write           | 21010 | 11660 | 6888  | 4409  | 3311  | 2637  | 2387  | 1855  | 1803  | 1590  |
| KT read            | 26063 | 25241 | 26139 | 25931 | 25810 | 26062 | 26106 | 26055 | 25465 | 3208  |
Conclusion:
Read throughput barely drops even as the total data volume reaches tens of millions of records (with three machines, each machine holds at most about 5 million). Compared with an empty database, however, write throughput falls to only about 10%, which is worse than MongoDB; read throughput, on the other hand, remains better than MongoDB's.
Read throughput drops sharply at the last data point. Since the earliest stored data had expired by then, it is worth investigating whether GC of expired records affects read performance.
Finally, one more point: by default, the memcache plugin does not support the `gets` command. For batch retrieval, just use the `get` command with multiple keys.
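Batch retrieval with multi-key `get` can be sketched as below. This is an illustration under the assumption that KT's memcache plugin speaks the standard memcached text protocol; the key names are made up, and the response parser is naive (it assumes values contain no CRLF).

```python
def encode_multi_get(keys):
    # memcached text protocol batch fetch: get <key1> <key2> ... <keyN>\r\n
    return b"get " + b" ".join(k.encode() for k in keys) + b"\r\n"

def parse_get_response(data: bytes):
    # Response lines: VALUE <key> <flags> <bytes>\r\n<data>\r\n ... END\r\n
    # Naive parser: assumes stored values contain no CRLF bytes.
    result = {}
    lines = data.split(b"\r\n")
    i = 0
    while i < len(lines):
        line = lines[i]
        if line == b"END":
            break
        if line.startswith(b"VALUE"):
            _, key, _flags, _nbytes = line.split(b" ")
            result[key.decode()] = lines[i + 1]
            i += 2
        else:
            i += 1
    return result

cmd = encode_multi_get(["user:1", "user:2", "user:3"])
print(cmd)  # b'get user:1 user:2 user:3\r\n'
```

Sending `cmd` over a socket to the plugin and feeding the reply to `parse_get_response` yields a dict of the keys that were found.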