Recently we had a business whose writes far outnumber its reads, and whose reads concentrate on recently written data, which raised the question of whether LevelDB is a good fit for storing it.
SSDB is a Redis-protocol-compatible wrapper around LevelDB that also implements master-slave synchronization. Its source code is short and easy to read, and it is mainly an encapsulation of Redis commands on top of LevelDB. Get/set are straightforward. Because LevelDB is ordered, scan-style traversal commands are served directly by ordered iteration. Hset is likewise stored flat, in a key+field/seq -> value layout, and TTLs are kept in a separate hset that records each key's expiration time, with a dedicated thread polling it periodically and deleting expired keys.
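As an illustration only (this is not SSDB's actual on-disk encoding; the prefixes and separators below are made up to show the idea), an hset field and its expiration record can be flattened into plain LevelDB keys like this:

#include <string>

// Hypothetical key layout, only to illustrate the key+field -> value scheme
// and the separately stored TTL described above; SSDB's real format differs.
static std::string HashFieldKey(const std::string& name, const std::string& field) {
  // All fields of one hash share the prefix "h|<name>|", so LevelDB's ordered
  // layout lets a hash scan become a simple prefix iteration.
  return "h|" + name + "|" + field;
}

static std::string ExpireKey(const std::string& key) {
  // Expiration times live under their own prefix; a background thread can
  // iterate this range periodically and delete keys whose time has passed.
  return "expire|" + key;  // value: absolute expiration timestamp
}

With a layout like this, setting a hash field is a single Put of HashFieldKey(name, field), and giving a key a TTL is a Put of ExpireKey(key) that the polling thread later acts on.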
Test Scenario

Machine: two R720xd, E5-2620 2.1GHz (6 cores / 12 threads) x2, 128GB memory, 300GB HDD
Data: key: 10-digit sequential number; value: 50 bytes
Concurrency: 100 concurrent client connections by default
Client: Erlang eredis module, get/set
LevelDB configuration: cache_size: 500, block_size: 1, write_buffer_size: 64, compression: no, max_open_files: 1000

Data volume: 20 million keys
- File size: 1.3GB
- Write speed: 70k/s, CPU 250%, mem 30M
- Random read: 55k/s, CPU 100%, mem 1GB
- Concurrent read/write at 1:10: read 5k/s, write 50k/s, CPU 250%
Summary: overall performance is close to the numbers published on the project's GitHub page. LevelDB storage is very compact, with virtually no extra disk overhead, and memory consumption is far lower than Redis.

Data volume: 150 million keys
- File size: 9.6GB
- Write speed: 70k/s, CPU 250%, mem 70M
- Random read: 16k/s, CPU 100%, mem 70M
- Concurrent read/write at 1:10: read 4k/s, write 50k/s, CPU 250%
Summary: reads are too random for the LRU cache to take effect, so requests fall through to the file system and read performance drops; write throughput stays constant.

Data volume: 1 billion keys
- File size: 66GB
- Write speed: 70k/s, CPU 250%, mem 80M
- Random read: 16k/s, CPU 180%, mem 80M
- Concurrent read/write at 1:10: read 4k/s, write 50k/s, CPU 280%
Summary: same behavior as the 150 million level.

Page cache

By default LevelDB caches metadata and uses an 8MB block cache; this test used a 500MB block_cache. From the results, because reads are random, the cache only takes effect at the 20 million key level, where it brings a large performance gain. At the 150 million and 1 billion levels, performance depends entirely on the kernel's page cache: the machine has 128GB of memory and LevelDB does not use direct IO, so the files are fully cached and the reads above generate no actual disk IO.

The page cache was then cleared continuously with a script:
while true; do echo 1 > /proc/sys/vm/drop_caches; echo clean cache OK; sleep 1; done
- Random read: about 160/s, CPU 5%, mem 120M, iostat 95% util
- 1 concurrent random reader plus 100 concurrent writers: 90/s read, 1500/s write; random reads drag down write speed
Summary: random IO on a mechanical disk has no real cure; you can only rely on caching. Compared with the page cache, block_cache is more effective, so it is block_cache that should be enlarged. Using SSDs is cheaper than adding memory.

Read system calls (strace):
open("./var/data/006090.ldb", O_RDONLY) = 27
stat("./var/data/006090.ldb", {st_mode=S_IFREG|0644, st_size=34349641, ...}) = 0
mmap(NULL, 34349641, PROT_READ, MAP_SHARED, 27, 0) = 0x7f0334f76000
madvise(0x7f040457e000, 737280, MADV_DONTNEED) = 0
munmap(0x7f03dc2ab000, 34349599) = 0
close(27) = 0

Multithreading

SSDB is multithreaded, but the results above show clearly poor multi-core utilization. From the source code:
- 1 main thread, responsible for network IO
- 10 read threads, responsible for complex read operations such as scan
- 1 write thread, responsible for write operations and disk IO
- 1 LevelDB compaction thread
That is, writes are handled by the main thread (network) plus one write thread (LevelDB operations), while plain get reads are handled by the main thread alone. These thread counts are not configurable in SSDB; after simple source changes and a recompile:
- 10 threads handling reads: 25k/s, CPU 450%, with 60% spent in sys; highly concurrent file reads hit a kernel bottleneck
- Reduced to 3 read threads: 32k/s, CPU 280%, which looks more appropriate for this concurrency
- With the cache effective (random reads within a 10 million key range): 70k/s, CPU 200%
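For reference, the LevelDB configuration listed under Test Scenario above maps onto leveldb::Options roughly as follows (a sketch, not SSDB code; it assumes cache_size is in MB and block_size in KB, and the data directory path is only an example):

#include <leveldb/cache.h>
#include <leveldb/db.h>

int main() {
  leveldb::Options options;
  options.create_if_missing = true;
  options.block_cache = leveldb::NewLRUCache(500 * 1048576);  // cache_size: 500 (MB)
  options.block_size = 1 * 1024;                              // block_size: 1 (KB)
  options.write_buffer_size = 64 * 1048576;                   // write_buffer_size: 64 (MB)
  options.compression = leveldb::kNoCompression;              // compression: no
  options.max_open_files = 1000;                              // max_open_files: 1000

  leveldb::DB* db = nullptr;
  leveldb::Status s = leveldb::DB::Open(options, "./var/data", &db);
  if (!s.ok()) return 1;
  // ... run the workload ...
  delete db;
  delete options.block_cache;
  return 0;
}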
Best Practices

1. Write performance. Throughout the tests the write speed held at 70k/s, which satisfies most needs. LevelDB itself can reach 400k/s writes; here it is limited by the SSDB threading model, which cannot exploit more cores. If required, write performance can be improved further with client pipelining, NIC interrupt balancing, more network bandwidth, and more LevelDB write threads (a LevelDB-level batching sketch follows this list).
2. Read performance. Most businesses have data hotspots, so cache_size and block_size can be tuned to raise the cache hit ratio; block_size is the cached block size (1KB~4MB) and should be chosen according to the business's typical value size. If the hotspot ratio is low, the only real option is SSD drives. Be careful with the page cache: clearing it carelessly causes a sharp performance drop. Alternatively, configure a large enough cache_size and accept that there is a warm-up period.
3. Compaction. This workload writes keys in order, because the business keys are sequential, and after a period of time data is removed from the front in the same order, so the impact of compaction is very small. If a business writes, modifies, or deletes a large number of random keys, the compaction load will grow and deserves attention.
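The batching sketch referenced in point 1: this is plain LevelDB usage rather than SSDB code, and it shows what a pipelined burst of set commands could collapse into at the storage layer, namely many keys committed in one write call.

#include <cstdio>
#include <string>
#include <leveldb/db.h>
#include <leveldb/write_batch.h>

// Group many puts into one commit; key/value shapes mirror the test data
// (10-digit sequential keys, 50-byte values).
leveldb::Status BatchedWrite(leveldb::DB* db, int start, int count) {
  leveldb::WriteBatch batch;
  for (int i = start; i < start + count; ++i) {
    char key[16];
    std::snprintf(key, sizeof(key), "%010d", i);
    batch.Put(key, std::string(50, 'x'));
  }
  leveldb::WriteOptions wopt;   // sync stays false: no fsync per batch
  return db->Write(wopt, &batch);
}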
LevelDB Usage Scenarios