[Benchmark] Performance comparison of redis, mongodb, and memcache. First, the situations in which I have used them: memcache as a server-side cache of key-value pairs, storing commonly used data that is not very large but needs to be returned quickly.
Then, in another place, I needed redis, so I studied it. A check showed that the php extension was already installed; since the redis server runs on a remote machine, I did not install the server locally. In fact, the usage is basically the same as memcache, differing only in a few parameters. Of course, their caching behavior is different. What is the difference? Here is some information, combined with my own summary.
1. Redis and memcache both store data in memory; both are in-memory databases. However, memcache can also be used to cache other things, such as images and videos.
Comparing redis, memcache, and MongoDB along the following dimensions:
1. performance
All three offer high performance, so performance should not be a bottleneck for us.
In general, the TPS of redis and memcache is similar, and both are higher than that of mongodb.
2. ease of operation
Memcache supports only a single, simple key-value data structure.
Redis has richer data types; in terms of data operations, redis does more work on the server side and therefore needs fewer network I/O round trips.
MongoDB supports rich data expression and indexes, is the closest of the three to a relational database, and supports a very rich query language.
3. size of memory space and data volume
Redis added its VM feature in version 2.0 to break through the limits of physical memory, and an expiration time can be set per key (similar to memcache).
Memcache's maximum available memory can be configured; it evicts data using the LRU algorithm.
MongoDB is suitable for storing large amounts of data; it relies on the operating system's virtual memory for memory management, and its memory consumption is also quite high, so the service should not be deployed together with other services.
4. Availability (single point of failure)
Regarding single points of failure (SPOF):
Redis relies on the client for distributed reads and writes. In master-slave replication, each time the slave reconnects to the master it pulls the entire snapshot; there is no incremental replication, which causes performance and efficiency problems.
The single-point-of-failure problem is therefore complicated: automatic sharding is not supported, and a consistent hashing scheme has to be implemented by the client program.
An alternative is to bypass redis's own replication mechanism and do active replication in the application (storing multiple copies), or switch to incremental replication (implemented yourself), trading off consistency against performance.
Memcache itself has no data redundancy mechanism, nor does it need one; for fault prevention, it relies on mature hashing or ring algorithms to mitigate the jitter caused by a single point of failure.
MongoDB supports master-slave, replica sets (which use the paxos election algorithm internally and recover from failures automatically), and auto-sharding, shielding the client from the failover and sharding mechanisms.
5. Reliability (persistence)
Regarding data persistence and data recovery:
Redis supports both snapshots and AOF: it relies on snapshots for persistence, while AOF enhances reliability at some cost to performance.
Memcache does not support persistence; it is usually used purely as a cache to improve performance.
MongoDB has supported reliable persistence via a binlog (journaling) mechanism since version 1.8.
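For reference, the snapshot and AOF behavior mentioned above is controlled through directives in redis.conf; a minimal sketch of the relevant settings (the values shown are illustrative defaults, not something from my tests):

```
# RDB snapshots: dump to disk after 900s if >=1 key changed,
# after 300s if >=10 changed, after 60s if >=10000 changed
save 900 1
save 300 10
save 60 10000

# AOF: additionally log every write command for better durability,
# at some cost to performance; fsync the log once per second
appendonly yes
appendfsync everysec
```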
6. data consistency (transaction support)
Memcache ensures consistency in concurrent scenarios with CAS (compare-and-swap).
Redis's transaction support is weak: a transaction only guarantees that its operations are executed one after another without interleaving.
MongoDB does not support transactions.
7. Data Analysis
MongoDB has built-in data analysis functionality (mapreduce); the other two do not support it.
8. application scenarios
Redis: better suited to operations and computation on smaller data volumes where performance matters.
Memcache: used to reduce database load and improve performance in dynamic systems; good for caching (suitable for read-heavy workloads with large data volumes).
MongoDB: mainly solves the access efficiency problem of massive data
I have been studying key-value storage recently, so let's take a look. The installation and use of memcache and redis will not be described in detail here; I will just briefly cover the differences between the two solutions. Some of my comments and test results may not be conclusive, so please correct me if you see a problem. In the learning process over the past two days I have kept correcting flaws as I found them, denying the previous day's ideas every day. Okay, enough preamble.
After testing the storage of 500,000 items, we found the following single-command execution rates:
memcache: about 30,000 requests per second
redis: about 10,000 requests per second
In addition, one major advantage of memcache is that a single function call sets both the value and the expiration time. With redis (at least for the pattern tested below), two function calls are needed, one for the key-value pair and one for the expiration time, so throughput drops to half, that is, about 5,000 times per second, which is too slow for many needs.
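Worth noting: the two-call overhead applies to the set-based sadd + expire pattern used in my tests. For plain string values, phpredis exposes Redis's SETEX command, which writes the value and the TTL in a single round trip. A minimal sketch, assuming a local redis server on the default port:

```php
<?php
// Sketch: set a string value with a 3-second TTL in ONE command (SETEX),
// instead of separate set() + expire() calls. Assumes a reachable server.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->setex('key1', 3, 'value1'); // value and expiration in one request
echo $redis->get('key1'), "\n";     // returns the value while the key is live
$redis->close();
?>
```

This only helps for string keys; for sets (as in the sadd benchmarks below) a separate expire call is still required.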
The memcache test code is as follows:
<?php
// Connect to the local memcache server
$mem = new Memcache;
$mem->connect("127.0.0.1", 11211);
$time_start = microtime_float();
// Save data: one call sets both the value and a 3-second expiration
for ($i = 0; $i < 100000; $i++) {
    $mem->set("key$i", $i, 0, 3);
}
$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "$run_time seconds\n";

function microtime_float()
{
    list($usec, $sec) = explode(' ', microtime());
    return ((float)$usec + (float)$sec);
}
?>
Redis test code (redis1.php); this code takes about 10 seconds to run:
<?php
// Connect to the local redis server (default port assumed)
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$time_start = microtime_float();
// Save data
for ($i = 0; $i < 100000; $i++) {
    $redis->sadd("key$i", $i);
}
$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "$run_time seconds\n";
// Close the connection
$redis->close();

function microtime_float()
{
    list($usec, $sec) = explode(' ', microtime());
    return ((float)$usec + (float)$sec);
}
?>
If the expiration time is set at the same time as the key value, execution takes about 20 seconds. The test code (redis2.php) is as follows:
<?php
// Connect to the local redis server (default port assumed)
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$time_start = microtime_float();
// Save data: one call for the value, a second call for the expiration
for ($i = 0; $i < 100000; $i++) {
    $redis->sadd("key$i", $i);
    $redis->expire("key$i", 3);
}
$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "$run_time seconds\n";
// Close the connection
$redis->close();

function microtime_float()
{
    list($usec, $sec) = explode(' ', microtime());
    return ((float)$usec + (float)$sec);
}
?>
Later, I found online that redis has a handy feature called transactions: through multi, a block of commands is queued and then executed sequentially and atomically, implementing a complete functional module. Unfortunately, tests show that the number of requests is not reduced when commands are executed in multi mode. On the contrary, the multi and exec commands must themselves be sent as requests, so the running time becomes that of four commands per key. The test code (redis3.php) is as follows:
<?php
// Connect to the local redis server (default port assumed)
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$time_start = microtime_float();
// Save data: wrap each pair of commands in a multi/exec transaction
for ($i = 0; $i < 100000; $i++) {
    $redis->multi();
    $redis->sadd("key$i", $i);
    $redis->expire("key$i", 3);
    $redis->exec();
}
$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "$run_time seconds\n";
// Close the connection
$redis->close();

function microtime_float()
{
    list($usec, $sec) = explode(' ', microtime());
    return ((float)$usec + (float)$sec);
}
?>
Here I hit a bottleneck: many companies need to process massive data, and 5,000 operations per second falls far short of their needs. Because redis's master/slave support gives it a bigger advantage than memcache, for the sake of future data I had to stick with redis. At this point a new method emerged: the pipeline feature provided by phpredis, which can truly package several commands into one request and greatly improves running speed; storing the 500,000 data items took only 58 seconds. The test code (redis4.php) is as follows:
<?php
// Connect to the local redis server (default port assumed)
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$time_start = microtime_float();
// Save data: pipeline the two commands so they go out in one request
for ($i = 0; $i < 100000; $i++) {
    $pipe = $redis->pipeline();
    $pipe->sadd("key$i", $i);
    $pipe->expire("key$i", 3);
    $replies = $pipe->execute();
}
$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "$run_time seconds\n";
// Close the connection
$redis->close();

function microtime_float()
{
    list($usec, $sec) = explode(' ', microtime());
    return ((float)$usec + (float)$sec);
}
?>
This approach packages the value-assignment operation and the expiration-time operation into a single request, greatly improving running efficiency.
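The loop above still creates and executes one pipeline per key, so each iteration remains one round trip. Round trips can be cut further by batching many keys into a single pipeline before calling execute. This is my own variation, not from the measurements above, and it assumes the same local server and the same phpredis pipeline()/execute() calls used in redis4.php:

```php
<?php
// Sketch: batch 1000 keys per pipeline instead of one pipeline per key,
// reducing network round trips from ~1 per key to ~1 per 1000 keys.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$total = 100000;
$batch = 1000;
for ($i = 0; $i < $total; $i += $batch) {
    $pipe = $redis->pipeline();
    for ($j = $i; $j < $i + $batch; $j++) {
        $pipe->sadd("key$j", $j);
        $pipe->expire("key$j", 3);
    }
    $pipe->execute(); // one round trip flushes the whole batch
}
$redis->close();
?>
```

The trade-off is client-side memory: the pipeline buffers all queued commands (and their replies) until execute() is called, so the batch size should stay bounded.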
Redis installation: http://mwt198668.blog.163.com/blog/static/48803692201132141755962/
Memcache installation: http://blog.csdn.net/barrydiu/article/details/3936270
Redis master/slave setup: http://www.jzxue.com/fuwuqi/fuwuqijiqunyuanquan/201104/15-7117.html
Memcache master/slave setup: http://www.cnblogs.com/yuanermen/archive/2011/05/19/2051153.html
From: http://blog.csdn.net/a923544197/article/details/7594814