First, my own situation: I first used memcache, a server-side key-value cache, to store data that is not very large but frequently used and needs a fast response.
Later, Redis was used in another project, so I looked into Redis next. It turned out only the PHP extension had to be installed locally, because the Redis server itself runs on a separate machine. In practice its usage is basically the same as memcache, with just a few different parameters. Of course, the caching behavior of the two is not the same; as for exactly where they differ, I gathered some material and added my own summary below.
1. Both Redis and memcache store data in memory, i.e. they are in-memory databases. But memcache can also be used to cache other things, such as images and videos.
The comparison below looks at Redis, memcache, and MongoDB along the following dimensions; corrections and criticism are welcome.
1. Performance
All three offer relatively high performance, so performance is not a bottleneck for us.
Generally speaking, in terms of TPS, Redis and memcache are roughly equal and both beat MongoDB.
2. Convenience of operations
memcache offers only a single, simple data structure.
Redis is richer; for data manipulation Redis has the advantage and needs fewer network I/O round trips (see the sketch after this list).
MongoDB supports rich data expressions and indexes; it is the closest to a relational database and has a very rich query language.
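To make the point about fewer network round trips concrete, here is a minimal sketch (assuming the phpredis extension and a Redis server at 127.0.0.1:6379; the key name and fields are made up): a Redis hash lets you update a single field of a record on the server side, whereas with memcache you would typically fetch a serialized blob, modify it in PHP, and write the whole thing back.
<?php
// Minimal sketch: store a structured record as a Redis hash, then update one
// field server-side instead of rewriting the whole value (as memcache requires).
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// One round trip to write the whole record
$redis->hMSet('user:1001', array('name' => 'alice', 'visits' => 0, 'plan' => 'free'));

// One command to bump a single field, one to read the record back
$redis->hIncrBy('user:1001', 'visits', 1);
print_r($redis->hGetAll('user:1001'));
?>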
3. Memory usage and data volume
Redis added its own VM feature after version 2.0 to break through the physical memory limit; an expiration time can be set per key-value pair (similar to memcache).
memcache's maximum usable memory can be adjusted, and it evicts entries with an LRU algorithm.
MongoDB is suited to storing large volumes of data; it relies on the operating system's virtual memory for memory management and is quite memory-hungry, so the service should not be deployed together with other services.
4. Availability (single point of failure)
Regarding single points of failure:
Redis relies on the client for distributed reads and writes. In master-slave replication, every time the slave reconnects to the master it pulls a complete snapshot; there is no incremental replication, which raises performance and efficiency concerns, so the single-point problem is fairly complicated. Sharding requires the application to set up a consistent-hashing mechanism. An alternative is to bypass Redis's own replication and do proactive replication yourself (writing to multiple stores), or switch to incremental replication (which you would have to implement), trading consistency against performance.
Memcache has no data redundancy mechanism, nor does it need one; for fault tolerance, rely on a mature hash or ring algorithm (consistent hashing, see the sketch after this list) to soften the jitter caused by a single node failing.
MongoDB supports master-slave, replica sets (which internally use a Paxos-style election algorithm for automatic failure recovery), and auto-sharding, shielding the client from the failover and partitioning mechanisms.
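As a concrete illustration of the client-side consistent hashing that memcache relies on, here is a minimal sketch using the php-memcached extension (a different extension from the Memcache class used in the tests below); the server addresses are hypothetical.
<?php
// Minimal sketch: distribute keys over several memcached nodes with a
// consistent (ketama-compatible) hash ring, so a single failed node only
// remaps the keys that lived on it. Server addresses are made up.
$m = new Memcached();
$m->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
$m->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
$m->addServers(array(
    array('10.0.0.1', 11211),
    array('10.0.0.2', 11211),
    array('10.0.0.3', 11211),
));

$m->set('session:abc123', 'some value', 300);   // 300-second expiry
var_dump($m->get('session:abc123'));
?>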
5. Reliability (persistence)
Regarding data persistence and data recovery:
Redis supports both snapshots and AOF: it relies on snapshots for persistence, and AOF adds reliability at some cost to performance.
memcache does not support persistence; it is usually used purely as a cache, to improve performance.
MongoDB has supported persistence and reliability via a binlog-style journal since version 1.8.
6. Data consistency (transaction support)
In concurrent scenarios, Memcache uses CAS (check-and-set) to guarantee consistency.
Redis's transaction support is weak; it can only guarantee that the operations in a transaction are executed consecutively (see the sketch after this list).
MongoDB does not support transactions.
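For reference, a minimal sketch of Redis's optimistic check-and-set pattern using WATCH plus MULTI/EXEC (assuming the phpredis extension; the key name is made up):
<?php
// Minimal sketch: WATCH aborts the MULTI/EXEC transaction if the watched key
// is modified by another client in the meantime (optimistic locking).
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$redis->watch('inventory:item42');
$stock = (int) $redis->get('inventory:item42');

if ($stock > 0) {
    $ret = $redis->multi()
                 ->decr('inventory:item42')
                 ->exec();              // returns false if the key was changed
    if ($ret === false) {
        echo "key changed concurrently, retry\n";
    }
} else {
    $redis->unwatch();
}
?>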
7. Data analysis
MongoDB has built-in data analysis functionality (MapReduce); the other two do not.
8. Application scenarios
Redis: operations and computations that are more performance-sensitive, on smaller data volumes.
memcache: used in dynamic systems to reduce database load and improve performance; as a cache it improves performance (suited to read-heavy, write-light workloads; for large amounts of data, sharding can be used).
MongoDB: mainly solves the problem of access efficiency for massive data.
I have been studying key-value storage for a while and simply want to jot down my impressions. I will not cover the installation and use of memcache and Redis here; I will just describe the differences between the two options. Some of these impressions and test results may not be enough to prove anything, so please correct me if anything is wrong. Over the past two days of study I have kept finding flaws in my own understanding, and every day I end up overturning the ideas I had that morning. Enough rambling.
A test of storing 500,000 items found the following single-command throughput:
Memcache: approx. 30,000 commands per second
Redis: approx. 10,000 commands per second
Moreover, one big advantage of memcache is that the expiration time can be set directly in the same set() call, while in the tests below Redis needs two commands to set the key and its expiration time. That halves Redis's effective rate here to about 5,000 writes per second, which is a bit too slow for many requirements (but see the caveat below).
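One caveat: for plain string values, phpredis offers setex(), which sets the value and its TTL in a single command; the two-command issue mainly arises because the tests below store values in sets via sadd(), where a separate expire() call is needed. A minimal sketch, assuming a local Redis at 127.0.0.1:6379:
<?php
// Minimal sketch: SETEX writes a string value and its TTL in one command,
// avoiding the extra round trip of a separate EXPIRE call.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$redis->setex('key1', 3, 'value1');   // key, TTL in seconds, value
var_dump($redis->ttl('key1'));        // roughly 3
?>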
The test code for Memcache is as follows:
<?php
$mem = new Memcache;
$mem->connect("127.0.0.1", 11211);

$time_start = microtime_float();

// Save data
for ($i = 0; $i < 100000; $i++) {
    $mem->set("key$i", $i, 0, 3);   // flag 0 (no compression), 3-second expiry
}

$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "spent $run_time s\n";

function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>
The test code for Redis (redis1.php) is as follows; this code takes about 10 seconds to run:
<?php
// Connect
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$time_start = microtime_float();

// Save data
for ($i = 0; $i < 100000; $i++) {
    $redis->sadd("key$i", $i);
}

$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "spent $run_time s\n";

// Close connection
$redis->close();

function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>
If the expiration time needs to be set at the same time as the key, it takes about 20 seconds to run. The test code (redis2.php) is as follows:
<?php
// Connect
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$time_start = microtime_float();

// Save data
for ($i = 0; $i < 100000; $i++) {
    $redis->sadd("key$i", $i);
    $redis->expire("key$i", 3);   // separate command for the 3-second expiry
}

$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "spent $run_time s\n";

// Close connection
$redis->close();

function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>
Later I found out online that Redis has a feature called transactions: via MULTI, a block of commands is executed sequentially as one atomic unit, so a complete functional block can be executed in one go. Unfortunately, testing showed that wrapping the commands in MULTI did not reduce the number of requests; on the contrary, the MULTI and EXEC commands are each sent as requests of their own, so the running time became four times the original, i.e. the running time of four commands. The test code (redis3.php) is as follows:
<?php
// Connect
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$time_start = microtime_float();

// Save data
for ($i = 0; $i < 100000; $i++) {
    $redis->multi();
    $redis->sadd("key$i", $i);
    $redis->expire("key$i", 3);
    $redis->exec();
}

$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "spent $run_time s\n";

// Close connection
$redis->close();

function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>
The problem is the bottleneck: plenty of companies need to process massive amounts of data, and 5,000 operations per second falls far short of that. And since Redis has a bigger advantage than memcache for master-slave setups, with future data growth in mind we have to use Redis. At this point there is a new option: phpredis provides a pipeline feature that can genuinely package several commands into a single request, which greatly improves execution speed; the 500,000 writes took only 58 seconds. The test code (redis4.php) is as follows:
<?php
// Connect
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$time_start = microtime_float();

// Save data
for ($i = 0; $i < 100000; $i++) {
    $pipe = $redis->pipeline();      // queue the following commands
    $pipe->sadd("key$i", $i);
    $pipe->expire("key$i", 3);
    $replies = $pipe->exec();        // send them as a single request
}

$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "spent $run_time s\n";

// Close connection
$redis->close();

function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>
This approach packages the write and the expiration-time setting into a single request, greatly improving efficiency.
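Going a step further, the pipeline does not have to be opened and flushed once per key. Here is a sketch under the same assumptions (phpredis, local Redis) that sends the commands in batches, further reducing the number of round trips; the batch size of 1,000 is an arbitrary choice:
<?php
// Sketch: flush the pipeline in batches instead of once per key.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$batch = 1000;
$pipe = $redis->pipeline();
for ($i = 0; $i < 100000; $i++) {
    $pipe->sadd("key$i", $i);
    $pipe->expire("key$i", 3);
    if (($i + 1) % $batch === 0) {
        $pipe->exec();                 // one round trip for 2 * $batch commands
        $pipe = $redis->pipeline();
    }
}
$pipe->exec();                         // flush any remaining commands
$redis->close();
?>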
Redis installation: http://mwt198668.blog.163.com/blog/static/48803692201132141755962/
Memcache installation: http://blog.csdn.net/barrydiu/article/details/3936270
Redis master-slave setup: http://www.jzxue.com/fuwuqi/fuwuqijiqunyuanquan/201104/15-7117.html
Memcache master-slave setup: http://www.cnblogs.com/yuanermen/archive/2011/05/19/2051153.html
This article is from: http://blog.csdn.net/a923544197/article/details/7594814