[Benchmark] Performance comparison of Redis, MongoDB, and Memcache


First, let me describe my use case:

I use memcache as a server-side cache for key-value pairs, storing data that is not accessed very often but needs a fast response.

Later I needed Redis in another place, so I started studying it. A quick check showed that the php extension was already installed; since the Redis server runs on another machine, I did not install the server locally. The usage is basically the same as memcache, apart from a few parameters. Of course, their caching behavior differs. So what is the difference? Below is some information I collected, plus my own summary.



Both Redis and Memcache store data in memory; they are in-memory databases. However, memcache can also be used to cache other things, such as images and videos.
 

Let's compare Redis, Memcache, and MongoDB along the following dimensions.

1. Performance

All three offer fairly high performance, so performance should not be a bottleneck for us.

In terms of TPS, Redis and Memcache are roughly similar, and both are higher than MongoDB.

2. Ease of operation

Memcache offers only a single, simple key-value data structure.

Redis is much richer in data structures; for data operations it can do more work on the server side, which reduces the number of network I/O round trips (see the sketch after this section).

MongoDB supports rich data expressions and indexes, is the closest of the three to a relational database, and supports a very rich query language.
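To make the data-structure difference concrete, here is a minimal sketch assuming the memcache and phpredis extensions are installed (host, port, and key names are placeholders): memcache only stores opaque values, so any structure has to be serialized on the client, while Redis can update hashes and lists directly on the server.

<?php
// Memcache: flat key-value only; structure must be serialized on the client
$mem = new Memcache;
$mem->connect('127.0.0.1', 11211);
$mem->set('user:1', serialize(array('name' => 'alice', 'visits' => 1)), 0, 60);

// Redis: native data structures, modified on the server without read-modify-write
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->hSet('user:1', 'name', 'alice');          // one field of a hash
$redis->hIncrBy('user:1', 'visits', 1);           // atomic counter inside the hash
$redis->rPush('recent:logins', 'user:1');         // append to a list
$last10 = $redis->lRange('recent:logins', -10, -1); // read back the last 10 entries
?>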

3. Memory usage and data volume

Redis added VM (virtual memory) features in version 2.0 to break through the limits of physical memory, and an expiration time can be set on each key (similar to memcache).

Memcache's maximum usable memory can be adjusted, and it uses an LRU algorithm to evict entries when that memory is full.

MongoDB is suitable for storing large volumes of data. It relies on the operating system's virtual memory for memory management and its memory consumption is quite high, so the data service should not be deployed together with other services.

4. Availability (single point of failure)

Regarding single points of failure:

Redis relies on the client for distributed reads and writes. With master-slave replication, every time the slave reconnects to the master it depends on a full snapshot rather than incremental replication, which raises performance and efficiency concerns, so solving the single-point-of-failure problem is complicated; automatic sharding is not supported, and consistent hashing has to be implemented by the application.

One alternative is to replace Redis's own replication mechanism with active replication (storing multiple copies) or incremental replication, trading off consistency against performance.

Memcache itself has no data-redundancy mechanism, nor does it really need one. For fault tolerance it relies on mature hashing or ring algorithms on the client side to limit the jitter caused by a single node failing (a minimal client-side sketch follows this section).

MongoDB supports master-slave replication, replica sets (which use an internal Paxos-style election algorithm and recover from failures automatically), and auto-sharding, shielding the client from failover and sharding details.
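Here is a minimal sketch of the client-side ring hashing mentioned for memcache above, assuming the php-memcached (libmemcached-based) extension rather than the memcache one, with placeholder server addresses: the distribution is configured entirely on the client, and with a ketama-compatible consistent hash the loss of one node only remaps that node's share of the keys.

<?php
// Consistent hashing is a client-side concern; the memcached servers know nothing about it.
$mc = new Memcached();
$mc->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
$mc->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true); // standard ketama ring
$mc->addServers(array(
    array('10.0.0.1', 11211),
    array('10.0.0.2', 11211),
    array('10.0.0.3', 11211),
));
// Keys are spread around the ring; if 10.0.0.2 goes down, only roughly 1/3 of the keys remap.
$mc->set('session:42', 'payload', 60);
?>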

5. Reliability (persistence)

For data persistence and data recovery:

Redis supports persistence (snapshots and AOF): it relies on snapshots (RDB) for persistence, while AOF improves reliability at some cost in performance.

Memcache does not support persistence; it is normally used only as a cache to improve performance.

MongoDB has used binlog-style journaling since version 1.8 to provide reliable persistence.

6. Data consistency (transaction support)

Memcache uses CAS (check-and-set) to ensure consistency in concurrent scenarios (see the sketch after this section).

Redis's transaction support is relatively weak: a transaction (MULTI/EXEC) only guarantees that the operations inside it are executed consecutively.

MongoDB does not support transactions.
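A minimal sketch of the memcache CAS pattern mentioned above, assuming the older (pre-3.0) php-memcached API in which get() returns the CAS token by reference; the key name and retry count are illustrative. The write succeeds only if nothing modified the value since it was read; otherwise the client re-reads and retries.

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
$mc->set('page:views', 0); // seed the counter so the reads below find it

for ($attempt = 0; $attempt < 5; $attempt++) {
    $counter = $mc->get('page:views', null, $cas);    // read the value plus its CAS token
    if ($mc->cas($cas, 'page:views', $counter + 1)) { // write back only if untouched since the read
        break;                                        // success: no concurrent writer got in between
    }
    // RES_DATA_EXISTS: another client changed the key first, so loop and retry with a fresh token
}
?>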

7. Data analysis

MongoDB has built-in data-analysis support (MapReduce); the other two do not.

8. Application scenarios

Redis: higher-performance operations and computation on relatively small data volumes.

Memcache: used in dynamic systems to reduce database load and improve performance; as a cache it is suitable for read-heavy workloads, and sharding can be added when the data volume grows large.

MongoDB: mainly solves the problem of access efficiency for massive data.

 

 

 

I have been studying key-value storage recently, so let's take a look. I won't go into detail on installing and using memcache and redis; I'll just briefly cover the differences between the two solutions. Some of the comments and test results may not fully explain the problem, so please correct me if you spot anything wrong. During the past two days of study I have constantly been correcting flaws I noticed, and each day I end up overturning the previous day's ideas. OK, enough talk.

After testing with 500,000 stored items, I found the following single-command throughput:

Memcache: about 30,000 requests per second

Redis: about 10,000 requests per second

In addition, one big advantage of memcache is that the expiration time can be set directly in the same call that stores the value. Redis needs two calls to store the key-value pair and then set its expiration, which roughly halves the throughput to about 5,000 operations per second, a bit too slow for many needs. (For plain string values there is a single-command alternative; see the sketch below.)
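One caveat on the two-call point: for plain string values, phpredis also exposes setex(), which stores the value and its TTL in one command and therefore one round trip. The benchmark below uses a Redis set via sadd, where this shortcut does not apply, so the snippet is only a sketch of the string case.

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Two round trips: write the value, then set its expiration separately
$redis->set('token:abc', 'some value');
$redis->expire('token:abc', 3);

// One round trip: SETEX writes the value with a 3-second TTL in a single command
$redis->setex('token:abc', 3, 'some value');
?>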

The memcache test code is as follows:

<?php
$mem = new Memcache;
$mem->connect("127.0.0.1", 11211);

$time_start = microtime_float();

// Save data: 100,000 keys, flag 0, with a 3-second expiration set in the same call
for ($i = 0; $i < 100000; $i++) {
    $mem->set("key$i", $i, 0, 3);
}

$time_end = microtime_float();
$run_time = $time_end - $time_start;

echo "$run_time seconds\n";

function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>

Redis test code (redis1.php); this code takes about 10 seconds:

<?php
// Connection (default host/port assumed)
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$time_start = microtime_float();

// Save data: 100,000 set members, no expiration
for ($i = 0; $i < 100000; $i++) {
    $redis->sadd("key$i", $i);
}

$time_end = microtime_float();
$run_time = $time_end - $time_start;

echo "$run_time seconds\n";

// Close the connection
$redis->close();

function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>

If the expiration time also has to be set when each key is written, execution takes about 20 seconds. The test code is as follows (redis2.php):

<?php
// Connection (default host/port assumed)
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$time_start = microtime_float();

// Save data: 100,000 set members, each followed by a separate EXPIRE call (two round trips per key)
for ($i = 0; $i < 100000; $i++) {
    $redis->sadd("key$i", $i);
    $redis->expire("key$i", 3);
}

$time_end = microtime_float();
$run_time = $time_end - $time_start;

echo "$run_time seconds\n";

// Close the connection
$redis->close();

function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>

Later, I found online that Redis has a feature called transactions: MULTI executes a block of commands in sequence and atomically, which can be used to implement a complete functional module. Unfortunately, testing showed that running the commands inside MULTI does not reduce the number of requests; on the contrary, the MULTI and EXEC commands themselves must also be sent, so the running time becomes four times the original, i.e. the running time of four commands per key. The test code is as follows (redis3.php):

<?php
// Connection (default host/port assumed)
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$time_start = microtime_float();

// Save data: wrap each SADD + EXPIRE pair in a MULTI/EXEC transaction (four commands per key)
for ($i = 0; $i < 100000; $i++) {
    $redis->multi();
    $redis->sadd("key$i", $i);
    $redis->expire("key$i", 3);
    $redis->exec();
}

$time_end = microtime_float();
$run_time = $time_end - $time_start;

echo "$run_time seconds\n";

// Close the connection
$redis->close();

function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>

Here the problem hit a bottleneck: many companies need to handle massive amounts of data, and 5,000 operations per second falls far short of that. But because a Redis master/slave setup has clear advantages over memcache, and with future data growth in mind, I had to stay with Redis. At that point a new option appeared: the pipeline feature provided by phpredis, which can genuinely package several commands into a single request and greatly improves running speed; executing the 500,000-item test took only 58 seconds. The test code is as follows (redis4.php):

<?php
// Connection (default host/port assumed)
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$time_start = microtime_float();

// Save data: pipeline each SADD + EXPIRE pair so both commands go out in a single request
for ($i = 0; $i < 100000; $i++) {
    $pipe = $redis->pipeline();
    $pipe->sadd("key$i", $i);
    $pipe->expire("key$i", 3);
    $replies = $pipe->execute();
}

$time_end = microtime_float();
$run_time = $time_end - $time_start;

echo "$run_time seconds\n";

// Close the connection
$redis->close();

function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>

This approach packages the value assignment and the expiration-time setting into a single request, greatly improving running efficiency.
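Note that redis4.php still opens and executes one pipeline per key, so there is still one request per loop iteration. A further variant, not benchmarked here, is to batch many keys into each pipeline before executing, dividing the number of round trips by the batch size. The sketch below assumes the same pipeline()/execute() client interface used above and an illustrative batch size of 1,000.

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$batchSize = 1000;                       // illustrative: 1,000 keys (2,000 commands) per request
$pipe = $redis->pipeline();
for ($i = 0; $i < 100000; $i++) {
    $pipe->sadd("key$i", $i);
    $pipe->expire("key$i", 3);
    if (($i + 1) % $batchSize === 0) {
        $pipe->execute();                // flush one full batch in a single request
        $pipe = $redis->pipeline();      // start the next batch
    }
}
$pipe->execute();                        // flush any remaining commands
$redis->close();
?>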

 

Redis installation: http://mwt198668.blog.163.com/blog/static/48803692201132141755962/

Memcache installation: http://blog.csdn.net/barrydiu/article/details/3936270

Redis master/slave setup: http://www.jzxue.com/fuwuqi/fuwuqijiqunyuanquan/201104/15-7117.html

Memcache master/slave setup: http://www.cnblogs.com/yuanermen/archive/2011/05/19/2051153.html

From: http://blog.csdn.net/a923544197/article/details/7594814
