A single Redis instance runs into a serious bottleneck once its memory usage exceeds about 50 GB. While looking for a way around this, I came across this open-source proxy. It is very capable, including support for last.fm's ketama consistent hashing algorithm, and it fully covers my current requirements.
The source code is open source on GitHub: https://github.com/twitter/twemproxy
I will not go through the download and installation process in detail here; it is covered in the README. The following describes how to configure a cluster.
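For reference, a typical build looks roughly like this (a sketch based on the upstream README; adjust to your environment):

$ git clone https://github.com/twitter/twemproxy.git
$ cd twemproxy
$ autoreconf -fvi
$ ./configure
$ make
$ src/nutcracker --help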
After the installation is complete, modify the configuration file.
alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:8604:1
   - 127.0.0.1:8605:1
   - 127.0.0.1:8606:1
   - 127.0.0.1:8607:1
Start four Redis servers on the ports listed in the configuration file (8604-8607). Each entry in the servers list uses the ip:port:weight format; the weight is set to 1 here, and no database index needs to be configured.
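If you just want to reproduce this setup locally, the four instances can be started with a simple loop (a minimal sketch; it assumes redis-server is on the PATH and default settings are otherwise acceptable):

$ for p in 8604 8605 8606 8607; do redis-server --port $p --daemonize yes; done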
After starting redis, start the proxy service.
$ nutcracker -d
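If the configuration file is not in nutcracker's default location, you can point the proxy at it explicitly and check the syntax first (assuming the file above was saved as conf/nutcracker.yml):

$ nutcracker -t -c conf/nutcracker.yml
$ nutcracker -d -c conf/nutcracker.yml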
nutcracker is now running, and you can use the Redis client to run a simple test.
$ ./redis-cli -p 22121 set abc abc
$ ./redis-cli -p 22121 get abc
Both commands return the expected results, so the proxy is forwarding requests correctly.
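To see which backend instance actually received the key, you can query each Redis server directly (a quick check; it assumes redis-cli sits in the current directory, as in the commands above):

$ for p in 8604 8605 8606 8607; do echo -n "$p: "; ./redis-cli -p $p exists abc; done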
To test the proxy's performance, run the following script, which performs batch writes:
#!/usr/bin/perl
# Program:
# History:
# Author: luyao(yaolu1103@gmail.com)
# Date: 2013/02/22 17:09:30
# Writes 1000 keys through the proxy; the first argument is used as a key prefix.
use strict;
my @port = (8604, 8605, 8606, 8607);
my $pre = shift;
for (my $i = 0; $i < 1000; $i++) {
    my $num = rand;
    `./redis-cli -p 22121 set $pre$i $i`;
}
Invoke it from the shell:
$ for i in a b c d e f g h i j k l m n o p q r s t u v w x y z; do sh -c "time perl test.pl $i &"; done
The result is as follows:
real 0m3.315s  user 0m0.457s  sys 0m1.473s
real 0m3.391s  user 0m0.458s  sys 0m1.512s
real 0m3.433s  user 0m0.459s  sys 0m1.455s
real 0m3.475s  user 0m0.449s  sys 0m1.465s
real 0m3.442s  user 0m0.472s  sys 0m1.465s
real 0m3.483s  user 0m0.471s  sys 0m1.421s
real 0m3.487s  user 0m0.467s  sys 0m1.459s
real 0m3.440s  user 0m0.480s  sys 0m1.425s
real 0m3.498s  user 0m0.452s  sys 0m1.428s
real 0m3.403s  user 0m0.445s  sys 0m1.411s
real 0m3.505s  user 0m0.479s  sys 0m1.416s
real 0m3.495s  user 0m0.461s  sys 0m1.483s
real 0m3.424s  user 0m0.465s  sys 0m1.422s
real 0m3.477s  user 0m0.496s  sys 0m1.403s
real 0m3.521s  user 0m0.454s  sys 0m1.474s
real 0m3.494s  user 0m0.491s  sys 0m1.399s
real 0m3.550s  user 0m0.446s  sys 0m1.435s
real 0m3.539s  user 0m0.445s  sys 0m1.442s
real 0m3.527s  user 0m0.501s  sys 0m1.447s
real 0m3.477s  user 0m0.468s  sys 0m1.442s
real 0m3.569s  user 0m0.449s  sys 0m1.405s
real 0m3.512s  user 0m0.462s  sys 0m1.428s
real 0m3.539s  user 0m0.472s  sys 0m1.388s
real 0m3.584s  user 0m0.483s  sys 0m1.396s
real 0m3.529s  user 0m0.468s  sys 0m1.396s
real 0m3.554s  user 0m0.459s  sys 0m1.398s
Based on the above data, the proxy handled 1000 * 26 = 26,000 writes in roughly 3.5 seconds, which works out to more than 7,000 QPS. Considering that all the Redis instances and the write clients were running on the same machine, the real throughput should be higher than this.
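As a quick sanity check of that figure (a rough calculation using the approximate wall-clock time, not an exact benchmark):

$ echo "26000 / 3.5" | bc -l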
Finally, check the distribution of keys:
8604: db0:keys=7760,expires=0
8605: db0:keys=6010,expires=0
8606: db0:keys=6545,expires=0
8607: db0:keys=5685,expires=0
The keys add up to 26,000 and are spread reasonably evenly across the four instances. Considering that the keys are very simple (a letter from a to z followed by 0-999), this distribution is acceptable.
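For reference, the per-instance numbers above can be collected with a loop like this (a sketch; again assuming redis-cli is in the current directory):

$ for p in 8604 8605 8606 8607; do echo -n "$p: "; ./redis-cli -p $p info | grep "^db0"; done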