Redis (Remote Dictionary Server) is an open-source (BSD-licensed), in-memory data structure store that can be used as a database, cache, and message broker. It supports several value types, including strings, lists (linked lists), sets, sorted sets (zset), and hashes. Redis supports master-slave replication: data can be synchronized from a master to any number of slaves, and a slave can in turn act as the master for other slaves, which allows Redis to form a single-layer replication tree.
MySQL and Redis each have their own data synchronization mechanisms. The commonly used MySQL master/slave mode works by having the slave parse the master's binlog; this replication is asynchronous, and the delay is only negligible when the servers sit on the same intranet. In theory we could parse MySQL's binlog in the same way and insert the data into Redis. However, that requires a deep understanding of binlog and of MySQL itself, and because binlog comes in statement, row, and mixed formats, parsing it correctly for synchronization takes considerable work. A cheaper approach is used here: borrow the mature MySQL UDF mechanism to push MySQL data into Gearman first, and then synchronize the data to Redis with a PHP Gearman worker that you write yourself. There are more moving parts than with binlog parsing, but the implementation cost is lower and it is easier to operate.
Gearman is a distributed task-distribution framework. Its components:
Gearman Job Server: the Gearman core program; it needs to be compiled, installed, and run in the background as a daemon.
Gearman Client: can be understood as the requester of a task.
Gearman Worker: the real executor of the task; you generally write its specific logic yourself and run it as a daemon. When the Gearman Worker receives the task content passed by the Gearman Client, it processes the tasks in order.
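To make the three roles concrete, here is a purely illustrative PHP pair; the function name example_task is made up, and it assumes the gearman PHP extension and a job server on 127.0.0.1:4730.
Client (requester), e.g. client.php:
<?php
// client: submits a task and returns immediately (the job is queued on gearmand)
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('example_task', 'hello');
?>
Worker (executor), run as a separate long-lived process:
<?php
// worker: registers the task name and processes jobs as they arrive
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('example_task', function (GearmanJob $job) {
    return strtoupper($job->workload());   // the payload sent by the client
});
while ($worker->work());
?>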
Approximate process:
The MySQL trigger written below acts as the Gearman client: updating or inserting into the table is equivalent to submitting a task. The relational data is mapped to JSON format by the lib_mysqludf_json UDF library, the task is then pushed onto Gearman's task queue through the gearman-mysql-udf plugin, and finally redis_worker.php, the Gearman worker side, completes the update of the Redis database.
lib_mysqludf_json is used because Gearman only accepts strings as task payloads; with lib_mysqludf_json, the data in MySQL can be encoded as a JSON string.
The lib_mysqludf_json UDF library maps relational data to JSON format. Normally, mapping database rows to JSON is something that would be done in application code.
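For example, for the row (1, 'test1') the trigger shown later calls json_object(NEW.id as `id`, NEW.name as `name`), so the worker receives a JSON string roughly like the one below (the exact payload shown here is illustrative) and can decode it directly in PHP:
<?php
// hypothetical payload produced by lib_mysqludf_json for one row
$workload = '{"id":1,"name":"test1"}';
$row = json_decode($workload, true);      // decode into an associative array
echo $row['id'] . ' => ' . $row['name'];  // prints: 1 => test1
?>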
1. Server2:
Configure MySQL and Redis:
yum install -y mysql-server
/etc/init.d/mysqld start
netstat -antlpe ## mysql port 3306 is open
tar zxf redis-3.0.2.tar.gz
cd redis-3.0.2
yum install -y gcc
make
make install
cd /root/redis-3.0.2/utils
./install_server.sh ## Install the server launcher
/etc/init.d/redis_6379 start
redis-cli monitor ## watch commands in real time; an OK response means monitoring is working normally
redis-cli ## temporarily set some redis data
Use Redis as a cache server for MySQL:
cd /root
yum install nginx-1.8.0-1.el6.ngx.x86_64.rpm php-5.3.3-38.el6.x86_64.rpm \
php-cli-5.3.3-38.el6.x86_64.rpm php-common-5.3.3-38.el6.x86_64.rpm php-devel-5.3.3-38.el6.x86_64.rpm php-fpm-5.3.3-38.el6.x86_64.rpm \
php-gd-5.3.3-38.el6.x86_64.rpm php-mbstring-5.3.3-38.el6.x86_64.rpm \
php-mysql-5.3.3-38.el6.x86_64.rpm php-pdo-5.3.3-38.el6.x86_64.rpm
php -m ## list the PHP modules that are loaded
vim /etc/php.ini
date.timezone = Asia/Shanghai
cat /etc/passwd
vim /etc/php-fpm.d/www.conf
user = nginx
group = nginx
/etc/init.d/php-fpm restart
Configure nginx:
vim /etc/nginx/conf.d/default.conf
location / {
    root /usr/share/nginx/html;
    index index.html index.htm index.php;
}
location ~ \.php$ {
    root html;
    fastcgi_pass 127.0.0.1:9000; ## fastcgi communication
    fastcgi_index index.php; ## fastcgi index file
    fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
    include fastcgi_params;
}
cd /usr/share/nginx/html/
vim index.php
<?php
phpinfo();
?>
/etc/init.d/nginx start
Website visit http://172.25.85.2
Run php on nginx:
yum install -y unzip
unzip phpredis-master.zip
cd phpredis-master
phpize
./configure
make
make install
cd /usr/lib64/php/modules/
ls
cd /etc/php.d
cp mysql.ini redis.ini
vim redis.ini
Modify the content to the following:
extension = redis.so
/etc/init.d/php-fpm reload
php -m | grep redis
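php -m only proves that the module loads; a throwaway script (hypothetical file name redis_test.php) can confirm it actually talks to the Redis server started above, assuming it listens on 127.0.0.1:6379:
<?php
// phpredis smoke test
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->set('ping', 'pong');
echo $redis->get('ping') . "\n";   // should print "pong"
?>
Run it with php -f redis_test.php.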
mysql
mysql> show databases; ## Check if there is a test database, if not, create one.
mysql> quit
cd /root/redis
mysql < test.sql
cat test.sql
use test;
CREATE TABLE `test` (`id` int(7) NOT NULL AUTO_INCREMENT, `name` char(8)
DEFAULT NULL, PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `test` VALUES (1,'test1'),(2,'test2'),(3,'test3'),(4,'test4'),
(5,'test5'),(6,'test6'),(7,'test7'),(8,'test8'),(9,'test9');
#DELIMITER $$
#CREATE TRIGGER datatoredis AFTER UPDATE ON test FOR EACH ROW BEGIN
#SET @RECV = gman_do_background('syncToRedis',
#json_object(NEW.id as `id`, NEW.name as `name`));
#END $$
#DELIMITER ;
mysql
mysql> use test;
mysql> select * from test;
mysql> grant all on *.* to [email protected] identified by 'westos';
mysql> quit
mysql -uredis -pwestos
mysql> use test;
mysql> select * from test;
mysql> quit
cp /root/test.php /usr/share/nginx/html/
cd /usr/share/nginx/html/
rm -rf index.php
mv test.php index.php
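test.php itself is not reproduced in this article; a minimal read-through cache page along the same lines might look like the sketch below. The host, the credentials (redis/westos from the grant above) and the id => name keying are assumptions based on the earlier steps:
<?php
// Read-through cache sketch: serve rows from Redis and fall back to MySQL,
// repopulating the cache on a miss. All connection details are assumptions.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$source = 'redis';
$rows = array();
for ($id = 1; $id <= 9; $id++) {
    $name = $redis->get($id);
    if ($name === false) {                         // cache miss: go to MySQL
        $source = 'mysql';
        $db = new PDO('mysql:host=localhost;dbname=test', 'redis', 'westos');
        foreach ($db->query('SELECT id, name FROM test') as $row) {
            $redis->set($row['id'], $row['name']); // repopulate the cache
        }
        $name = $redis->get($id);
    }
    $rows[$id] = $name;
}

echo "data source: $source<br/>";
foreach ($rows as $id => $name) {
    echo "$id => $name<br/>";
}
?>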
Detection:
Visit http://172.25.85.2 in a browser ## Redis now serves as the cache in front of MySQL
redis-cli
127.0.0.1:6379> get 1
"test1"
127.0.0.1:6379> get 2
"test2"
127.0.0.1:6379> quit
mysql
mysql> use test;
mysql> update test set name='westos' where id=1;
mysql> quit
Visit http://172.25.85.2 again ## the data in MySQL was updated, but the data in Redis is not updated automatically
redis-cli
127.0.0.1:6379> get 1
"test1"
redis-cli
127.0.0.1:6379> del 1
Website visit http://172.25.85.2
redis-cli
127.0.0.1:6379> set 1 westos
Visit http://172.25.85.2
2. Configure gearman to automatically update data in redis:
cd /root/redis
yum install -y gearmand-1.1.8-2.el6.x86_64.rpm libgearman-1.1.8-2.el6.x86_64.rpm
/etc/init.d/gearmand start
netstat -antlpe
tar zxf gearman-1.1.2.tgz
cd gearman-1.1.2
phpize
./configure ## this first run fails because the libgearman and libevent devel packages below are not installed yet
cd ..
yum install -y libgearman-devel-1.1.8-2.el6.x86_64.rpm \
libevent-devel-1.4.13-4.el6.x86_64.rpm \
libevent-doc-1.4.13-4.el6.noarch.rpm \
libevent-headers-1.4.13-4.el6.noarch.rpm
cd gearman-1.1.2
./configure
make
make install
cd /etc/php.d
cp redis.ini gearman.ini
vim gearman.ini
Modify the content to the following:
extension = gearman.so
/etc/init.d/php-fpm reload
php -m | grep gearman
yum install -y mysql-devel
cd /root/redis
The lib_mysqludf_json UDF library function maps relational data to JSON format. Usually, the data in the database is mapped to JSON format, which is converted programmatically.
unzip lib_mysqludf_json-master.zip
cd lib_mysqludf_json-master
gcc $(mysql_config --cflags) -shared -fPIC -o lib_mysqludf_json.so lib_mysqludf_json.c
ls
mysql
mysql> show global variables like 'plugin_dir';
mysql> quit
cd /root/lib_mysqludf_json-master
cp lib_mysqludf_json.so /usr/lib64/mysql/plugin/
#Register UDF Functions
mysql
mysql> create function json_object returns string soname 'lib_mysqludf_json.so';
mysql> select * from mysql.func;
mysql> quit
#Install gearman-mysql-udf
This plugin lets MySQL submit jobs to Gearman's distributed queue.
cd /root/redis
tar zxf gearman-mysql-udf-0.6.tar.gz
cd gearman-mysql-udf-0.6
yum install -y gcc-c++
./configure --libdir=/usr/lib64/mysql/plugin/
make
make install
cd /usr/lib64/mysql/plugin/
mysql -p
#Register UDF Functions
mysql> create function gman_do_background returns string soname 'libgearman_mysql_udf.so';
mysql> create function gman_servers_set returns string soname 'libgearman_mysql_udf.so';
#View function
mysql> select * from mysql.func;
mysql> quit
netstat -antlpe
#Specify gearman's service information
mysql
mysql> select gman_servers_set('127.0.0.1:4730');
mysql> quit
#Write the MySQL trigger (adapt it to your actual schema)
cd /root/redis
vim test.sql
Uncomment the trigger definition and comment out the CREATE TABLE and INSERT statements (they have already been executed):
DELIMITER $$
CREATE TRIGGER datatoredis AFTER UPDATE ON test FOR EACH ROW BEGIN
SET @RECV = gman_do_background('syncToRedis', json_object(NEW.id as `id`, NEW.name as `name`));
END $$
DELIMITER ;
mysql < test.sql
mysql
mysql> show triggers from test;
mysql> quit
cp worker.php /usr/local/bin/
cd /usr/local/bin/
vim worker.php
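The contents of worker.php are not listed here; a sketch of such a Gearman worker follows. The function name syncToRedis matches the trigger above; the server addresses and the id => name keying are assumptions:
<?php
// Gearman worker sketch: receives the JSON produced by the MySQL trigger
// and writes it into Redis (assumes the gearman and redis PHP extensions).
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);            // gearmand started earlier

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);               // redis-server started earlier

// register the function name used by gman_do_background() in the trigger
$worker->addFunction('syncToRedis', function (GearmanJob $job) use ($redis) {
    $row = json_decode($job->workload(), true);   // e.g. {"id":1,"name":"redhat"}
    if (isset($row['id'], $row['name'])) {
        $redis->set($row['id'], $row['name']);    // key = id, value = name
    }
    return '';                                    // background job, result unused
});

while ($worker->work());                          // loop forever, handling jobs
?>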
nohup php worker.php &
ps ax
mysql
mysql> use test;
mysql> update test set name='redhat' where id=1;
mysql> quit
Visit http://172.25.19.1 in a browser and refresh ## the page should now show the updated value (to be verified)
redis-cli
127.0.0.1:6379> get 1
"redhat"
3. Redis master-slave replication
When a slave is configured, whether it is connecting to the master for the first time or reconnecting, it sends a synchronization command. The master then starts a background save and buffers all commands that modify data. When the background save completes, the master transfers the data file to the slave, which saves it to disk and loads it into memory; the master then sends all of the buffered modification commands as a stream, using the Redis protocol itself.
On server3 and server4:
vim /etc/redis/6379.conf
slaveof 172.25.85.2 6379
/etc/init.d/redis_6379 restart
server2:
vim /root/redis-3.0.2/sentinel.conf ## Configure sentinel
sentinel monitor mymaster 172.25.85.2 6379 2
## Sentinel monitors a master server named mymaster at 172.25.85.2 port 6379. To judge this master as failed, at least 2 Sentinels must agree; if fewer Sentinels agree, automatic failover will not be performed.
sentinel down-after-milliseconds mymaster 10000
## the number of milliseconds after which Sentinel considers the server to be down.
sentinel failover-timeout mymaster 60000
grep -v ^# sentinel.conf > /etc/sentinel.conf
redis-sentinel /etc/sentinel.conf
scp /etc/sentinel.conf [email protected]:/etc/
scp /etc/sentinel.conf [email protected]:/etc/
redis-cli -h 172.25.45.7 -p 26379 info
server4:
vim /etc/sentinel.conf
port 26379
dir "/ tmp"
.
sentinel monitor mymaster 172.25. 6379 2
sentinel down-after-milliseconds mymaster 10000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1 ## specifies how many slaves may synchronize with the new master at the same time during a failover. The smaller this number, the longer the failover takes to complete.
scp /etc/sentinel.conf [email protected]:/etc/
Then check it on server7: redis-cli -h 172.25.85.2 -p 26379 info