PHP: Using the Swoole Extension to Implement Scheduled Synchronization of MySQL Data

Source: Internet
Author: User

Our Nanning head office and several subsidiaries share a call system, and we now need to analyze the call data. Each branch's call server sits on an intranet and is only reachable through port mapping, and the network between the branches and Nanning is unstable, so we need to synchronize the branches' call data to Nanning.

The simplest approach would be to configure MySQL master-slave replication and let it synchronize the data to Nanning. However, the vendor of the call system does not grant us the necessary MySQL privileges, so this approach had to be abandoned.

So we decided to write a simple scheduled synchronization tool in PHP and run it as a background process, which led us to the Swoole extension. After discussion, we found that a branch generates at most about 5,000 records per day, so this solution is feasible.

We use PHP with Swoole as an asynchronous scheduled task system.

MySQL master-slave replication works by shipping the master's binary log (binlog) to the slave and replaying it there. Since we cannot do that with PHP, our tool instead queries data from the master database in batches and inserts it into the slave database in Nanning.
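Since we cannot read the binlog, the synchronization boils down to comparing auto-increment IDs on both sides and copying rows in fixed-size pages. A minimal sketch of the paging arithmetic, separate from any database code (the function name batchWindows is hypothetical; the real ID values would come from SELECT MAX(id) queries described later):

```php
<?php
// Compute the (left, right] id windows used to copy rows in batches.
// $slaveMaxId / $masterMaxId would come from SELECT MAX(id) on each side.
function batchWindows(int $slaveMaxId, int $masterMaxId, int $rows = 50): array
{
    $windows = [];
    if ($slaveMaxId >= $masterMaxId) {
        return $windows; // slave has caught up, nothing to copy
    }
    for ($left = $slaveMaxId; $left < $masterMaxId; $left += $rows) {
        $right = min($left + $rows, $masterMaxId);
        // each window maps to: WHERE id > $left AND id <= $right
        $windows[] = [$left, $right];
    }
    return $windows;
}

print_r(batchWindows(100, 220, 50));
```

Each window becomes one batched SELECT on the master followed by one batched INSERT on the slave, which keeps memory bounded regardless of how far behind the slave is.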

The framework used here is ThinkPHP 3.2.

First, install the PHP extension Swoole. Since we do not need any special build options, we install it quickly with pecl:

pecl install swoole

After the installation completes, add extension=swoole.so to php.ini. Then use phpinfo() to check that the extension loaded successfully.

Once the extension is installed, we can write the business logic.
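The php.ini addition is a single line (the exact ini path varies by distribution; on some systems a conf.d drop-in file is used instead):

```ini
; php.ini (or a conf.d drop-in such as swoole.ini)
extension=swoole.so
```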

Server

1. Start a backend server listening on port 9501.

public function index()
{
    $serv = new \swoole_server("0.0.0.0", 9501);
    $serv->set([
        'worker_num'      => 1,     // generally 1-4 times the number of CPU cores
        'task_worker_num' => 8,     // number of task worker processes
        'daemonize'       => 1,     // run as a daemon
        'max_request'     => 10000, // maximum number of requests per worker
        'task_ipc_mode'   => 2,     // use a message queue, contention mode
    ]);
    $serv->on('receive', [$this, 'onReceive']); // receive data and deliver tasks
    $serv->on('task',    [$this, 'onTask']);    // tasks are processed in this method
    $serv->on('finish',  [$this, 'onFinish']);  // called when a task completes
    $serv->start();
}

2. Receiving data and delivering tasks

public function onReceive($serv, $fd, $from_id, $data)
{
    // parse the task data with json_decode
    $areas = json_decode($data, true);
    foreach ($areas as $area) {
        // deliver one asynchronous task per region
        $serv->task($area);
    }
}

3. Task execution: query data from the master database and write it to the slave database

public function onTask($serv, $task_id, $from_id, $task_data)
{
    $area = $task_data; // the parameter is the region ID
    $rows = 50;         // rows per page

    // Switch the master database connection according to the region ($area).
    // Switch the slave database connection according to the region ($area).
    // Since the process is resident in memory, the MySQL connections can be
    // persistent and reused; an object pool pattern works well here.
    // ......
    // The master is the branch database; the slave is the database in Nanning.
    // ......

    // Get the maximum auto-increment id in the slave database:
    // SELECT MAX(id) AS maxid FROM ss_cdr_cdr_info LIMIT 1
    $slaveMaxIncrementId = ...;
    // Get the maximum auto-increment id in the master database:
    // SELECT MAX(id) AS maxid FROM ss_cdr_cdr_info LIMIT 1
    $masterMaxIncrementId = ...;

    // If the slave has caught up, there is nothing to do.
    if ($slaveMaxIncrementId >= $masterMaxIncrementId) {
        return false;
    }

    // Work out how many pages of $rows records need to be copied.
    $dataNumber = $masterMaxIncrementId - $slaveMaxIncrementId;
    $eachNumber = ceil($dataNumber / $rows);
    $left = 0;

    // Copy the data page by page; remember to free memory promptly.
    for ($i = 0; $i < $eachNumber; $i++) {
        $left  = $i == 0 ? $slaveMaxIncrementId : $left + $rows;
        $right = $left + $rows;
        // Build the batch query condition:
        // $where = "id > $left AND id <= $right";
        $masterData = ...;        // query the page from the master database
        $slaveLastInsertId = ...; // insert the page into the slave database
        unset($masterData, $slaveLastInsertId);
    }

    echo "New AsyncTask [id=$task_id]" . PHP_EOL;
    $serv->finish("$area -> OK");
}

4. Callback when a task completes

public function onFinish($serv, $task_id, $task_data)
{
    echo "AsyncTask[$task_id] Finish: $task_data" . PHP_EOL;
}

Client: pushing tasks

That is basically it for the server. Now let's write the client that pushes the tasks.

public function index()
{
    $client = new \swoole_client(SWOOLE_SOCK_TCP);
    if (!$client->connect('127.0.0.1', 9501, 1)) {
        throw new Exception('error connecting to the Swoole service');
    }
    // the list of regions to synchronize
    $areas = json_encode(['liuzhou', 'yulin', 'beihai', 'guilin']);
    $client->send($areas);
    echo "task sent successfully" . PHP_EOL;
}

That is basically it. For the rest, we write a shell script for scheduled execution: /home/wwwroot/sync_db/crontab/send.sh

#!/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:~/bin
export PATH
# periodically push the asynchronous data synchronization task
/usr/bin/php /home/wwwroot/sync_db/server.php home/index

Finally, add the script to the scheduled tasks with crontab:

# run the data synchronization task at 12:30 every day
30 12 * * * root /home/wwwroot/sync_db/crontab/send.sh
# run the data synchronization task at 19:00 every day
0 19 * * * root /home/wwwroot/sync_db/crontab/send.sh

Tip: it is recommended to add logging to the script and the callbacks, so you know whether each task was pushed and executed successfully.
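A minimal way to add such logging in the PHP callbacks, assuming a writable log path (the helper name syncLog and the path /tmp/sync_db.log are hypothetical; adjust to your deployment):

```php
<?php
// Append a timestamped line so each cron run can be audited later.
function syncLog(string $message, string $file = '/tmp/sync_db.log'): void
{
    $line = date('Y-m-d H:i:s') . ' ' . $message . PHP_EOL;
    // FILE_APPEND keeps history; LOCK_EX guards against concurrent workers.
    file_put_contents($file, $line, FILE_APPEND | LOCK_EX);
}

syncLog('task pushed: liuzhou');
syncLog('task finished: liuzhou -> OK');
echo file_get_contents('/tmp/sync_db.log');
```

Calling syncLog() at the start of onTask and inside onFinish gives a simple audit trail of which regions were synchronized and when.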

That is basically it. The program still has room for optimization, and better approaches are welcome.
