MySQL Migration Schemes for Different Scenarios



Posted in MySQL and tagged MySQL, data migration, scheme on Sep 15, 2015. Viewed 2684 times.

By Wen Guobing (温国兵)

I. Table of Contents
    • I. Table of Contents
    • II. Why Migrate
    • III. Overview of MySQL Migration Schemes
    • IV. MySQL Migration in Practice
      • 4.1 Scenario 1: One-Master-One-Slave Architecture, Migrating the Slave
      • 4.2 Scenario 2: One-Master-One-Slave Architecture, Migrating Specified Databases
      • 4.3 Scenario 3: One-Master-One-Slave Architecture, Bilateral Migration of Specified Databases
      • 4.4 Scenario 4: One-Master-One-Slave Architecture, Complete Migration of Master and Slave
      • 4.5 Scenario 5: Dual-Master Architecture, Cross-Datacenter Migration
      • 4.6 Scenario 6: Multi-Instance Cross-Datacenter Migration
    • V. Things to Note
    • VI. Tips
    • VII. Summary
II. Why Migrate

MySQL migration is a routine task in a DBA's daily maintenance work. Migration, in its original sense, means moving the actual object while keeping it intact and continuous. It is like two innocent children on a soft sandy beach, moving a pile of sand to another spot and shaping it into the castle of their dreams.

In a production environment, migration is needed in the following situations:

    • Insufficient disk space. For example, in some older projects the chosen hardware model may not suit the database well; over time, disk space is likely to run short;
    • Business bottlenecks. For example, a project uses a single machine to handle all reads and writes, and as business pressure grows the machine can no longer cope. If the IO pressure is still within an acceptable range, a read/write splitting scheme can be used instead;
    • Machine bottlenecks. These mainly show up in disk IO capacity, memory, and CPU. Besides optimizing around the bottleneck, migration is a good solution;
    • Project restructuring. Some projects' databases span datacenters: nodes may need to be added in another datacenter, or machines may need to move from one datacenter to another. Also, when different businesses share the same server, migration may be done to relieve server pressure and ease maintenance.

In a word, migration is a last resort. Carry out the migration while keeping the business running smoothly and continuously.

III. Overview of MySQL Migration Schemes

MySQL migration is, in essence, work that revolves around the data; extending from that, it is nothing more than doing backup and recovery under the premise of keeping the business running smoothly and continuously. The question is then how to restore the backup quickly and safely.

On the one hand, backup. Every master node's slave or standby node should have a backup. It may be a full backup or an incremental one. Online backup methods include mysqldump, xtrabackup, and mydumper. For small databases (below 10 GB) we can use mysqldump. For large databases (hundreds of GB or TB level), mysqldump is not suitable: on the one hand it takes locks, and on the other hand it takes too long. In that case, choose xtrabackup or copy the data directory directly. When copying the data directory between machines, rsync can be used; the time cost depends mainly on the network. With xtrabackup, the time cost lies mainly in the backup itself and in the network transfer. If a ready-made full backup, or a backup of the specified databases, is already on hand, that is the fastest way to obtain a backup. If the standby node can be taken out of service, copying the data directory directly is the quickest method. If the standby node cannot be stopped, xtrabackup (which does not lock InnoDB tables) is the best compromise.
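
As a rough, hedged illustration of the backup options above, here is a minimal sketch; the paths, host names, database name, and credentials are placeholders, not values from the original article.

    # Small database (below 10 GB): logical backup with mysqldump.
    # --single-transaction avoids locking InnoDB tables during the dump.
    mysqldump -uroot -p --single-transaction --master-data=2 mydb | gzip > mydb.sql.gz

    # Large database (hundreds of GB or more): physical backup with xtrabackup.
    innobackupex --user=root --password=xxx /data/backup/

    # Or, if the standby node can be stopped: stop MySQL and copy the data
    # directory directly with rsync (fastest, but requires downtime).
    rsync -avz /data/mysql/ backup-host:/data/mysql/
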

On the other hand, recovery. For small database backups (10 GB or less), we can simply import them directly. For large databases (hundreds of GB or TB level), once the backup file has reached the local machine, recovery is not difficult either. Specific recovery methods are covered in section IV.

IV. MySQL Migration in Practice

Now that we know why to migrate and how to migrate, let's look at how to operate in a production environment. Different application scenarios call for different solutions.

Before reading the practice sections, the following conventions are assumed between the author and the reader:

    1. To protect privacy, the server IP information in this article has been anonymized;
    2. If the servers are in the same datacenter, the last octet (D segment) of the IP is used to refer to the server; for specific IPs, see the architecture diagrams;
    3. If the servers are in different datacenters, the C and D segments of the IP are used to refer to the server; for specific IPs, see the architecture diagrams;
    4. Each scenario gives a method but not the exact commands for every step, because on the one hand that would make the article too long, and on the other hand I believe that once you know the method, the specific steps come naturally; it is only a matter of knowledge level and the ability to look things up;
    5. For precautions during the practice, refer to section V.
4.1 Scenario 1: One-Master-One-Slave Architecture, Migrating the Slave

Following a progression from easy to hard, we start with the simplest architecture. A project originally has a one-master-one-slave architecture: 101 is the master node and 102 is the slave node. Due to business needs, the slave is migrated from 102 to 103, as shown in architecture diagram 1. The data volume on slave 102 is too large to back up with mysqldump. After discussion with the development team, a plan was agreed upon.


Figure 1: Migrating the slave in a one-master-one-slave architecture

This is done as follows:

    • Development cuts the read traffic on 102 over to the master;
    • Confirm the MySQL status on 102 (mainly by checking the PROCESSLIST), observe machine traffic, and after confirming there are no problems, stop the slave service on 102;
    • Create a new MySQL instance on 103; when it is ready, stop the MySQL service and move the entire data directory elsewhere as a backup;
    • Use rsync to copy the entire MySQL data directory from 102 to 103 (see the command sketch after this list);
    • While copying, grant privileges on 101 so that 103 has permission to pull the binlog (REPLICATION SLAVE, REPLICATION CLIENT);
    • After the copy completes, modify server_id in the configuration file on 103; make sure it is not the same as on 102;
    • Start the MySQL instance on 103, paying attention to the data file path in the configuration file and the permissions of the data directory;
    • Enter the MySQL instance on 103 and check the slave state with SHOW SLAVE STATUS; you should see Seconds_Behind_Master decreasing;
    • When Seconds_Behind_Master reaches 0, synchronization is complete. At this point pt-table-checksum can be used to check whether the data on 101 and 103 is consistent, but it is time consuming and has some impact on the master, so data consistency verification can also be done together with development;
    • Communicate with development: besides data consistency verification, account privileges also need to be verified, to prevent access errors after the business is moved back;
    • After the above steps are finished, coordinate with development to cut part of 101's read traffic over to 103 and observe the business status;
    • If the business shows no problems, the migration is successful.
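
A minimal command sketch of the key steps above (copying the data directory with rsync, granting replication privileges on 101, and checking the slave state on 103), assuming placeholder IPs in the 192.168.1.x range and a hypothetical repl account:

    # On 102 (old slave), with its MySQL already stopped: copy the data directory to 103.
    rsync -avz /data/mysql/ 192.168.1.103:/data/mysql/

    # On 101 (master): allow 103 to pull the binlog.
    mysql -uroot -p -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl'@'192.168.1.103' IDENTIFIED BY 'repl_password';"

    # On 103: make server_id different from 102, fix ownership, then start MySQL.
    sed -i 's/^server_id.*/server_id = 103/' /etc/my.cnf
    chown -R mysql:mysql /data/mysql
    /etc/init.d/mysql start

    # On 103: watch replication until Seconds_Behind_Master reaches 0.
    mysql -uroot -p -e "SHOW SLAVE STATUS\G" | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
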
4.2 Scenario 2: One-Master-One-Slave Architecture, Migrating Specified Databases

Now that we know how to migrate a single slave, let's see how to migrate the master and the slave at the same time. Because different businesses access the same server simultaneously, the single instance is under too much pressure and hard to manage. Therefore, the plan is to migrate master 101 and slave 102 to the new machines 103 and 104 at the same time, with 103 as the master node and 104 as the slave node, as shown in architecture diagram 2. This migration only needs to move specified databases; these databases are not too large, and the data does not have to be real-time.


Figure 2: Migrating specified databases in a one-master-one-slave architecture

The specific approach is as follows:

    • Create new instances on 103 and 104 and build the master-slave relationship; at this point both nodes carry no load;
    • Export the data on 102. The correct way is to configure a scheduled task and export during business off-peak hours; mysqldump is chosen here (see the sketch after this list);
    • Collect on 102 the accounts and privileges needed by the specified databases;
    • After the export on 102 completes, transfer the dump to 103 with rsync, compressing it if necessary;
    • Import the data on 103; it will be synchronized to 104 automatically. Monitor the server status and MySQL status;
    • After the import on 103 completes and 104 finishes synchronizing, grant the accounts collected from 102 on 103; when done, notify development to check the data and the account privileges;
    • After the above is done, coordinate with development to migrate the business on 101 and 102 to 103 and 104, observing the business status;
    • If the business shows no problems, the migration is successful.
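
A minimal sketch of the export, transfer, and import steps for the specified databases, assuming mysqldump as in the text; db1, db2, the paths, and the IPs are placeholders:

    # On 102 (old slave): dump only the specified databases during off-peak hours.
    mysqldump -uroot -p --single-transaction --databases db1 db2 | gzip > dbs.sql.gz

    # Transfer the dump to the new master 103.
    rsync -avz dbs.sql.gz 192.168.1.103:/data/dump/

    # On 103 (new master): import; since the 103 -> 104 replication is already
    # built, the imported data is synchronized to 104 automatically.
    gunzip < /data/dump/dbs.sql.gz | mysql -uroot -p
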
4.3 Scenario 3: One-Master-One-Slave Architecture, Bilateral Migration of Specified Databases

Next, let's look at how to do a bilateral migration of specified databases in a one-master-one-slave architecture. Again, because multiple businesses share the same server, the server is under pressure and management is chaotic. Therefore, the plan is to migrate master 101 and slave 102 to the new machines 103, 104, 105, and 106, with 103 as the master of 104, 104 as the slave of 103, 105 as the master of 106, and 106 as the slave of 105, as shown in architecture diagram 3. This migration only needs to move specified databases; these databases are not too large, and the data does not have to be real-time. As you can see, this migration is similar to scenario 2, just done twice.


Figure 3: Bilateral migration of specified databases in a one-master-one-slave architecture

The specific approach is as follows:

    • Create new instances on 103 and 104 and build the master-slave relationship; at this point both nodes carry no load;
    • Export from 102 the data of the databases needed by 103. The correct way is to configure a scheduled task and export during business off-peak hours; mysqldump is chosen here;
    • Collect on 102 the accounts and privileges needed by the databases destined for 103;
    • After the export for 103 completes on 102, transfer the dump to 103 with rsync, compressing it if necessary;
    • Import the data on 103; it will be synchronized to 104 automatically. Monitor the server status and MySQL status;
    • After the import on 103 completes and 104 finishes synchronizing, grant the accounts collected from 102 on 103; when done, notify development to check the data and the account privileges;
    • After the above is done, coordinate with development to migrate the relevant business on 101 and 102 to 103 and 104, observing the business status;
    • Create new instances on 105 and 106 and build the master-slave relationship; at this point both nodes carry no load;
    • Export from 102 the data of the databases needed by 105. The correct way is to configure a scheduled task and export during business off-peak hours; mysqldump is chosen here;
    • Collect on 102 the accounts and privileges needed by the databases destined for 105;
    • After the export for 105 completes on 102, transfer the dump to 105 with rsync, compressing it if necessary;
    • Import the data on 105; it will be synchronized to 106 automatically. Monitor the server status and MySQL status;
    • After the import on 105 completes and 106 finishes synchronizing, grant the accounts collected from 102 on 105; when done, notify development to check the data and the account privileges;
    • After the above is done, coordinate with development to migrate the relevant business on 101 and 102 to 105 and 106, observing the business status;
    • If all the businesses show no problems, the migration is successful.
4.4 Scenario 4: One-Master-One-Slave Architecture, Complete Migration of Master and Slave

Next, how to completely migrate both master and slave in a one-master-one-slave architecture. This is similar to scenario 2, but here all databases are migrated. Because master 101 has an IO bottleneck, the plan is to migrate master 101 and slave 102 to the new machines 103 and 104, with 103 as the master node and 104 as the slave node. After the migration is complete, the previous master and slave are decommissioned, as shown in architecture diagram 4. This is a full-database migration with a large data volume, and the data must stay real-time. This migration is special because the strategy is to replace the slave first and then replace the master, so the approach is slightly more complicated.


Figure 4: Complete migration of master and slave in a one-master-one-slave architecture

The specific approach is as follows:

  • Development cuts the read traffic on 102 over to the master;
  • Confirm the MySQL status on 102 (mainly by checking PROCESSLIST and MASTER STATUS), observe machine traffic, and after confirming there are no problems, stop the slave service on 102;
  • Create a new MySQL instance on 104; when it is ready, stop the MySQL service and move the entire data directory elsewhere as a backup. Note that the operation here is on 104, the future slave;
  • Use rsync to copy the entire MySQL data directory from 102 to 104;
  • While copying, grant privileges on 101 so that 104 has permission to pull the binlog (REPLICATION SLAVE, REPLICATION CLIENT);
  • After the copy completes, modify server_id in the configuration file on 104; make sure it is not the same as on 102;
  • Start the MySQL instance on 104, paying attention to the data file path in the configuration file and the permissions of the data directory;
  • Enter the MySQL instance on 104 and check the slave state with SHOW SLAVE STATUS; you should see Seconds_Behind_Master decreasing;
  • When Seconds_Behind_Master reaches 0, synchronization is complete. At this point pt-table-checksum can be used to check whether the data on 101 and 104 is consistent, but it is time consuming and has some impact on the master, so data consistency verification can also be done together with development;
  • Besides data consistency verification, account privileges also need to be verified, to prevent access errors after the business is moved;
  • Coordinate with development to cut the read traffic previously on slave 102 over to 104;
  • Using the data on 102, make 103 a slave of 101 in the same way;
  • Next comes the key point: we need to turn 104 into a slave of 103 (see the command sketch after this list);
    • On 104: STOP SLAVE;
    • On 103: STOP SLAVE IO_THREAD;
    • On 103: STOP SLAVE SQL_THREAD, then note down the Master_Log_File and Master_Log_Pos;
    • On 104: START SLAVE UNTIL the Master_Log_File and Master_Log_Pos noted above;
    • On 104: STOP SLAVE again;
    • On 104: RESET SLAVE ALL to clear the slave configuration;
    • On 103: SHOW MASTER STATUS, and note down the binlog file and position;
    • On 103: grant 104 the privileges to access the binlog;
    • On 104: CHANGE MASTER TO point at 103;
    • Restart MySQL on 104, because after RESET SLAVE ALL, SHOW SLAVE STATUS still shows Master_Server_Id as 101 instead of 103;
    • After MySQL restarts on 104, the slave starts again automatically; check whether the IO thread and SQL thread are both YES;
    • Now check the status of 103 and 104: 104, which used to be a slave of 101, has become a slave of 103.
  • Before the business is migrated, break the synchronization relationship between 103 and 101;
  • After finishing the above steps, coordinate with development to cut 101's read and write traffic over to 103 and the read traffic over to 104. Note that at this point both 101 and 103 can be written to, so you must ensure that no more writes land on 101 during the cutover; you can lock 101 with FLUSH TABLES WITH READ LOCK and then cut the business over to 103. Remember: this must be executed during business off-peak hours;
  • After the switchover is complete, observe the business status;
  • If the business shows no problems, the migration is successful.
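
The key point above, re-pointing 104 from 101 to 103, sketched as commands. The hosts, the repl account, and the binlog file/position values are placeholders; the real coordinates come from SHOW SLAVE STATUS and SHOW MASTER STATUS on 103, as described in the steps (per the tip in section VI, use Relay_Master_Log_File / Exec_Master_Log_Pos).

    # On 104: stop replication from 101.
    mysql -h 192.168.1.104 -uroot -p -e "STOP SLAVE;"

    # On 103: stop both threads and note where the SQL thread stopped relative to 101.
    mysql -h 192.168.1.103 -uroot -p -e "STOP SLAVE IO_THREAD; STOP SLAVE SQL_THREAD;"
    mysql -h 192.168.1.103 -uroot -p -e "SHOW SLAVE STATUS\G" | egrep 'Relay_Master_Log_File|Exec_Master_Log_Pos'

    # On 104: replay up to exactly that point (placeholder coordinates below),
    # wait until Exec_Master_Log_Pos reaches it, then stop and clear the old config.
    mysql -h 192.168.1.104 -uroot -p -e "START SLAVE UNTIL MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456789;"
    mysql -h 192.168.1.104 -uroot -p -e "STOP SLAVE; RESET SLAVE ALL;"

    # On 103: get its own binlog coordinates and grant 104 access to the binlog.
    mysql -h 192.168.1.103 -uroot -p -e "SHOW MASTER STATUS\G"
    mysql -h 192.168.1.103 -uroot -p -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl'@'192.168.1.104' IDENTIFIED BY 'repl_password';"

    # On 104: point at 103 with those coordinates, then start replication.
    mysql -h 192.168.1.104 -uroot -p -e "CHANGE MASTER TO MASTER_HOST='192.168.1.103', MASTER_USER='repl', MASTER_PASSWORD='repl_password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107; START SLAVE;"

    # On 103: resume replication from 101 (broken again before the final cutover).
    mysql -h 192.168.1.103 -uroot -p -e "START SLAVE;"
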
4.5 Scenario 5: Dual-Master Architecture, Cross-Datacenter Migration

Let's look at how a dual-master architecture is migrated across datacenters. For disaster-tolerance reasons, a project spans two datacenters and uses a dual-master architecture; both sides can be written. Because of disk space problems on side A, machine A needs to be replaced. The plan is to migrate master node 1.101 and slave node 1.102 to the new machines 1.103 and 1.104, with 1.103 as the master node and 1.104 as the slave node. The 2.101 and 2.102 on side B remain unchanged, but after the migration is complete, 1.103 and 2.101 become dual masters of each other, as shown in architecture diagram 5. Because this is a dual-master architecture with both sides writing at the same time, replacing a master node requires one of the nodes to stop serving.


Figure 5: Cross-datacenter migration of a dual-master architecture

The specific approach is as follows:

    • Create new instances on 1.103 and 1.104 and build the master-slave relationship; at this point both nodes carry no load;
    • Confirm the MySQL status on 1.102 (mainly by checking the PROCESSLIST and observing that MASTER STATUS no longer changes), observe machine traffic, and after confirming there are no problems, stop the slave service on 1.102;
    • Create a new MySQL instance on 1.103; when it is ready, stop the MySQL service and move the entire data directory elsewhere as a backup;
    • Use rsync to copy the entire MySQL data directory from 1.102 to 1.103;
    • While copying, grant privileges on 1.101 so that 1.103 has permission to pull the binlog (REPLICATION SLAVE, REPLICATION CLIENT);
    • After the copy completes, modify server_id in the configuration file on 1.103; make sure it is not the same as on 1.102;
    • Start the MySQL instance on 1.103, paying attention to the data file path in the configuration file and the permissions of the data directory;
    • Enter the MySQL instance on 1.103 and check the slave state with SHOW SLAVE STATUS; you should see Seconds_Behind_Master decreasing;
    • When Seconds_Behind_Master reaches 0, synchronization is complete. At this point pt-table-checksum can be used to check whether the data on 1.101 and 1.103 is consistent, but it is time consuming and has some impact on the master; data consistency verification can be done together with development (a pt-table-checksum sketch follows this list);
    • Use the same approach to make 1.104 the slave node;
    • Communicate with development: besides data consistency verification, account privileges also need to be verified, to prevent access errors after the business is moved;
    • At this point, what we need to do is turn 1.103 into a slave of 2.101; for the specific approach, refer to scenario 4;
    • Note that the odd-number configuration of 1.103 (in a dual-master setup, typically the auto_increment_offset / auto_increment_increment settings) needs to be consistent with 1.101;
    • After completing the above steps, coordinate with development to cut the read and write traffic of 1.101 over to 1.103 and the read traffic of 1.102 over to 1.104, and observe the business status;
    • If the business shows no problems, the migration is successful.
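
A minimal pt-table-checksum sketch for the data consistency verification mentioned in several scenarios. The account, databases, and host are placeholders; run it on the master during business off-peak hours, since it adds load there.

    # Run on the master (for example 1.101); the checksum queries replicate,
    # so differences show up when the tool connects to the slaves.
    pt-table-checksum --user=checksum --password=xxx \
      --databases=db1,db2 --no-check-binlog-format h=192.168.1.101

    # Afterwards, list chunks whose master and slave checksums disagree (run on a slave).
    mysql -uroot -p -e "SELECT db, tbl, chunk FROM percona.checksums WHERE this_crc <> master_crc OR ISNULL(this_crc) <> ISNULL(master_crc);"
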
4.6 Scenario 6: Multi-Instance Cross-Datacenter Migration

Next, let's look at how a multi-instance cross-datacenter migration is done. The instance-to-machine relationships are shown in figure 6. The purpose of this migration is to repair data: instances 7938 and 7939 are created on 2.117 to replace the previous instances whose data was abnormal. For business reasons, some databases are written only on side A and some only on side B, so synchronization filtering is involved.


Figure 6: Multi-instance cross-datacenter migration

The specific approach is as follows:

    • On 1.113, back up the data of instance 7936 with innobackupex; note the need to specify the databases and to add the --slave-info option (see the sketch after this list);
    • After the backup completes, copy the compressed file to 2.117;
    • On 2.117, create the data directory and the directories referenced by the configuration file;
    • On 2.117, use innobackupex to apply the logs;
    • On 2.117, use innobackupex to copy the data back;
    • On 2.117, modify the configuration file, paying attention to the following parameters: replicate-ignore-db, innodb_file_per_table = 1, read_only = 1, server_id;
    • On 2.117, change the permissions of the data directory;
    • Grant privileges on 1.112 so that 2.117 has permission to pull the binlog (REPLICATION SLAVE, REPLICATION CLIENT);
    • On 2.117, run CHANGE MASTER TO pointing at 1.112, taking the log file and log position from xtrabackup_slave_info;
    • On 2.117, run START SLAVE and check the slave status;
    • Building instance 7939 on 2.117 is similar, but the configuration file needs to specify replicate-wild-do-table;
    • Do data consistency verification with development and verify the account privileges, to prevent access errors after the business is moved;
    • After completing the above steps, coordinate with development to migrate the corresponding business to the 7938 and 7939 instances on 2.117, observing the business status;
    • If the business shows no problems, the migration is successful.
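
A minimal innobackupex sketch for the 7936 steps above: back up the specified databases with slave info on 1.113, ship the files to 2.117, apply the logs, copy the data back, and point the new instance at 1.112 using the coordinates saved in xtrabackup_slave_info. Paths, ports, accounts, and the binlog coordinates are placeholders.

    # On 1.113: back up the specified databases of instance 7936, recording the
    # slave's replication coordinates in xtrabackup_slave_info.
    innobackupex --defaults-file=/data/7936/my.cnf --user=backup --password=xxx \
      --databases="db1 db2" --slave-info /data/backup/7936/

    # Compress and ship the backup to 2.117.
    tar czf 7936.tar.gz -C /data/backup/7936 . && rsync -avz 7936.tar.gz 192.168.2.117:/data/backup/

    # On 2.117: unpack, apply the redo logs, copy the files into the new data
    # directory of instance 7938, and fix ownership.
    innobackupex --apply-log /data/backup/7936/
    innobackupex --defaults-file=/data/7938/my.cnf --copy-back /data/backup/7936/
    chown -R mysql:mysql /data/7938/data

    # On 2.117: after starting the instance, point it at 1.112 using the file and
    # position recorded in xtrabackup_slave_info (values below are placeholders).
    cat /data/backup/7936/xtrabackup_slave_info
    mysql -S /data/7938/mysql.sock -uroot -p -e "CHANGE MASTER TO MASTER_HOST='192.168.1.112', MASTER_USER='repl', MASTER_PASSWORD='repl_password', MASTER_LOG_FILE='mysql-bin.000321', MASTER_LOG_POS=1234; START SLAVE;"
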
V. Things to Note

Having introduced the migration schemes for different scenarios, here are the points to pay attention to:

    • For database migration, if events are involved, remember to enable the event_scheduler parameter on the master node;
    • Whatever the migration scenario, always pay attention to server state, such as disk space and network jitter; in addition, continuous monitoring of the business is essential;
    • Do not get the log file and log position of CHANGE MASTER TO wrong; if they are specified incorrectly, the result is inconsistent data or a failure to establish the master-slave relationship;
    • Do not run scripts in the $HOME directory; remember to run them in the data directory;
    • Migration work can be automated with scripts, but do not overreach; any script must be tested first;
    • Think twice before every command, and understand the meaning of every parameter of every command;
    • In a multi-instance environment, shut down MySQL with mysqladmin, and take care not to shut down an instance that is still in use;
    • On the slave, remember to set read_only = 1; it will save you a lot of trouble;
    • The server_id of every machine must be different, otherwise synchronization anomalies will occur;
    • Configure replicate-ignore-db and replicate-wild-do-table correctly;
    • On new instances, remember to set innodb_file_per_table to 1; in some of the scenarios above this parameter was 0 on the old instance, so ibdata1 grew too large and both backup and transfer took a lot of time;
    • When compressing data with gzip, note that gzip deletes the source file once compression completes;
    • All operations must be performed on the slave or standby node; if you operate on the master, it may well bring the master down;
    • An xtrabackup backup does not lock InnoDB tables, but it does lock MyISAM tables. So before operating, remember to check whether any table in the current database uses the MyISAM storage engine; if so, either handle it separately or change the table's engine (a query sketch follows this list).
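
For the last note, a small sketch (the database and table names are placeholders) that lists MyISAM tables before an xtrabackup run and, if acceptable, converts one to InnoDB:

    # List MyISAM tables in the target database before backing up with xtrabackup.
    mysql -uroot -p -e "SELECT TABLE_SCHEMA, TABLE_NAME FROM information_schema.TABLES WHERE ENGINE='MyISAM' AND TABLE_SCHEMA='mydb';"

    # If acceptable for the business, convert a table to InnoDB (this rebuilds the table).
    mysql -uroot -p -e "ALTER TABLE mydb.some_table ENGINE=InnoDB;"
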
VI. Tips

In MySQL migration practice, the following techniques can be used:

    • For any migration, use Relay_Master_Log_File (the name of the master binlog currently being replayed) as the log file, and Exec_Master_Log_Pos (the executed position in that binlog) as the log position;
    • When copying data with rsync, combining it with expect and nohup is definitely a wonderful combination;
    • When backing up data with innobackupex, you can compress with gzip at the same time;
    • When backing up data with innobackupex, you can add the --slave-info option, which makes it easier to build a slave;
    • When backing up data with innobackupex, you can add the --throttle option to limit IO and reduce the impact on the business; you can also add the --parallel=n option to speed up the backup, but note that --parallel does not take effect when streaming with tar;
    • When doing backup and recovery, make a to-do list, draw out the flow, and prepare the commands to be executed in advance;
    • A good way to quickly copy a folder locally is rsync with the following options: -avhW --no-compress --progress (see the sketch after this list);
    • You can use dd to copy data quickly between different partitions. Or, more reliably, back up to a portable hard drive and carry it to the server. More extreme still: ship the hard drive directly by courier.
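
Two of the tips above as command sketches, with placeholder paths: an innobackupex backup streamed through gzip with IO throttling, and the local rsync copy options.

    # innobackupex backup streamed through gzip, with slave info and IO throttling.
    innobackupex --user=backup --password=xxx --slave-info --throttle=100 \
      --stream=tar /tmp | gzip > /data/backup/full_$(date +%F).tar.gz

    # Quick local copy of a data directory with rsync (whole files, no compression).
    rsync -avhW --no-compress --progress /data/mysql/ /data2/mysql/
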
VII. Summary

This article started from why to migrate, then discussed the migration schemes, then explained the migrations in practice for different scenarios, and finally gave the notes and practical tips. To sum up, the key points are these:

First, the purpose of migration is to keep the business running smoothly and continuously;
Second, the core of migration is how to continue the master-slave synchronization; we need to find a solution that works across different servers and different businesses;
Third, business switchover needs to consider privilege differences between MySQL servers, the order of the read/write splitting and master-slave switchovers on different machines, and the impact of cross-datacenter calls on the business.

When carrying out a migration, readers can refer to the ideas provided in this article, but making sure that every operation runs correctly is still something that requires thinking twice.

One more thing: "The best way to prove your ability is to keep everything under your control."

Originally from: https://dbarobin.com/2015/09/15/migration-of-mysql-on-different-scenes/
