Simple MySQL large table backup method


Backing up very large MySQL tables is a common problem. This article introduces a simple backup method for large MySQL tables.

The large tables discussed here are tables larger than 4 GB; I have personally seen single tables larger than 60 GB. Taking a full backup of such a table every day is very painful.
Is there a way to implement a full backup plus incremental backups?
Yes, of course.

In a conventional environment you can simply keep a full backup plus the binlog.
In most of these environments, the backup can be taken on a slave (replica) instance.
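As a hedged sketch (the database name mydb is a placeholder, not from the original), a full backup that also records the binlog coordinates needed for incremental recovery might look like:

```shell
# --single-transaction takes a consistent InnoDB snapshot without locking;
# --master-data=2 writes the current binlog file and position into the dump
# as a comment, so binlog replay can start from exactly that point.
mysqldump --single-transaction --master-data=2 mydb > full_backup.sql
```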

Idea:
Stop slave replication and flush the buffers. For InnoDB, if you want to copy the data files directly, also set innodb_max_dirty_pages_pct to zero, run FLUSH TABLES once, and then copy the files with cp. If you dump the data instead (e.g. with mysqldump), none of this preparation is needed.
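A minimal sketch of those preparation steps (the data directory and backup paths are placeholders; run this on the slave):

```shell
mysql -e "STOP SLAVE;"
# force InnoDB to flush dirty pages so the on-disk files are consistent
mysql -e "SET GLOBAL innodb_max_dirty_pages_pct = 0;"
mysql -e "FLUSH TABLES;"
# wait for the dirty-page count to drop to near zero, then copy the files
cp -r /var/lib/mysql/mydb /backup/mydb
```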

This solution works, but the binlog of a highly concurrent application may reach 50 to 60 GB per day, and writing that much binlog has a real impact on the system's I/O and overall performance.

Another solution is to back up based on the changes made to the table.
Main idea: a full backup plus a logical backup of changes.
Logical backup: when a row is inserted, a trigger writes the same row to another table; when a row is updated, the updated row is recorded in another table.
When a row is deleted, only the deleted primary-key id needs to be recorded.

Example:
Table to be backed up:

 
 
  CREATE TABLE `wubx` (
    `id` int(11) NOT NULL auto_increment,
    `user_id` int(11) NOT NULL default '0',
    `friend_id` int(11) NOT NULL default '0',
    `dir_id` int(11) NOT NULL default '0',
    `created` int(11) NOT NULL default '0',
    UNIQUE KEY `id` (`id`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

For this table, we create the tables that will record the changes:

 
 
  mysql> create table wubx_ii like wubx;
  Query OK, 0 rows affected (0.00 sec)

  mysql> create table wubx_uu like wubx;
  Query OK, 0 rows affected (0.00 sec)

  mysql> create table wubx_dd ( id int(11) );
  Query OK, 0 rows affected (0.00 sec)

Create the triggers:

 
 
  -- Record insert operations:
  delimiter //
  create trigger wubx_ii after insert on wubx for each row
  begin
    insert into wubx_ii set id = new.id, user_id = new.user_id, friend_id = new.friend_id, dir_id = new.dir_id, created = new.created;
  end //

  -- Record update operations:
  create trigger wubx_uu after update on wubx for each row
  begin
    replace into wubx_uu set id = new.id, user_id = new.user_id, friend_id = new.friend_id, dir_id = new.dir_id, created = new.created;
  end //

  -- Record delete operations:
  create trigger wubx_dd after delete on wubx for each row
  begin
    insert into wubx_dd values (old.id);
  end //

  delimiter ;

Operation:
First, back up the existing data in the original table wubx. Then run:

 
 
  insert into wubx values('',1,10,1,1198464252);
  insert into wubx values('',1,11,1,1198464252);
  insert into wubx values('',1,2,1,1198464252);
  insert into wubx values('',2,10,1,1198464252);
  insert into wubx values('',2,12,1,1198464252);
  insert into wubx values('',3,12,1,1198464252);
  update wubx set dir_id=5 where user_id=3;
  update wubx set dir_id=4 where user_id=3;
  delete from wubx where user_id=2 and friend_id=12;

Now implement the incremental backup.
Retrieve the inserts:

 
 
  mysql -e "select concat('replace into wubx set id=',id,',user_id=',user_id,',friend_id=',friend_id,',dir_id=',dir_id,',created=',created,';') from wubx_ii;" >> backup_ii.sql

Retrieve the updates:

 
 
  mysql -e "select concat('update wubx set user_id=',user_id,',friend_id=',friend_id,',dir_id=',dir_id,',created=',created,' where id=',id,';') from wubx_uu;" >> backup_uu.sql

Retrieve the deletes:

 
 
  mysql -e "select concat('delete from wubx where id=',id,';') from wubx_dd;" >> backup_dd.sql

In this way, the full backup plus these logical incremental backup files makes it easy to restore to the desired recovery point. The restore itself is not demonstrated here.
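As a hedged sketch (the database name mydb and the full-backup file name are placeholders), a restore would replay the full backup first and then the incremental files in order:

```shell
mysql mydb < full_backup.sql   # restore the full backup first
mysql mydb < backup_ii.sql     # replay the recorded inserts
mysql mydb < backup_uu.sql     # replay the recorded updates
mysql mydb < backup_dd.sql     # replay the recorded deletes
```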

It is best to script this whole operation. After each backup completes, record a mark point and clear the logged changes, so that the data volume in the logical record tables stays small.
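For example (a sketch; run this only after the incremental files have been written out safely), clearing the change tables after each backup keeps them small:

```sql
-- discard changes that have already been exported to backup_*.sql
truncate table wubx_ii;
truncate table wubx_uu;
truncate table wubx_dd;
```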
 


 

