A simple way to back up a large MySQL table


Backing up a large MySQL table is a common problem. The following describes a simple approach to it; I hope you find it helpful.

A large table here means one over 4 GB; the largest single table I have seen was more than 60 GB. For a table like that, doing a full backup every day is painful.
Is there a way to do a full backup plus incremental backups?
Of course there is.

In a normal environment you can use a full backup plus the binlog for the increments.
Most such environments can do the backup from a dedicated slave.

Idea:
First stop slave replication and flush the buffers. For InnoDB, if you want to copy the data files directly, you also need to set innodb_max_dirty_pages_pct to 0, then run FLUSH TABLES;
after that you can cp the files. If you dump the data instead, the same preparation applies.
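A minimal sketch of these steps on the backup slave (the exact commands and the OS-level copy step are assumptions; adjust them for your environment):

    -- on the backup slave: stop replication so the data files stop changing
    STOP SLAVE;
    -- for InnoDB, push dirty pages to disk so the on-disk files are consistent
    SET GLOBAL innodb_max_dirty_pages_pct = 0;
    -- close and flush all tables (add WITH READ LOCK if writes must also be blocked)
    FLUSH TABLES;
    -- now copy the data directory at the OS level (cp/tar/rsync) or run mysqldump,
    -- then resume replication:
    START SLAVE;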

This scheme is fairly complete, but for a highly concurrent application the daily binlog can reach 50-60 GB, and keeping the binlog enabled on such a system has a noticeable impact on I/O and overall performance.

Another option is to make the backup based on the changes to the table's data.
Main idea: a full backup plus a logical backup of the changes.
Logical backup: when data is inserted, a trigger writes the same row into another table; when data is updated, we also record the updated row in another table.
When a row is deleted, we only need to record its primary key id.

Example:
The table to back up:

    CREATE TABLE `wubx` (
      `id` int(11) NOT NULL auto_increment,
      `user_id` int(11) NOT NULL default '0',
      `friend_id` int(11) NOT NULL default '0',
      `dir_id` int(11) NOT NULL default '0',
      `created` int(11) NOT NULL default '0',
      UNIQUE KEY `id` (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

For this table we need to create companion tables that record the data changes:

    mysql> CREATE TABLE wubx_ii LIKE wubx;
    Query OK, 0 rows affected (0.00 sec)
    mysql> CREATE TABLE wubx_uu LIKE wubx;
    Query OK, 0 rows affected (0.00 sec)
    mysql> CREATE TABLE wubx_dd (id int(11));
    Query OK, 0 rows affected (0.00 sec)

Set up the corresponding triggers:

To record insert operations:

    delimiter //
    create trigger wubx_ii after insert on wubx for each row begin insert into wubx_ii set id=new.id, user_id=new.user_id, friend_id=new.friend_id, dir_id=new.dir_id, created=new.created; end//

To record update operations:

    create trigger wubx_uu after update on wubx for each row begin replace into wubx_uu set id=new.id, user_id=new.user_id, friend_id=new.friend_id, dir_id=new.dir_id, created=new.created; end//

To record delete operations:

    create trigger wubx_dd after delete on wubx for each row begin insert into wubx_dd values (old.id); end//
    delimiter ;

Operation:
First back up the data in the original table wubx (the full backup).
Then run the following statements; afterwards the change tables should contain the rows shown after this list:

    INSERT INTO wubx VALUES ('', 1, 10, 1, 1198464252);
    INSERT INTO wubx VALUES ('', 1, 11, 1, 1198464252);
    INSERT INTO wubx VALUES ('', 1, 2, 1, 1198464252);
    INSERT INTO wubx VALUES ('', 2, 10, 1, 1198464252);
    INSERT INTO wubx VALUES ('', 2, 12, 1, 1198464252);
    INSERT INTO wubx VALUES ('', 3, 12, 1, 1198464252);
    UPDATE wubx SET dir_id=5 WHERE user_id=3;
    UPDATE wubx SET dir_id=4 WHERE user_id=3;
    DELETE FROM wubx WHERE user_id=2 AND friend_id=12;
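Assuming wubx started empty and its auto_increment begins at 1 (an illustrative assumption), the change tables should now contain roughly the following:

    -- wubx_ii: the six inserted rows exactly as they were inserted (ids 1..6, all with dir_id=1)
    SELECT * FROM wubx_ii;
    -- wubx_uu: one row per updated id, holding its latest values (id 6: user_id=3, friend_id=12, dir_id=4)
    SELECT * FROM wubx_uu;
    -- wubx_dd: the primary keys of deleted rows (id 5, the user_id=2 / friend_id=12 row)
    SELECT * FROM wubx_dd;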

Now implement the incremental backup.
Extract the insert operations:

    mysql -N -e "select concat('replace into wubx set id=', id, ', user_id=', user_id, ', friend_id=', friend_id, ', dir_id=', dir_id, ', created=', created, ';') from wubx_ii;" >> backup_ii.sql

Extract the update operations:

    mysql -N -e "select concat('update wubx set user_id=', user_id, ', friend_id=', friend_id, ', dir_id=', dir_id, ', created=', created, ' where id=', id, ';') from wubx_uu;" >> backup_uu.sql

Extract the delete operations:

    mysql -N -e "select concat('delete from wubx where id=', id, ';') from wubx_dd;" >> backup_dd.sql

With these logical backups plus the full backup, it is easy to restore to the desired recovery point.
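A minimal sketch of the restore from inside the mysql client, assuming the full backup is a dump file named full_backup.sql (the file names here are only illustrative):

    -- restore the full backup first, then replay the incremental files in order
    source full_backup.sql;
    source backup_ii.sql;   -- re-adds rows inserted since the full backup (replace into is safe to re-run)
    source backup_uu.sql;   -- applies the latest state of rows updated since the full backup
    source backup_dd.sql;   -- removes rows deleted since the full backup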

This is best automated with a script: when the full backup runs, mark the rows that have already been backed up and then clear them from the change tables, so the logical record tables stay small and the data stays correct, for example:
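A rough sketch of that cleanup, assuming the dump and the cleanup run close together in the same session (the variable names and the mark-then-delete approach are assumptions, not from the original procedure):

    -- record a high-water mark for each change table at the moment the dump is taken
    SELECT MAX(id) INTO @ii_mark FROM wubx_ii;
    SELECT MAX(id) INTO @uu_mark FROM wubx_uu;
    SELECT MAX(id) INTO @dd_mark FROM wubx_dd;

    -- ... run the three extraction commands above ...

    -- once the dump has succeeded, clear only the rows it covers
    DELETE FROM wubx_ii WHERE id <= @ii_mark;
    DELETE FROM wubx_uu WHERE id <= @uu_mark;
    DELETE FROM wubx_dd WHERE id <= @dd_mark;
    -- note: if a row in wubx_uu is replaced again between the dump and this DELETE,
    -- that newer change would be lost, so keep the window short or pause writes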
