How does Facebook back up MySQL?
Facebook users create a large amount of data every day. To ensure reliable data storage, we back the data up every day. We changed the original logical backup to a customized physical backup, significantly improving backup speed without increasing the backup size.
From mysqldump to Xtrabackup
We used mysqldump for daily database backup. mysqldump performs a logical backup: it reads each table from the database with SQL statements, just as an application would, and saves the table structure and data to a text file. The biggest problem with mysqldump is that it is too slow (it usually takes 24 hours or even longer for some of our large databases). In addition, reading data with SQL statements can cause random disk reads, which increases the load on the host and affects performance. To shorten the backup time we can run multiple mysqldump instances concurrently, but that causes even more load and hurts host performance.
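As an illustration of the concurrent-dump approach mentioned above (not Facebook's actual tooling), here is a minimal Python sketch that runs one mysqldump process per database in parallel; the database names, paths, and options are hypothetical placeholders.

```python
# Minimal sketch: run several mysqldump processes concurrently to shorten
# total backup time, at the cost of extra load on the host.
# Database names and paths are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

DATABASES = ["users", "messages", "photos"]  # hypothetical database names

def dump(db: str) -> int:
    outfile = f"/backups/{db}.sql.gz"
    # Pipe mysqldump through gzip, as the article describes for logical backups.
    cmd = f"mysqldump --single-transaction {db} | gzip > {outfile}"
    return subprocess.run(cmd, shell=True).returncode

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(dump, DATABASES))

print("exit codes:", results)
```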
Another feasible backup method is a physical (binary) backup, which reads the database's disk files at the operating-system level instead of issuing SQL statements. In this case the data copied during the backup is not transactionally consistent the way SQL reads are; it only becomes consistent when the backup files are restored and the database runs its recovery process, much like when a database crashes and restarts.
We modified and enhanced Xtrabackup to meet our additional needs:
1. Quick table-level restoration
2. Enhanced full and incremental backup
3. Support for hybrid incremental backup
Xtrabackup supports incremental backup, that is, backing up only the data changed since the last full backup. This reduces the backup space (for example, one incremental backup every day and one full backup every week). Xtrabackup also supports multi-level incremental backup, but we do not use it, to avoid complexity.
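A minimal sketch of the rotation described above (one full backup per week, incrementals against that full backup on the other days); the paths and the xtrabackup invocation are illustrative assumptions, not the actual production scheduler.

```python
# Sketch of a weekly-full / daily-incremental rotation.
# Paths and the xtrabackup invocation are illustrative placeholders.
import datetime
import subprocess

BACKUP_ROOT = "/backups/mysql"

def backup_command(today: datetime.date) -> list[str]:
    full_dir = f"{BACKUP_ROOT}/full"
    if today.weekday() == 6:  # Sunday: take a full backup
        return ["xtrabackup", "--backup", f"--target-dir={full_dir}"]
    # Other days: incremental containing changes since the last full backup.
    inc_dir = f"{BACKUP_ROOT}/inc-{today.isoformat()}"
    return ["xtrabackup", "--backup",
            f"--target-dir={inc_dir}",
            f"--incremental-basedir={full_dir}"]

if __name__ == "__main__":
    subprocess.run(backup_command(datetime.date.today()), check=True)
```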
1. Table-level restoration
We wrote a PHP script that reads and restores a specified table from a binary backup file. Currently, the script cannot read the information needed to recreate the table structure from the backup file, so a corresponding empty table must be prepared in advance. We modified Xtrabackup to support this tool; the modification lets Xtrabackup export and import a single table. Restoring a single table is much faster than a full restore, because only the data for that table needs to be read from the backup.
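As a hedged sketch of the general single-table import mechanism in InnoDB (discard the empty tablespace, copy in the backed-up file, then import it), the following Python snippet is purely illustrative: the table name, schema, and paths are hypothetical, and this is not the PHP tool described in the article.

```python
# Sketch of restoring a single InnoDB table from a physical backup using the
# generic DISCARD/IMPORT TABLESPACE mechanism. Names and paths are placeholders.
import shutil
import subprocess

DB, TABLE = "appdb", "friend_requests"            # hypothetical names
BACKUP_IBD = f"/backups/export/{DB}/{TABLE}.ibd"  # exported tablespace file
DATADIR = f"/var/lib/mysql/{DB}"

def sql(statement: str) -> None:
    subprocess.run(["mysql", DB, "-e", statement], check=True)

# 1. An empty table with the same structure must already exist
#    (placeholder schema shown here).
sql(f"CREATE TABLE IF NOT EXISTS {TABLE} (id BIGINT PRIMARY KEY) ENGINE=InnoDB")
# 2. Detach the empty tablespace, copy in the backed-up .ibd, then attach it.
sql(f"ALTER TABLE {TABLE} DISCARD TABLESPACE")
shutil.copy(BACKUP_IBD, f"{DATADIR}/{TABLE}.ibd")
sql(f"ALTER TABLE {TABLE} IMPORT TABLESPACE")
```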
2. Adjust full and incremental backup
Facebook was an early user of Xtrabackup's incremental backup feature. At first, Xtrabackup's incremental backup did not work for some databases with very large tables; we later solved these problems together with Percona.
Xtrabackup only supported local incremental backup; that is, the incremental backup file had to be written on the same host as MySQL. We modified it to support remote incremental backup, sending the backup data to the remote host through a pipeline as the backup runs. Taking the incremental backup locally and then copying it to the remote host over the network is undesirable, because it greatly increases local writes.
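A minimal sketch of the streaming idea (producing the backup on the database host and piping it straight to a remote host, so nothing extra is written to local disk); the xbstream/ssh invocation and hostnames are assumptions used for illustration, not Facebook's modified Xtrabackup.

```python
# Sketch: stream a backup directly to a remote host instead of writing it to
# local disk first. Hostnames, paths, and the stream format are placeholders.
import subprocess

cmd = ("xtrabackup --backup --stream=xbstream --target-dir=/tmp "
       "| ssh backup-host 'xbstream -x -C /backups/incoming'")
subprocess.run(cmd, shell=True, check=True)
```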
Xtrabackup reads database files in 1 MB chunks. We found that with an 8 MB chunk size, incremental backup is twice as fast and full backup is about 40% faster.
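A toy illustration of why a larger chunk size helps: the read loop below issues fewer system calls for the same amount of data. The chunk sizes match the ones in the article; the file-copy framing is an assumption for illustration only.

```python
# Toy chunked read loop: a larger chunk size means fewer read() calls for the
# same amount of data, which is where the speedup described above comes from.
def copy_file(src_path: str, dst_path: str, chunk_size: int) -> int:
    calls = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(chunk_size):
            dst.write(chunk)
            calls += 1
    return calls

# e.g. a 1 GB file needs ~1024 reads at 1 MB chunks but only ~128 at 8 MB.
```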
3. Make incremental backup a real incremental backup
Xtrabackup's incremental backup reads every page of the database to determine which pages have changed. We built a page tracker that follows the transaction log and maintains a per-table bitmap of modified pages. With it we know exactly which pages have and have not changed, so we only need to read the changed pages. We call this a real incremental backup.
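A minimal sketch of the per-table modified-page bitmap idea, assuming 16 KB InnoDB pages; the change-record feed is entirely hypothetical (the real tracker parses the InnoDB transaction log).

```python
# Sketch of a per-table modified-page bitmap, assuming 16 KB InnoDB pages.
# The (table, page_no) change feed here is hypothetical; the real tracker
# derives modified pages from the transaction log.
from collections import defaultdict

PAGE_SIZE = 16 * 1024  # InnoDB page size

class PageTracker:
    def __init__(self):
        # One bitmap (set of dirty page numbers) per table.
        self.dirty = defaultdict(set)

    def record_change(self, table: str, page_no: int) -> None:
        self.dirty[table].add(page_no)

    def pages_to_back_up(self, table: str):
        # Only pages marked dirty since the last full backup need to be read.
        return sorted(self.dirty[table])

tracker = PageTracker()
tracker.record_change("users", 3)
tracker.record_change("users", 4)
print(tracker.pages_to_back_up("users"))  # -> [3, 4]
```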
However, we found that this real incremental backup was slower than the normal incremental backup. The normal incremental backup reads the file in 8 MB chunks, while the real incremental backup reads variable-size spans, from 16 KB (the size of one InnoDB page) up to 8 MB, depending on how many consecutive pages have been modified. In many of our scenarios (roughly 10%-30% of pages modified since the last full backup), the real incremental backup therefore issues more I/O calls than the normal incremental backup.
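A toy back-of-the-envelope comparison of I/O call counts, assuming a 10 GB table, 16 KB pages, 20% of pages dirtied, and short runs of consecutive dirty pages; all numbers are illustrative, not measurements from the article.

```python
# Toy model: compare I/O call counts for a plain 8 MB chunked read of the whole
# file vs. one read per run of consecutive dirty pages. All numbers are
# illustrative assumptions, not measurements.
PAGE = 16 * 1024
CHUNK = 8 * 1024 * 1024
table_bytes = 10 * 1024**3          # assume a 10 GB table
pages = table_bytes // PAGE
dirty_fraction = 0.20               # ~20% of pages modified
avg_run_len = 4                     # assume short runs of consecutive dirty pages

normal_calls = table_bytes // CHUNK                           # reads everything
real_incr_calls = int(pages * dirty_fraction / avg_run_len)   # one read per run

print(normal_calls, real_incr_calls)
# Under these assumptions the "real" incremental backup issues far more,
# smaller reads (~32,768 calls vs ~1,280), even though it reads much less data.
```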
Building on these improvements, we created a hybrid incremental backup that reduces the number of I/O operations while still avoiding reads of unmodified pages. In our scenarios, this hybrid incremental backup reduces I/O by 20%-30%, with individual I/O sizes ranging from 16 KB to 8 MB.
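The article does not spell out exactly how the hybrid backup chooses its reads. As a loudly-hypothetical sketch of one way to combine the two approaches, the snippet below coalesces nearby dirty pages into variable-size reads (16 KB up to 8 MB), tolerating small gaps of clean pages so the call count stays low; it is an illustration of the general idea, not Facebook's actual algorithm.

```python
# Hypothetical sketch of coalescing dirty pages into larger reads.
# Thresholds and logic are assumptions, not the algorithm from the article.
PAGE = 16 * 1024
MAX_READ = 8 * 1024 * 1024            # cap a single read at 8 MB
MAX_GAP = 16                          # tolerate up to 16 clean pages inside a read

def coalesce(dirty_pages):
    """Group sorted dirty page numbers into (start_page, page_count) reads."""
    reads = []
    start = prev = None
    for p in sorted(dirty_pages):
        if start is None:
            start = prev = p
            continue
        too_far = (p - prev - 1) > MAX_GAP
        too_big = (p - start + 1) * PAGE > MAX_READ
        if too_far or too_big:
            reads.append((start, prev - start + 1))
            start = p
        prev = p
    if start is not None:
        reads.append((start, prev - start + 1))
    return reads

print(coalesce([1, 2, 3, 40, 41, 200]))  # -> [(1, 3), (40, 2), (200, 1)]
```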
The following table describes the results of processing about GB of data using the improved methods. Because mysqldump is so slow, we ran it on only a few databases. We used gzip to compress the mysqldump output; gzip is slow but compresses well.
We use qpress to compress the binary backup; it is much faster than gzip but compresses less. Because we take incremental backups often and full backups rarely, the total space required for binary backups is similar to that required for mysqldump.
http://www.facebook.com/note.php?note_id=10150098033318920