Environment:
Database version: MySQL 5.6
OS: CentOS 6.8
Replication topology: instance A (master) -> instance C (slave)
Requirement:
An InnoDB table with more than 25 million rows on instance C needs to be partitioned, without making any change to the table on instance A.
Questions to consider:
After the table is partitioned on instance C, will the partitioning affect replication of this table between instance A and instance C? Could it cause data inconsistency or failed inserts, or make inserts into the partitioned table slower?
Actual operation: everything below is executed on instance C.
1. Stop the slave IO thread (STOP SLAVE IO_THREAD;) and wait until instance C has finished replaying its relay log.
When Master_Log_File == Relay_Master_Log_File and Read_Master_Log_Pos == Exec_Master_Log_Pos in the SHOW SLAVE STATUS output, the local relay log has been fully replayed.
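In the MySQL client on instance C, this step looks like the following; the field names come straight from SHOW SLAVE STATUS:

```sql
-- On instance C: stop pulling new events from the master.
STOP SLAVE IO_THREAD;

-- Re-run until the relay log is fully replayed, i.e. until
-- Master_Log_File = Relay_Master_Log_File and
-- Read_Master_Log_Pos = Exec_Master_Log_Pos.
SHOW SLAVE STATUS\G
```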
2. Take a logical backup of the table data:
mysqldump -S /var/lib/mysql/mysql.sock -uroot -p --single-transaction --master-data=2 -t --skip-add-drop-table sbtest sbtest1 > sbtest1.sql
Parameter explanation: -t: do not write CREATE TABLE statements; --skip-add-drop-table: do not write DROP TABLE statements.
That is, neither DROP TABLE nor CREATE TABLE statements are written into the backup file; --master-data=2 additionally records the binlog coordinates as a comment in the dump.
3. Rename the old table:
RENAME TABLE sbtest1 TO sbtest2;
This lets the backup be imported later without editing the dumped SQL, and if partitioning fails for any reason, a copy of the original table still exists.
4. Create an empty table with partitions (the column lengths were garbled in the original post; the values below follow the usual sysbench schema):
CREATE TABLE `sbtest1` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `k` bigint(20) NOT NULL,
  `c` varchar(120) COLLATE utf8mb4_bin NOT NULL,
  `pad` varchar(60) COLLATE utf8mb4_bin NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
PARTITION BY KEY (id) PARTITIONS 64;
The new table's structure is identical to the old table's; the only difference is that it is partitioned in advance.
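To confirm the new table really was created with 64 partitions, one way (a sketch, using the schema and table names from this post) is to query information_schema:

```sql
-- Should return 64 rows, p0 .. p63, all empty at this point.
SELECT PARTITION_NAME, TABLE_ROWS
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = 'sbtest'
  AND TABLE_NAME   = 'sbtest1';
```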
5. Import the logical backup:
mysql -uroot -p sbtest < sbtest1.sql
6. After the import completes, restart the slave IO thread and observe replication for a while:
START SLAVE IO_THREAD;
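"Observe for a while" here means periodically checking replication health on instance C; the fields below are the usual indicators:

```sql
-- Slave_IO_Running and Slave_SQL_Running should both show Yes,
-- and Seconds_Behind_Master should shrink back to 0 as instance C
-- catches up with instance A. Also watch Last_SQL_Error for
-- insert failures against the partitioned table.
SHOW SLAVE STATUS\G
```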
7. Check data consistency with pt-table-checksum (optional):
http://seanlook.com/2015/12/29/mysql_replica_pt-table-checksum/
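A typical invocation looks like the following; it is run on the master (instance A), and the host, user, and password are placeholders, not values from the original post:

```shell
# Checksums the table chunk by chunk on the master, replays the
# checksum queries through replication, and compares the results
# on the slave to detect inconsistent rows.
pt-table-checksum \
  --replicate=percona.checksums \
  --databases=sbtest --tables=sbtest1 \
  h=instance_a_host,u=root,p=xxxx
```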
8. Recommendation:
Because backing up 25 million rows takes a long time, run the backup inside screen, so that the mysqldump process keeps running even if the remote SSH session disconnects.
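A minimal screen session for the backup step might look like this (the session name is arbitrary):

```shell
# Start a named screen session and run the dump inside it.
screen -S dump
mysqldump -S /var/lib/mysql/mysql.sock -uroot -p --single-transaction \
  --master-data=2 -t --skip-add-drop-table sbtest sbtest1 > sbtest1.sql
# Detach with Ctrl-a d; the dump keeps running if SSH drops.
# Reattach later with:
screen -r dump
```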
[MySQL] Partitioning a table that already contains a large amount of data