Mydumper/myloader Usage Details


I. Mydumper backup

1. Back up all databases with 16 backup threads to the directory /docker/mydumper/all_backup:

mydumper -u root -p root123 --threads=16 -o /docker/mydumper/all_backup

--threads=16 starts 16 backup threads, which you can see in SHOW PROCESSLIST in MySQL. After the backup completes, each table has two files: a table structure file, database.table-schema.sql, and a table data file, database.table.sql. The directory also contains a metadata file recording the binary log position:

$ cat metadata
Started dump at: 10:57:24
SHOW MASTER STATUS:
    Log: mysql-bin.000025
    Pos: 681395159
    GTID: 6fbc72af-348e-11e7-865c-fa163e5807c3:1-22, ad9b2529-348d-11e7-bef0-fa163e9e3c14:1-2663607
SHOW SLAVE STATUS:
    Host: 10.157.24.87
    Log: mysql-bin.000047
    Pos: 144635857
    GTID: 6fbc72af-348e-11e7-865c-fa163e5807c3:1-22, ad9b2529-348d-11e7-bef0-fa163e9e3c14:1-2663607
Finished dump at: 10:58:50

Note: if no output directory is specified, e.g. mydumper -u root -p root123, a directory such as export-20170615-105920 is generated automatically (export-YYYYMMDD-HHMMSS).

2. Back up all databases to the specified directory and compress the backup files (the compression adds considerable time):

mydumper -u root -p root123 -c -o /docker/mydumper/all_backup_compress

Both the table structure files and the table data files are compressed; the metadata file is unchanged. As shown below, the backup shrinks from 17G to 7.7G:

root@dpsvstadbs05 11:20:02: mydumper $ du -sh *
17G     all_backup
7.7G    all_backup_compress

3. Back up only the table structure of all tables in the sampson database (no data) to /docker/mydumper/sampson-d:

mydumper -u root -p root123 -B sampson -d -o /docker/mydumper/sampson-d

The /docker/mydumper/sampson-d directory contains only metadata and sampson.<table>-schema.sql files.
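The binlog coordinates recorded in the metadata file are what you need for CHANGE MASTER TO when seeding a replica from the backup. The sketch below extracts them with awk; it is illustrative only and writes a trimmed sample metadata file to /tmp, whereas in practice you would read the metadata file in your backup directory:

```shell
# Write a trimmed sample metadata file (illustrative; in practice use
# the metadata file that mydumper wrote into your backup directory).
cat > /tmp/metadata <<'EOF'
Started dump at: 10:57:24
SHOW MASTER STATUS:
  Log: mysql-bin.000025
  Pos: 681395159
SHOW SLAVE STATUS:
  Log: mysql-bin.000047
  Pos: 144635857
Finished dump at: 10:58:50
EOF

# The first Log:/Pos: pair belongs to SHOW MASTER STATUS, so exiting
# after the first match picks the master coordinates.
BINLOG_FILE=$(awk '/Log:/ {print $2; exit}' /tmp/metadata)
BINLOG_POS=$(awk '/Pos:/ {print $2; exit}' /tmp/metadata)
echo "$BINLOG_FILE $BINLOG_POS"
```

These two values can then be plugged into a CHANGE MASTER TO statement on the new replica.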
4. Back up the data of all tables in the sampson database (no table structure) to /docker/mydumper/sampson-m:

mydumper -u root -p root123 -B sampson -m -o /docker/mydumper/sampson-m

The /docker/mydumper/sampson-m directory contains only metadata and sampson.<table>.sql files.

5. Back up the dsns and t1 tables of the sampson database to /docker/mydumper/sampson-T:

mydumper -u root -p root123 -B sampson -T dsns,t1 -o /docker/mydumper/sampson-T

The /docker/mydumper/sampson-T directory contains metadata, the sampson database creation file, and the backup files of the dsns and t1 tables.

Note: to run the backup in the background, add --daemon to the export command.

II. Myloader restore

1. Restore the sampson database from the full backup directory:

myloader -u root -p root123 --threads=16 --database=sampson -d /docker/mydumper/all_backup

2. Restore the table structure of the sampson database from the sampson-d directory:

myloader -u root -p root123 --database=sampson -d /docker/mydumper/sampson-d

3. Restore the sampson database from the full backup directory into the samp database:

myloader -u root -p root123 --database=samp --source-db=sampson -d /docker/mydumper/all_backup

Note: it is recommended to add -v 3 during restore to display detailed logs:

root@dpsvstadbs05 12:13:35: mydumper $ myloader -u root -p root123 -d /docker/mydumper/all_backup --database=huihui --source-db=lizhi -v 3
** Message: 4 threads created
** Message: Creating database `huihui`
** Message: Creating table `huihui`.`my1`
** Message: Creating table `huihui`.`my2`
** Message: Creating table `huihui`.`my3`
** Message: Creating table `huihui`.`my4`
** Message: Creating table `huihui`.`my5`
** Message: Creating table `huihui`.`my6`
** Message: Creating table `huihui`.`my7`
** Message: Creating table `huihui`.`t1`
** Message: Thread 1 restoring `lizhi`.`my1` part 0
** Message: Thread 3 restoring `lizhi`.`my2` part 0
** Message: Thread 2 restoring `lizhi`.`my3` part 0
** Message: Thread 4 restoring `lizhi`.`my4` part 0
** Message: Thread 3 restoring `lizhi`.`my5` part 0
** Message: Thread 1 restoring `lizhi`.`my6` part 0
** Message: Thread 4 restoring `lizhi`.`my7` part 0
** Message: Thread 2 restoring `lizhi`.`t1` part 0
** Message: Thread 2 shutting down
** Message: Thread 1 shutting down
** Message: Thread 4 shutting down
** Message: Thread 3 shutting down

Attachment: Mydumper Parameters

$ mydumper --help
Usage: mydumper [OPTION...] multi-threaded MySQL dumping

Help Options:
  -?, --help                 Show help options

Application Options:
  -B, --database             Database to back up; if not specified, all databases are backed up
  -T, --tables-list          Comma-separated list of tables to back up
  -o, --outputdir            Directory to write the backup files to
  -s, --statement-size       Size in bytes of the INSERT statements generated; default 1000000
  -r, --rows                 Try to split tables into chunks of this many rows; turns off --chunk-filesize
  -F, --chunk-filesize       Split tables into chunks of this output file size, in MB
  -c, --compress             Compress the output files
  -e, --build-empty-files    Generate an empty data file even if the table has no rows (by default only the table structure file is written)
  -x, --regex                Regular expression matched against 'db.table'
  -i, --ignore-engines       Comma-separated list of storage engines to ignore
  -m, --no-schemas           Do not dump table schemas; back up data only
  -d, --no-data              Dump table schemas only; do not back up data
  -G, --triggers             Dump triggers
  -E, --events               Dump events
  -R, --routines             Dump stored procedures and functions
  -k, --no-locks             Do not use temporary shared read locks; using this option can produce inconsistent data
      --less-locking         Minimize locking time on InnoDB tables
  -l, --long-query-guard     Timeout in seconds for long-running queries that would block the backup; default 60 (mydumper exits on timeout)
  -K, --kill-long-queries    Kill long-running queries instead of aborting
  -D, --daemon               Enable daemon mode: back up continuously at a fixed interval
  -I, --snapshot-interval    Interval between dump snapshots in daemon mode, in seconds; default 60
  -L, --logfile              Log file to write mydumper's log to; standard output by default
      --tz-utc               SET TIME_ZONE='+00:00' at the top of the dump so TIMESTAMP data can be dumped when a server has data in different time zones, or is being moved between servers with different time zones; on by default
      --skip-tz-utc          Disable --tz-utc
      --use-savepoints       Use savepoints to reduce the locking caused by metadata collection; requires the SUPER privilege
      --success-on-1146      Treat "table doesn't exist" (error 1146) as a warning instead of a critical error, without incrementing the error count
      --lock-all-tables      Use LOCK TABLES on all tables instead of FLUSH TABLES WITH READ LOCK
  -U, --updated-since        Use Update_time to dump only tables updated in the last U days
      --trx-consistency-only Transactional consistency only
  -h, --host                 Host to connect to
  -u, --user                 Username with backup privileges
  -p, --password             User password
  -P, --port                 TCP/IP port to connect to
  -S, --socket               UNIX domain socket file to use for the connection
  -t, --threads              Number of backup threads to use; default 4
  -C, --compress-protocol    Use compression on the MySQL connection
  -V, --version              Show the program version and exit
  -v, --verbose              Verbosity of output: 0 = silent, 1 = errors, 2 = warnings, 3 = info; default 2
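Since -x/--regex is matched against 'db.table', it can be worth sanity-checking a pattern against the names you expect it to select before launching a long backup. A minimal sketch using grep -E, with an illustrative pattern and table names:

```shell
# Illustrative pattern: only the dsns and t1 tables of the sampson database.
REGEX='^sampson\.(dsns|t1)$'

# Collect the names the pattern matches, out of a mixed candidate list.
MATCHED=""
for name in sampson.dsns sampson.t1 sampson.other mysql.user; do
  if printf '%s\n' "$name" | grep -Eq "$REGEX"; then
    MATCHED="$MATCHED$name "
  fi
done
echo "matched: $MATCHED"
```

Only sampson.dsns and sampson.t1 should be reported as matched; once the pattern behaves as expected, pass it unchanged to mydumper -x.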

 

Myloader Parameters

 

$ myloader --help
Usage: myloader [OPTION...] multi-threaded MySQL loader

Help Options:
  -?, --help                 Show help options

Application Options:
  -d, --directory            Directory of the dump to import
  -q, --queries-per-transaction  Number of queries per transaction; default 1000
  -o, --overwrite-tables     Drop tables if they already exist before restoring them (requires the backup to include the table structure)
  -B, --database             An alternative database to restore into
  -s, --source-db            Database to restore; restores that database's data into the database given by --database
  -e, --enable-binlog        Enable binary logging of the restored data
  -h, --host                 Host to connect to
  -u, --user                 Username with restore privileges
  -p, --password             User password
  -P, --port                 TCP/IP port to connect to
  -S, --socket               UNIX domain socket file to use for the connection
  -t, --threads              Number of restore threads to use; default 4
  -C, --compress-protocol    Use compression on the MySQL connection
  -V, --version              Show the program version and exit
  -v, --verbose              Verbosity of output: 0 = silent, 1 = errors, 2 = warnings, 3 = info; default 2
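Before pointing myloader -d at a directory, it can help to verify that the directory actually looks like a mydumper dump: a metadata file plus *-schema.sql and *.sql files. A hedged sketch of such a check; the directory and file names below are simulated purely for illustration:

```shell
# Simulate a tiny dump directory (illustrative names only; in practice
# run the checks against your real backup directory).
DIR=/tmp/mydumper_check
rm -rf "$DIR"
mkdir -p "$DIR"
: > "$DIR/metadata"
: > "$DIR/sampson.t1-schema.sql"
: > "$DIR/sampson.t1.sql"

# A usable dump directory must contain a metadata file.
[ -f "$DIR/metadata" ] && echo "metadata present"

# Count table structure files and data files separately.
SCHEMAS=$(ls "$DIR" | grep -c -- '-schema\.sql$')
DATA=$(ls "$DIR" | grep -c '\.sql$')
DATA=$((DATA - SCHEMAS))
echo "schema files: $SCHEMAS, data files: $DATA"
```

A data-only dump (mydumper -m) will show zero schema files, in which case myloader -o/--overwrite-tables has no table structure to recreate.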

 
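Putting the options together, the backup-and-restore pair used in this article can be wrapped in a small script. The sketch below only assembles and prints the commands; the credentials, paths, and the DRY_RUN guard are illustrative, and the guard must be removed to actually run against a live server:

```shell
#!/bin/sh
# Illustrative connection settings and paths (placeholders).
USER=root
PASS=root123
THREADS=16
BACKUP_DIR=/docker/mydumper/all_backup

# Dump everything, then restore the sampson database from the same directory.
DUMP_CMD="mydumper -u $USER -p $PASS --threads=$THREADS -o $BACKUP_DIR"
LOAD_CMD="myloader -u $USER -p $PASS --threads=$THREADS --database=sampson -d $BACKUP_DIR -v 3"

# DRY_RUN (default on) prints the commands instead of executing them,
# so the sketch can be inspected without a live MySQL server.
if [ "${DRY_RUN:-1}" = "1" ]; then
  echo "$DUMP_CMD"
  echo "$LOAD_CMD"
else
  $DUMP_CMD && $LOAD_CMD
fi
```

Running it with DRY_RUN=0 performs the dump and, only if the dump succeeds, the restore.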
