MySQL: importing a large SQL data file

Source: Internet
Author: User
Tags: mysql, command line, mysql import

Exporting the SQL file

Using a few mysqldump options sensibly during export can greatly speed up the subsequent import.

-e (--extended-insert) uses the multiple-row INSERT syntax that includes several VALUES lists;
--max_allowed_packet=xxx sets the maximum size of the buffer for client/server communication;
--net_buffer_length=xxx sets the TCP/IP and socket communication buffer size; rows are created in chunks up to net_buffer_length bytes long.

Note: max_allowed_packet and net_buffer_length must not be larger than the corresponding settings on the target database server, or the import may fail with an error.
Example:

mysqldump -uroot -p discuz -e --max_allowed_packet=1048576 --net_buffer_length=16384 > discuz.sql
Finding the configuration file path

If you don't know which configuration file MySQL is currently using, you can try this:

which mysqld
/usr/sbin/mysqld
/usr/sbin/mysqld --verbose --help | grep -A 1 'Default options'
Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf

As you can see, the server reads /etc/my.cnf first; if that file does not exist, it tries each of the subsequent paths in turn.

SHOW STATUS;     # server status variables: statistics and status indicators of the running server
SHOW VARIABLES;  # server system variables: the values actually in use
SHOW STATUS LIKE '%variable_name%';
max_allowed_packet

This is the maximum size of the buffer used for client/server communication; increasing it appropriately can speed up importing data into MySQL.

SHOW VARIABLES LIKE '%max_allowed_packet%';
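
The output looks like this (16777216 bytes = 16M here; your value will differ):

+--------------------+----------+
| Variable_name      | Value    |
+--------------------+----------+
| max_allowed_packet | 16777216 |
+--------------------+----------+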
  1. Edit my.cnf (my.ini under Windows) and change the setting in the [mysqld] section or in the MySQL server configuration section: add the line below after [mysqld], then save and restart MySQL (see the my.cnf sketch after this list).
    max_allowed_packet = 20M
    If you can't find my.cnf, you can run
    mysql --help | grep my.cnf
    to see the paths where it is searched for.

  2. Set it on a running MySQL server.
    In the MySQL command line, run
    SET GLOBAL max_allowed_packet = 2*1024*1024*10;
    Then close this MySQL connection and open a new one, since a session only picks up the new global value when it connects. Run
    SHOW VARIABLES LIKE '%max_allowed_packet%';
    to check whether max_allowed_packet was updated successfully.
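
For the first method, a minimal my.cnf fragment would look like this (the 20M value just matches the example above; adjust as needed):

[mysqld]
max_allowed_packet = 20M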

innodb_flush_log_at_trx_commit

This variable can be set to one of three values: 0, 1, or 2.

    • With innodb_flush_log_at_trx_commit = 0, the log buffer is flushed to disk once per second, and nothing is done at transaction commit. (The flushing is performed by MySQL's master thread.) Once per second the master thread writes the redo log buffer to the redo log files on disk, whether or not the transactions have been committed. The default redo log files are ib_logfile0 and ib_logfile1.

    • With the default value of 1, the log buffer is written to the log file and flushed to disk at each transaction commit.

    • With a value of 2, the log is written at each transaction commit, but the flush to disk is not performed; the log file is flushed about once per second. Note that there is no guarantee the flush happens exactly every second; that depends on process scheduling. At each commit the data is written to the transaction log, but this write only goes to the filesystem cache, so it does not guarantee that the data has reached the physical disk.

    • The default value of 1 is required for full ACID durability. You can set this variable to a value other than 1 in exchange for higher performance, but in a crash you can lose up to one second of transactions: with 0, the last second of transactions is lost if the mysqld process crashes; with 2, the last second is lost only if the operating system crashes or the machine loses power. InnoDB's crash recovery works regardless of this value.

Therefore, when importing data you can temporarily set it to 0 for better performance and a faster import.
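
For example, since the variable is dynamic, a minimal sketch is to lower it just for the import and restore the default afterwards (this requires the SUPER privilege; discuz.sql is the dump from earlier):

SET GLOBAL innodb_flush_log_at_trx_commit = 0;
-- run the import here, e.g. SOURCE discuz.sql; in the mysql client
SET GLOBAL innodb_flush_log_at_trx_commit = 1;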

Methods for speeding up data insertion

See Bulk Data Loading for InnoDB Tables [^footnote] and Speed of INSERT Statements [^footnote2] in the MySQL manual.

SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
INSERT INTO tablename VALUES (...),(...),(...);
SET unique_checks=1;
SET foreign_key_checks=1;
COMMIT;
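
One way to apply these settings to an existing dump file without editing it is to wrap the import in a subshell, so the SET statements run in the same connection as the dump; a sketch, reusing the discuz.sql dump from earlier:

( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"; \
  cat discuz.sql; \
  echo "COMMIT; SET unique_checks=1; SET foreign_key_checks=1;" ) | mysql -uroot -p discuz

This matters because unique_checks and foreign_key_checks are session-scoped: they only help if they are set on the very connection that executes the INSERT statements.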

Additionally, set the innodb_autoinc_lock_mode variable to 2.
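
Note that innodb_autoinc_lock_mode is not a dynamic variable, so it has to be set at server startup, for example in my.cnf:

[mysqld]
innodb_autoinc_lock_mode = 2

Mode 2 (interleaved) avoids the table-level AUTO-INC lock for multi-row inserts, which improves concurrency at the cost of possible gaps in auto-increment values.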

Even with all of the above strategies, INSERT is still not as fast as LOAD [LOCAL] DATA INFILE.
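
A minimal sketch, assuming the data has been exported as a tab-separated text file (the path /tmp/tablename.txt is hypothetical):

LOAD DATA LOCAL INFILE '/tmp/tablename.txt'
INTO TABLE tablename
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';

mysqldump --tab=/some/dir produces exactly this kind of per-table .txt data file (alongside a .sql schema file); note that LOCAL requires local_infile to be enabled on both client and server.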

[^footnote]: Bulk Data Loading for InnoDB Tables

[^footnote2]: Speed of INSERT Statements
