MySQL Data Insertion Optimization Methods

Generally speaking, read and write operations on a MyISAM table are serialized. However, to reduce lock contention when queries and inserts hit the same table, MyISAM can process queries and inserts in parallel, depending on the concurrent_insert setting:

When concurrent_insert=0, concurrent inserts are not allowed.
When concurrent_insert=1 (the default), concurrent inserts are allowed for tables with no holes (no free space left in the middle of the data file by deleted rows), and new rows are appended to the end of the data file.
When concurrent_insert=2, inserts at the end of the data file are allowed regardless of whether the table has holes.

Given this, setting concurrent_insert to 2 is usually worthwhile; the resulting file fragmentation can be cleaned up periodically with OPTIMIZE TABLE.
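A minimal sketch of this setup (the table name t is a placeholder; concurrent_insert can be changed at runtime or set in my.cnf):

```sql
-- Allow concurrent inserts at the end of the data file, even when
-- the table has holes left by deleted rows
SET GLOBAL concurrent_insert = 2;

-- Periodically defragment the MyISAM table to reclaim those holes
OPTIMIZE TABLE t;
```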

max_write_lock_count:

By default, write operations have higher priority than reads: even if a read request arrives first and a write request arrives later, the write is processed first. This creates a problem: a burst of write requests blocks all read requests, and the reads get no chance to run until every pending write has been processed. In that case you can consider using max_write_lock_count:

max_write_lock_count=1

With this setting, after one write lock has been granted the server pauses further write operations and gives waiting read operations a chance to execute.

low_priority_updates:

Alternatively, we can be more direct and simply lower the priority of write operations, giving read operations higher priority:

low_priority_updates=1

Overall, concurrent_insert=2 is absolutely worth recommending. As for max_write_lock_count=1 and low_priority_updates=1, it depends on the situation: if you can afford to lower the priority of write operations, use low_priority_updates=1; otherwise use max_write_lock_count=1.
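Combined in my.cnf, a read-heavy MyISAM configuration might look like the following sketch (the exact values should be tuned for your workload):

```ini
[mysqld]
# Allow concurrent inserts at the end of the data file even with holes
concurrent_insert    = 2

# Give waiting reads a chance after each granted write lock
max_write_lock_count = 1

# Or, alternatively, make writes lower priority than reads:
# low_priority_updates = 1
```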



When sending large multi-row INSERT statements, the packet and buffer sizes in my.cnf also matter (set-variable is the old configuration syntax; modern MySQL drops the prefix):

set-variable = max_allowed_packet=1M
set-variable = net_buffer_length=2K



Under the MyISAM engine

1. Try to insert data in the form INSERT INTO table_name VALUES (...), (...), (...), and avoid issuing a separate INSERT INTO table_name VALUES (...) statement for each row.
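For example (the table and columns here are hypothetical), one multi-row statement replaces three single-row ones:

```sql
-- One round trip and one lock acquisition for three rows
INSERT INTO user (id, name) VALUES
  (1, 'alice'),
  (2, 'bob'),
  (3, 'carol');

-- Instead of three separate statements:
-- INSERT INTO user (id, name) VALUES (1, 'alice');
-- INSERT INTO user (id, name) VALUES (2, 'bob');
-- INSERT INTO user (id, name) VALUES (3, 'carol');
```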

2. Increase bulk_insert_buffer_size (the default is 8M).

3. When importing into a non-empty table, run ALTER TABLE table_name DISABLE KEYS first, then LOAD DATA INFILE, and when the import finishes run:

ALTER TABLE table_name ENABLE KEYS;

If the table is empty, this is not needed, because when MyISAM imports into an empty table it loads the data first and builds the indexes afterwards.
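Put together, a bulk import into a non-empty MyISAM table might look like this sketch (file path and table name are placeholders):

```sql
-- Suspend non-unique index maintenance during the bulk load
ALTER TABLE t DISABLE KEYS;

-- LOAD DATA INFILE is much faster than row-by-row INSERTs
LOAD DATA INFILE '/tmp/data.csv' INTO TABLE t
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- Rebuild the non-unique indexes in one pass
ALTER TABLE t ENABLE KEYS;
```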

4. Consider using INSERT DELAYED ... when inserting data. MySQL then puts the insert into a queue and applies the queued rows in relatively concentrated batches, which is faster. (Note that INSERT DELAYED was deprecated in MySQL 5.6 and removed in 5.7.)

5. LOAD DATA INFILE is nearly 20 times faster than using INSERT, so use it whenever possible.


Under the InnoDB engine

1. Before importing data, execute SET unique_checks=0 to skip unique-index checks, and after the import completes execute SET unique_checks=1.

2. Before importing data, execute SET foreign_key_checks=0 to skip foreign key checks, and after the import completes execute SET foreign_key_checks=1.

3. Before importing data, execute SET autocommit=0 to disable automatic transaction commits, and after the import completes execute SET autocommit=1 to restore autocommit.
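These three session settings are typically wrapped around the load, roughly as in this sketch (table and file names are placeholders):

```sql
-- Session settings for a fast InnoDB bulk import
SET unique_checks = 0;
SET foreign_key_checks = 0;
SET autocommit = 0;

LOAD DATA INFILE '/tmp/data.csv' INTO TABLE t;

COMMIT;                      -- commit the whole import as one transaction
SET autocommit = 1;
SET foreign_key_checks = 1;
SET unique_checks = 1;
```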

With InnoDB tables, rows are physically stored in primary-key order, so inserting the data already sorted by primary key speeds up the import. You cannot use DISABLE KEYS as with MyISAM.

Increasing the disk I/O capacity of the hardware also helps insertion speed (so if you are importing or exporting large amounts of data, try to do it on the most capable hardware available, to shorten the job and reduce the chance of problems).



How INSERT DELAYED works

When a thread executes an INSERT DELAYED statement on a table, a handler thread is created to process all DELAYED statements for that table, if no such handler already exists.
The thread checks whether the handler has acquired a delayed lock, and if not, it tells the handler to get one. The delayed lock can be obtained even if other threads hold a read or write lock on the table. However, the handler waits for any ALTER TABLE locks or FLUSH TABLES statements, to ensure the table structure is up to date.
The thread executes the INSERT statement, but instead of writing the row to the table it puts a copy of the row into a queue managed by the handler thread. Any syntax errors are still detected by the thread and reported to the client.
The client cannot obtain the number of duplicate rows or the AUTO_INCREMENT value of the resulting row; it cannot get them from the server because the INSERT returns before the insert operation completes. If you use the C API, the mysql_info() function returns nothing meaningful for the same reason.
The update log is updated by the handler thread when the row is inserted into the table. In the case of multi-row inserts, the update log is updated when the first row is inserted.
After every delayed_insert_limit rows are written, the handler checks whether any SELECT statements are still pending; if so, it lets them execute before continuing.
When the handler has no more rows in its queue, the table is unlocked. If no new INSERT DELAYED statement is received within delayed_insert_timeout seconds, the handler terminates.
If more than delayed_queue_size rows are already pending in a particular handler's queue, the requesting thread waits until there is room in the queue. This helps ensure that the mysqld server does not use all of its memory for delayed insert queues.
The handler thread shows up as delayed_insert in the Command column of the MySQL process list. It is killed if you execute a FLUSH TABLES statement or kill it with KILL thread_id; however, before exiting it first writes all queued rows to the table. During this time it accepts no new INSERT statements from other threads. If you execute an INSERT DELAYED afterwards, a new handler thread is created.
Note that this means INSERT DELAYED statements have higher priority than normal INSERT statements while a delayed-insert handler is running! Other update statements have to wait until the delayed-insert queue is empty, the handler thread is killed (with KILL thread_id), or FLUSH TABLES is executed.
The following status variables provide information about INSERT DELAYED statements:

Delayed_insert_threads: number of handler threads
Delayed_writes: number of rows written with INSERT DELAYED
Not_flushed_delayed_rows: number of rows waiting to be written
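These counters can be inspected at runtime (on MySQL versions before 5.7, where INSERT DELAYED still exists):

```sql
-- Show the delayed-insert counters described above
SHOW GLOBAL STATUS LIKE 'Delayed_%';
SHOW GLOBAL STATUS LIKE 'Not_flushed_delayed_rows';
```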