This time, let's look at how indexes and commit frequency affect the write speed of InnoDB tables, and what to watch out for.
First, a few conclusions:
1. The impact of indexes on write speed:
a. With only an auto-increment primary key, write speed improves by about 3.11% compared with having no index at all;
b. With an auto-increment primary key plus one secondary index, write speed drops by about 27.37% compared with having no index at all;
Therefore, an InnoDB table should always have an auto-increment primary key (a sketch of the three table variants appears below, before the detailed test case).
2. The impact of commit frequency on write speed (using a table with only an auto-increment primary key, writing 300,000 rows of data):
a. Waiting until all the data is written and executing a single COMMIT at the end is the most efficient;
b. Committing every 100,000 rows is about 1.17% slower than a single commit;
c. Committing every 10,000 rows is about 3.01% slower;
d. Committing every 1,000 rows is about 23.38% slower;
e. Committing every 100 rows is about 24.44% slower;
f. Committing every 10 rows is about 92.78% slower;
g. Committing after every row is about 546.78% slower, i.e., more than 5 times slower;
Therefore, it is best to batch the work and commit once after all of it is done, rather than committing after every single SQL statement.
I once ran a comparison of SQL scripts exported by mysqldump with extended-insert enabled versus disabled: importing the script without extended inserts was more than 5 times slower than importing the one with them.
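For reference, mysqldump's --extended-insert option (on by default) packs many rows into one INSERT statement, while --skip-extended-insert writes one INSERT per row. A simplified illustration of the two output styles (the values are made up):

-- with --extended-insert: one multi-row statement
INSERT INTO mytab VALUES (1, ...), (2, ...), (3, ...);
-- with --skip-extended-insert: one statement per row,
-- and, under autocommit, one commit per row
INSERT INTO mytab VALUES (1, ...);
INSERT INTO mytab VALUES (2, ...);
INSERT INTO mytab VALUES (3, ...);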
Important: this recommendation is not absolute; it depends on the scenario. For high-concurrency online business, transactions should be committed as soon as possible to avoid widening the scope of locks. For scenarios without high concurrency, however, especially batch data imports, a batched commit approach is recommended.
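To make conclusion 1 concrete, that kind of index comparison can be set up with three table variants along these lines (a minimal sketch; the names are illustrative, not the exact tables from the test):

-- variant 1: no index at all
CREATE TABLE t_noidx (
 c1 int NOT NULL,
 c2 int NOT NULL
) ENGINE=InnoDB;

-- variant 2: auto-increment primary key only
CREATE TABLE t_pk (
 id int unsigned NOT NULL AUTO_INCREMENT,
 c1 int NOT NULL,
 c2 int NOT NULL,
 PRIMARY KEY (id)
) ENGINE=InnoDB;

-- variant 3: auto-increment primary key plus a secondary index
CREATE TABLE t_pk_sec (
 id int unsigned NOT NULL AUTO_INCREMENT,
 c1 int NOT NULL,
 c2 int NOT NULL,
 PRIMARY KEY (id),
 KEY idx_c1 (c1)
) ENGINE=InnoDB;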
Below is the detailed test case; interested readers can follow along:
DROP TABLE IF EXISTS `mytab`;
CREATE TABLE `mytab` (
 `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
 `c1` int(11) NOT NULL DEFAULT '0',
 `c2` int(11) NOT NULL DEFAULT '0',
 `c3` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
 `c4` varchar(200) NOT NULL DEFAULT '',
 PRIMARY KEY (`id`)
) ENGINE=InnoDB;
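A quick note on the schema: the procedure below fills c3 with NOW() and c4 with a 200-character random string, so every row carries some realistic width rather than just a couple of integers.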
DROP PROCEDURE IF EXISTS `insert_mytab`;
DELIMITER $$$
CREATE PROCEDURE `insert_mytab` (IN rownum INT, IN commitrate INT)
BEGIN
 DECLARE i INT DEFAULT 0;
 SET autocommit = 0;
 WHILE i < rownum DO
  INSERT INTO mytab (c1, c2, c3, c4) VALUES (
   FLOOR(RAND() * rownum),
   FLOOR(RAND() * rownum),
   NOW(),
   REPEAT(CHAR(ROUND(RAND() * 255)), 200)
  );
  SET i = i + 1;
  /* commit once every commitrate rows */
  IF (commitrate > 0) AND (i % commitrate = 0) THEN
   COMMIT;
   SELECT CONCAT('commitrate: ', commitrate, ' in ', i);
  END IF;
 END WHILE;
 /* one final commit to make sure the remaining rows are persisted */
 COMMIT;
 SELECT 'ALL COMMIT;';
END $$$
DELIMITER ;
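Note that with autocommit disabled inside the procedure, rows accumulate in a single transaction until a COMMIT runs; when commitrate is 0 the IF branch never fires, so only the final COMMIT executes, which gives the one-shot commit case.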
# Test calls
CALL insert_mytab(300000, 1);      -- commit after every row
CALL insert_mytab(300000, 10);     -- commit every 10 rows
CALL insert_mytab(300000, 100);    -- commit every 100 rows
CALL insert_mytab(300000, 1000);   -- commit every 1,000 rows
CALL insert_mytab(300000, 10000);  -- commit every 10,000 rows
CALL insert_mytab(300000, 100000); -- commit every 100,000 rows
CALL insert_mytab(300000, 0);      -- single commit at the end
A comparison of the elapsed times: