Problems with insertions
When PHP has to manipulate large amounts of data, the two solutions I can think of are as follows.
Method one: build the SQL statement inside a foreach loop and insert into the database on every iteration:
INSERT INTO xxx VALUES (xxx, xxx, xxx)
Method two: build up one combined SQL statement inside the foreach loop and run a single insert at the end (sketched below):
INSERT INTO xxx (field1, field2, field3) VALUES (xxx1, xxx2, xxx3), (xxx1, xxx2, xxx3), ...
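For reference, a minimal PHP sketch of method two, assuming $db is a mysqli connection and $rows holds the data to insert (both names are hypothetical):

$values = [];
foreach ($rows as $row) {
    // escape each value so the combined statement stays safe
    $values[] = sprintf("('%s', '%s', '%s')",
        $db->real_escape_string($row['field1']),
        $db->real_escape_string($row['field2']),
        $db->real_escape_string($row['field3']));
}
// one round trip for the whole batch instead of one per row
$db->query('INSERT INTO xxx (field1, field2, field3) VALUES ' . implode(', ', $values));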
If I also need to check whether each row already exists in the database before inserting it, do I have to run a SELECT before every single insert? That seems very inefficient. How can it be optimized?
Issues with updates
This is similar to the insert problem above: SELECT first, UPDATE if the row exists, INSERT if it does not. That still means a lot of SQL statements. How can it be optimized?
Reply content:
1. MySQL has a REPLACE INTO syntax: if a row with the same key already exists it is replaced, otherwise a new row is inserted.
2. MySQL also has INSERT ... ON DUPLICATE KEY UPDATE: when the insert would violate a unique key, the existing row is updated instead (both are sketched after this list).
3. In real development, genuinely large batch inserts are rare; anything under 1,000 rows certainly does not count as a large batch. So if you are just trying to save yourself some work, you will usually end up confirming the prophecy that the 4 hours of leisure you save by being lazy cause enough trouble to cost you 8 hours of fixing.
Usually high-volume inserts happen when importing data from an old database, but that kind of import normally runs only once, so it is not a big deal. Other cases, such as importing data from an uploaded CSV file, depend on the specific business logic; a common approach is to attempt the insert in a try/catch, display the rows that failed, let the user confirm the overwrite, and then run the update.
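A minimal sketch of both statements, assuming a hypothetical posts table with id as its primary key:

REPLACE INTO posts (id, post_title, post_content)
VALUES (1, 'title_1', 'content_1');
-- the old row (if any) is deleted and a fresh row is inserted

INSERT INTO posts (id, post_title, post_content)
VALUES (1, 'title_1', 'content_1')
ON DUPLICATE KEY UPDATE post_title = VALUES(post_title), post_content = VALUES(post_content);
-- the existing row (if any) is updated in place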
1) If you can guarantee the inserted data is never duplicated, a plain INSERT is certainly the better fit.
2) Indexes: a proper index really does help performance.
3) Using MySQL's bulk import helps performance to some degree, but the downside is that data may be lost.
4) Turn off autocommit (with autocommit = true, every statement is committed as its own transaction) and commit once per batch, for example every 10,000 records; the speed improves dramatically, and on a reasonably configured machine 10,000 QPS is easy to reach (see the PHP sketch after this list).
5) Redesign the MySQL deployment: read/write splitting, clustering, SSDs, and so on.
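A minimal PHP sketch of point 4, assuming a mysqli connection, a hypothetical posts table, and $rows as the data to import; the batch size of 10,000 matches the example above:

$db = new mysqli('localhost', 'user', 'pass', 'test'); // hypothetical credentials
$db->autocommit(false); // stop committing after every single statement
$stmt = $db->prepare('INSERT INTO posts (id, post_title, post_content) VALUES (?, ?, ?)');
$i = 0;
foreach ($rows as $row) {
    $stmt->bind_param('iss', $row['id'], $row['title'], $row['content']);
    $stmt->execute();
    if (++$i % 10000 === 0) {
        $db->commit(); // commit once per 10,000 records instead of once per insert
    }
}
$db->commit(); // commit the final partial batch
$db->autocommit(true);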
REPLACE deletes the existing row and re-inserts when the key already exists; ON DUPLICATE KEY UPDATE updates the existing row in place.
You can use REPLACE to spare yourself the trouble of choosing between UPDATE and INSERT.
Use ON DUPLICATE KEY UPDATE: create a unique index on every field that must not repeat, and then you no longer have to check first whether to insert or update; MySQL handles it. The assignments after UPDATE are executed automatically whenever a duplicate occurs (see the sketch below).
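A minimal sketch, assuming a hypothetical users table where email must stay unique:

ALTER TABLE users ADD UNIQUE KEY uk_email (email);

INSERT INTO users (email, name)
VALUES ('a@example.com', 'Alice')
ON DUPLICATE KEY UPDATE name = VALUES(name); -- runs only when the email already exists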
No need for REPLACE or ON DUPLICATE KEY UPDATE; you can try this instead:
Open a transaction, loop over the INSERTs, and fall back to an UPDATE whenever an insert fails.
$db->query('SET AUTOCOMMIT=0');
$db->query('START TRANSACTION');
// loop starts here
if (!$db->query('INSERT INTO posts(id, post_title, post_content) VALUES(1, "title_1", "content_1")')) {
    $db->query('UPDATE posts SET post_title = "title_1", post_content = "content_1" WHERE id = 1');
}
// insert_id is 0 if the insert failed, the table has no AUTO_INCREMENT column, or the last statement was not an INSERT
echo $db->insert_id;
// loop ends here
$db->query('COMMIT');
$db->query('SET AUTOCOMMIT=1');