I can think of three methods for the moment:

1. Handle it in the database: add a UNIQUE constraint.
2. Handle it in the database: run a SELECT before each INSERT, but the number of SELECTs grows as the data grows.
3. Handle it in the application language: keep the existing data in an array and check with in_array(), but the array becomes very large as the data grows.

None of these three methods seems very good. Which approach would you use?
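For illustration, the application-side check described in approach 3 might look like the following Python sketch (the function and variable names are made up; a Python set is used in place of the PHP array, since it avoids in_array()'s linear scan). The `seen` container still grows with the data, which is exactly the memory concern raised above.

```python
# Application-side dedup: keep every value already inserted in memory.
# Memory use grows linearly with the data -- the drawback noted above.
seen = set()  # analogous to the PHP array checked with in_array()

def insert_if_new(value, rows):
    """Append value to rows only if it has not been seen before."""
    if value in seen:          # O(1) set lookup; in_array() on a PHP array is O(n)
        return False           # duplicate: skip the INSERT
    seen.add(value)
    rows.append(value)         # stand-in for the real database INSERT
    return True

rows = []
for v in ["a", "b", "a", "c"]:
    insert_if_new(v, rows)
print(rows)  # ['a', 'b', 'c'] -- the duplicate "a" was skipped
```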
Reply content:
Use a UNIQUE index and the INSERT IGNORE syntax when inserting.

The benefits are:

- Performance is decent: INSERT IGNORE automatically checks the UNIQUE index for duplicates, and because it is an index the comparison happens in memory.
- The return value is useful: a successful insert returns the number of inserted rows, and a failed (duplicate) insert returns 0.

This method remains reliable until your data reaches the millions of rows.
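A minimal runnable sketch of this approach, using Python's sqlite3 module with SQLite's `INSERT OR IGNORE` (MySQL's equivalent syntax is `INSERT IGNORE`); the table and column names here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical schema with a UNIQUE index on the column to deduplicate.
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
cur.execute("CREATE UNIQUE INDEX idx_email ON users(email)")

def insert_user(email):
    """Insert, silently skipping duplicates; returns rows inserted (1 or 0)."""
    # MySQL equivalent: INSERT IGNORE INTO users (email) VALUES (%s)
    cur.execute("INSERT OR IGNORE INTO users (email) VALUES (?)", (email,))
    return cur.rowcount  # 1 on success, 0 when the UNIQUE index rejects it

print(insert_user("a@example.com"))  # 1: new row inserted
print(insert_user("a@example.com"))  # 0: duplicate ignored
```

The return value gives exactly the success/failure signal mentioned above, with no separate SELECT needed.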
What is your scenario?