Ways to prevent MySQL from inserting duplicate records


There are several ways to prevent MySQL from inserting duplicate records: IGNORE, REPLACE, and ON DUPLICATE KEY UPDATE. The check can also be done in PHP.

Option 1: Use the IGNORE keyword

If the uniqueness of a record is enforced by a PRIMARY KEY or a UNIQUE index, duplicate inserts can be avoided like this:

The code is as follows:

INSERT IGNORE INTO `table_name` (`email`, `phone`, `user_id`) VALUES ('test9@163.com', '99999', '9999');

When a duplicate record already exists, the insert is silently ignored and the statement reports 0 affected rows.
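This behavior can be sketched with Python's sqlite3 module, using SQLite's analogous INSERT OR IGNORE syntax (the table and column names here are illustrative, taken from the example above):

```python
import sqlite3

# In-memory database; SQLite's INSERT OR IGNORE mirrors MySQL's INSERT IGNORE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_name (email TEXT, phone TEXT, user_id TEXT, UNIQUE(phone))")

cur = conn.execute("INSERT OR IGNORE INTO table_name (email, phone, user_id) "
                   "VALUES ('test9@163.com', '99999', '9999')")
print(cur.rowcount)  # 1: the row was inserted

cur = conn.execute("INSERT OR IGNORE INTO table_name (email, phone, user_id) "
                   "VALUES ('other@163.com', '99999', '1')")
print(cur.rowcount)  # 0: duplicate phone, silently ignored
```

The second insert collides on the unique phone column, so it is skipped and the affected-row count is 0, just as described above for MySQL.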

Another use is copying one table into another while skipping duplicate records. The code is as follows:

INSERT IGNORE INTO `table_1` (`name`) SELECT `name` FROM `table_2`;

Option 2: Use REPLACE

Syntax:

The code is as follows:

REPLACE INTO `table_name` (`col_name`, ...) VALUES (...);
REPLACE INTO `table_name` (`col_name`, ...) SELECT ...;
REPLACE INTO `table_name` SET `col_name` = 'value', ...;

How it works:

REPLACE works like INSERT, except that when an old row has the same value as the new row in a PRIMARY KEY or UNIQUE index, the old row is deleted before the new one is inserted. That is:

1. Try to insert the new row into the table.
2. If the insert fails with a duplicate-key error on a PRIMARY KEY or UNIQUE index, delete the conflicting row, then try to insert the new row again.

The criterion for deciding that an old record "has the same value" as the new record is that the table has a PRIMARY KEY or a UNIQUE index; otherwise a REPLACE statement is pointless and behaves exactly like INSERT, since no index can show that the new row duplicates another.

Return value: REPLACE returns the number of affected rows, which is the number of rows deleted plus the number inserted. This makes it easy to tell whether REPLACE only added a row or also replaced one: check whether the count is 1 (a plain insert) or greater (a replace).

Example (the phone column has a UNIQUE index):

The code is as follows:

REPLACE INTO `table_name` (`email`, `phone`, `user_id`) VALUES ('test569', '99999', '123');
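The delete-then-insert behavior of REPLACE has a visible side effect: columns not named in the statement fall back to their defaults instead of keeping the old row's values. A sketch with Python's sqlite3 module, whose INSERT OR REPLACE has the same semantics (the note column is an illustrative addition):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE table_name (
    email TEXT, phone TEXT UNIQUE, user_id TEXT, note TEXT DEFAULT 'none')""")

conn.execute("INSERT INTO table_name (email, phone, user_id, note) "
             "VALUES ('test9@163.com', '99999', '123', 'important')")

# REPLACE deletes the conflicting row, then inserts a completely fresh one:
conn.execute("INSERT OR REPLACE INTO table_name (email, phone, user_id) "
             "VALUES ('test569', '99999', '123')")

row = conn.execute("SELECT email, note FROM table_name WHERE phone = '99999'").fetchone()
print(row)  # ('test569', 'none') -- the old 'important' note is gone
```

Because the old row is deleted, the note column reverts to its default; ON DUPLICATE KEY UPDATE (below) does not have this effect, which is often the deciding factor between the two.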

In addition, this can be done in SQL Server:

The code is as follows:

IF NOT EXISTS (SELECT phone FROM t WHERE phone = '1')
    INSERT INTO t (phone, update_time) VALUES ('1', GETDATE())
ELSE
    UPDATE t SET update_time = GETDATE() WHERE phone = '1'

Option 3: Use ON DUPLICATE KEY UPDATE

As above, you can also append an ON DUPLICATE KEY UPDATE clause to INSERT INTO .... If you specify ON DUPLICATE KEY UPDATE and inserting the row would cause a duplicate value in a UNIQUE index or PRIMARY KEY, an UPDATE of the old row is performed instead. For example, if column a is defined as UNIQUE and already contains the value 1, the following two statements have the same effect:

The code is as follows:

INSERT INTO `table` (`a`, `b`, `c`) VALUES (1, 2, 3)
    ON DUPLICATE KEY UPDATE `c` = `c` + 1;

UPDATE `table` SET `c` = `c` + 1 WHERE `a` = 1;

If the row is inserted as a new record, the affected-row count is 1; if an existing record is updated instead, the affected-row count is 2.
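The same effect can be sketched with Python's sqlite3 module, whose ON CONFLICT ... DO UPDATE clause (SQLite 3.24+) is the closest analogue of MySQL's ON DUPLICATE KEY UPDATE (one difference: sqlite3 reports 1 affected row on the update path, not MySQL's 2):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER UNIQUE, b INTEGER, c INTEGER)")
conn.execute("INSERT INTO t VALUES (1, 2, 3)")

# a=1 already exists, so the UPDATE branch runs instead of inserting a row:
conn.execute("INSERT INTO t (a, b, c) VALUES (1, 2, 3) "
             "ON CONFLICT(a) DO UPDATE SET c = c + 1")

print(conn.execute("SELECT c FROM t WHERE a = 1").fetchone()[0])  # 4
```

Unlike REPLACE, the existing row is updated in place, so columns not mentioned in the UPDATE clause keep their old values.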

Note: if column b is also UNIQUE, the INSERT is equivalent to this UPDATE statement:

The code is as follows:

UPDATE `table` SET `c` = `c` + 1 WHERE `a` = 1 OR `b` = 2 LIMIT 1;

If a=1 OR b=2 matches several rows, only one row is updated. In general, you should avoid using an ON DUPLICATE KEY UPDATE clause on tables with multiple UNIQUE indexes.

In the UPDATE clause you can use the VALUES(col_name) function to refer to the column value from the INSERT part of the INSERT ... ON DUPLICATE KEY UPDATE statement. In other words, when a duplicate-key conflict occurs for a row, VALUES(col_name) in the UPDATE clause refers to the value of col_name that the row would have inserted. This is especially useful for multi-row inserts. The VALUES() function is meaningful only inside INSERT ... ON DUPLICATE KEY UPDATE and returns NULL anywhere else.

The code is as follows:

INSERT INTO `table` (`a`, `b`, `c`) VALUES (1, 2, 3), (4, 5, 6)
    ON DUPLICATE KEY UPDATE `c` = VALUES(`a`) + VALUES(`b`);

This statement acts the same as the following two statements:

The code is as follows:

INSERT INTO `table` (`a`, `b`, `c`) VALUES (1, 2, 3)
    ON DUPLICATE KEY UPDATE `c` = 3;
INSERT INTO `table` (`a`, `b`, `c`) VALUES (4, 5, 6)
    ON DUPLICATE KEY UPDATE `c` = 9;
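MySQL's VALUES(col_name) has a close analogue in SQLite's excluded pseudo-table, so the multi-row case can be sketched with Python's sqlite3 module (SQLite 3.24+; illustrative table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER UNIQUE, b INTEGER, c INTEGER)")
conn.execute("INSERT INTO t VALUES (1, 0, 0)")  # a=1 already exists

# excluded.a / excluded.b refer to the values the conflicting row tried to
# insert, just like MySQL's VALUES(a) / VALUES(b):
conn.execute("INSERT INTO t (a, b, c) VALUES (1, 2, 3), (4, 5, 6) "
             "ON CONFLICT(a) DO UPDATE SET c = excluded.a + excluded.b")

print(conn.execute("SELECT a, c FROM t ORDER BY a").fetchall())  # [(1, 3), (4, 6)]
```

The first row conflicts on a=1 and gets c = 1 + 2 = 3 via the update branch; the second row has no conflict and is inserted as-is with c = 6.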

Note: when ON DUPLICATE KEY UPDATE is used, the DELAYED option is ignored.

Example:

This example comes from an actual project: importing data from one table into another while accounting for duplicate records. The unique index is on email:

The code is as follows:

INSERT INTO `table_name1` (`title`, `first_name`, `last_name`, `email`, `phone`, `user_id`, `role_id`, `status`, `campaign_id`)
SELECT '', '', '', `table_name2`.`email`, `table_name2`.`phone`, NULL, NULL, 'pending', `table_name2`.`campaign_id`
FROM `table_name2`
WHERE `table_name2`.`status` = 1
ON DUPLICATE KEY UPDATE `status` = 'pending';

Here is another example:

The code is as follows:

INSERT INTO `class` SELECT * FROM `class1`
    ON DUPLICATE KEY UPDATE `class`.`course` = `class1`.`course`;

Other keywords: DELAYED performs a quick insert without waiting for confirmation, improving insert performance at the cost of not caring much about failures. IGNORE only cares whether a record with the same key already exists: if not, the row is added; if so, the row is silently skipped.

Preventing duplicate inserts in PHP

The code is as follows:

<?php
$link = mysql_connect('localhost', 'root', '1234');   // connect to MySQL
$username = $_GET["name"];                            // value passed from the client form
$q = "SELECT * FROM usertable WHERE user_name = '$username'";
mysql_query("SET NAMES gb2312");                      // avoid garbled Chinese characters
$rs = mysql_query($q, $link);                         // query the database
$num_rows = mysql_num_rows($rs);                      // number of matching rows
if ($num_rows == 0) {
    // no such user yet, so insert the record (register the user)
    $exec = "INSERT INTO usertable (user_name) VALUES ('$username')";
    mysql_query("SET NAMES gb2312");
    mysql_query($exec, $link);
    echo "User registered successfully!";
} else {
    echo "This user name already exists, please choose another user name!";
}
?>
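The same check-then-insert pattern can be sketched in Python with sqlite3 (illustrative table and function names). Note that between the SELECT and the INSERT another client could insert the same name, so a UNIQUE index combined with one of the SQL statements above is the safer approach:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usertable (user_name TEXT)")

def register(username):
    # Check-then-insert: works for a single client, but is racy without
    # a UNIQUE index backing it up. Parameterized queries avoid injection.
    exists = conn.execute("SELECT 1 FROM usertable WHERE user_name = ?",
                          (username,)).fetchone()
    if exists is None:
        conn.execute("INSERT INTO usertable (user_name) VALUES (?)", (username,))
        return "User registered successfully!"
    return "This user name already exists!"

print(register("alice"))  # User registered successfully!
print(register("alice"))  # This user name already exists!
```

The second call finds the existing row and refuses the insert, mirroring the PHP logic above.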

Finally, here are some ways to delete duplicate records.

SQL statements for querying and deleting duplicate records

1. Find redundant records in the table, where duplicates are judged by a single field (peopleId):

The code is as follows:

SELECT * FROM people
WHERE peopleId IN (SELECT peopleId FROM people GROUP BY peopleId HAVING COUNT(peopleId) > 1);

2. Delete redundant records in the table, judging duplicates by a single field (peopleId) and keeping only the record with the smallest rowid:

The code is as follows:

DELETE FROM people
WHERE peopleId IN (SELECT peopleId FROM people GROUP BY peopleId HAVING COUNT(peopleId) > 1)
  AND rowid NOT IN (SELECT MIN(rowid) FROM people GROUP BY peopleId HAVING COUNT(peopleId) > 1);
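SQLite tables have an implicit rowid, so this deduplication pattern can be tested directly with Python's sqlite3 module (MySQL has no rowid pseudo-column, so there you would use an auto-increment id instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (peopleId INTEGER)")
conn.executemany("INSERT INTO people VALUES (?)", [(1,), (1,), (2,), (3,), (3,)])

# Delete duplicates, keeping only the row with the smallest rowid per peopleId:
conn.execute("""
    DELETE FROM people
    WHERE peopleId IN (SELECT peopleId FROM people
                       GROUP BY peopleId HAVING COUNT(peopleId) > 1)
      AND rowid NOT IN (SELECT MIN(rowid) FROM people
                        GROUP BY peopleId HAVING COUNT(peopleId) > 1)
""")

print(conn.execute("SELECT peopleId FROM people ORDER BY peopleId").fetchall())
# [(1,), (2,), (3,)]
```

Each duplicated peopleId keeps exactly one row (its earliest by rowid), and unique values are untouched.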

3. Find redundant records in the table (multiple fields):

The code is as follows:

SELECT * FROM vitae a
WHERE (a.peopleId, a.seq) IN (SELECT peopleId, seq FROM vitae GROUP BY peopleId, seq HAVING COUNT(*) > 1);

4. Delete redundant records in the table (multiple fields), keeping only the record with the smallest rowid:

The code is as follows:

DELETE FROM vitae a
WHERE (a.peopleId, a.seq) IN (SELECT peopleId, seq FROM vitae GROUP BY peopleId, seq HAVING COUNT(*) > 1)
  AND rowid NOT IN (SELECT MIN(rowid) FROM vitae GROUP BY peopleId, seq HAVING COUNT(*) > 1);

5. Find redundant records in the table (multiple fields), excluding the record with the smallest rowid:

The code is as follows:

SELECT * FROM vitae a
WHERE (a.peopleId, a.seq) IN (SELECT peopleId, seq FROM vitae GROUP BY peopleId, seq HAVING COUNT(*) > 1)
  AND rowid NOT IN (SELECT MIN(rowid) FROM vitae GROUP BY peopleId, seq HAVING COUNT(*) > 1);

==============================

Solved! I created three separate queries, one per row, and it is working for now.

After adding a UNIQUE index on the (auctionid, order) pair, this code works:

INSERT IGNORE INTO selections
    (auctionid, `order`, title, startamount)
SELECT auctions.id, 1, playerA, 0.01
FROM auctions, game
WHERE auctions.betfairmark = game.betfairmarketid;

INSERT IGNORE INTO selections
    (auctionid, `order`, title, startamount)
SELECT auctions.id, 2, playerB, 0.01
FROM auctions, game
WHERE auctions.betfairmark = game.betfairmarketid;
That is the entire content of this article; I hope it helps you.
