[Help] How do I deduplicate a table with tens of millions of rows?
I have always worked with small databases, so I never noticed how much indexes, data types, and so on affect efficiency. You know the recent password leak? I downloaded it and imported it into MySQL: roughly 21 million records in total. I kept only the password field, deleted all the other fields, and ran SELECT, INSERT, and similar tests. With an index the difference in SELECT efficiency is obvious, but I ran into trouble when trying to remove duplicates.
Method one:
CREATE TABLE newtable SELECT DISTINCT pwd FROM oldtable;
This looks like it should be the most efficient, but in practice it grinds the machine to a halt and soon runs out of memory.
Method two:
Fetch a batch, then delete its duplicates (each pass handles $num records; I used $num = 50):
$result = mysql_query("SELECT MIN(id), pwd FROM tablename WHERE id BETWEEN $id AND " . ($id + $num) . " GROUP BY pwd");
while ($row = mysql_fetch_row($result)) {
    mysql_query("DELETE FROM tablename WHERE id > $row[0] AND pwd = '$row[1]'");
}
$id += $num;
The $id is carried between requests via the query string or a cookie. The efficiency is still too low: after 100 minutes of processing, only a bit over 300,000 duplicates had been deleted.
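For reference, Method two can be written out as a self-contained sketch. This uses SQLite in place of MySQL so it runs anywhere (the table name and sample rows are invented for illustration), and it fixes the range bug in the original query, which compared against $num alone instead of $id + $num:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tablename (id INTEGER PRIMARY KEY, pwd TEXT)")
cur.executemany("INSERT INTO tablename (pwd) VALUES (?)",
                [("a",), ("b",), ("a",), ("c",), ("b",), ("a",)])

num = 3  # batch size; the poster used 50
start = 1
max_id = cur.execute("SELECT MAX(id) FROM tablename").fetchone()[0]
while start <= max_id:
    # For each pwd seen in the current id window, find its first (smallest) id...
    batch = cur.execute(
        "SELECT MIN(id), pwd FROM tablename "
        "WHERE id BETWEEN ? AND ? GROUP BY pwd",
        (start, start + num - 1)).fetchall()
    # ...and delete every later occurrence anywhere in the table.
    for keep_id, pwd in batch:
        cur.execute("DELETE FROM tablename WHERE id > ? AND pwd = ?",
                    (keep_id, pwd))
    start += num

remaining = [r[0] for r in cur.execute(
    "SELECT pwd FROM tablename ORDER BY id")]
print(remaining)  # → ['a', 'b', 'c']
```

Because each DELETE removes later occurrences across the whole table, not just the current window, cross-batch duplicates are cleaned up by whichever batch sees the value first.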
What should I do to make this more efficient? Thanks.
------Solution--------------------
The temporary-table approach is good.
I have generally recommended it to others before, though they don't always listen; with small amounts of data it doesn't matter much.
http://topic.csdn.net/u/20111225/22/7cabedc3-5e9e-42b3-b05b-153ba5a5a67f.html
Heavy operations inevitably cost resources..... unless you're willing to wait.
------Solution--------------------
With 2100w (21 million) rows I'm not sure how fast adding a unique key will be, but you can try:
SQL code
alter ignore table MyPwd add unique (pwd);
alter table MyPwd drop index pwd;
------Solution--------------------
Use a temporary table: CREATE TEMPORARY TABLE ....
------Solution--------------------
Try this:
Create a new table with a unique key on the field.
Export the old data to a SQL file.
Re-import it with source.
------Solution--------------------
You can add a unique key rather than a plain index; duplicate rows then simply raise errors that get ignored.
When SELECT runs out of memory it spills to disk, and DISTINCT still has to compare rows to find duplicates, so it shouldn't be faster than re-importing with source.
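A minimal illustration of that unique-key idea, again using SQLite (whose INSERT OR IGNORE corresponds to MySQL's INSERT IGNORE); the table names follow the question and the sample data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE oldtable (id INTEGER PRIMARY KEY, pwd TEXT)")
cur.executemany("INSERT INTO oldtable (pwd) VALUES (?)",
                [("a",), ("b",), ("a",), ("b",)])

# New table with a unique key on pwd: rows that would violate the key are
# silently skipped (MySQL: INSERT IGNORE, SQLite: INSERT OR IGNORE).
cur.execute("CREATE TABLE newtable (id INTEGER PRIMARY KEY, pwd TEXT UNIQUE)")
cur.execute("INSERT OR IGNORE INTO newtable (pwd) SELECT pwd FROM oldtable")

deduped = [r[0] for r in cur.execute("SELECT pwd FROM newtable ORDER BY id")]
print(deduped)  # → ['a', 'b']
```

This copies the data in a single pass and lets the unique key do the deduplication, instead of comparing rows in application code.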
------Solution--------------------
Quoting an earlier reply:
> You can add a unique key rather than a plain index;
> duplicate rows then simply raise errors that get ignored.
> When SELECT runs out of memory it spills to disk, and DISTINCT still has to compare rows to find duplicates,
> so it shouldn't be faster than re-importing with source.
Please see my reply at #7: as long as you don't build an index on the pwd column first, the method in #7 is very efficient; it finished in 110 seconds. I ran it in SQLyog.