Work required deduplicating data in a table, so I'm writing this note down. It's really a beginner-level question, but worth recording.
In fact, when it comes to deduplication, the best approach is to account for redundancy when designing the application and the database in the first place, so that duplicate rows never get inserted at all. In this project, though, a row only counts as a duplicate when two specific fields are identical at the same time, and the table still needs an auto-increment ID as the primary key for convenient querying. So... fine, I'll clean up the data myself.
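Had this been handled at design time, a composite unique key would block such duplicates on insert. A minimal sketch, with hypothetical table and column names (the real schema is not shown in the post):

```sql
-- Hypothetical schema: a row is a duplicate when (column1, column2) repeat together
CREATE TABLE table_name (
    id      INT AUTO_INCREMENT PRIMARY KEY,   -- kept for convenient querying
    column1 VARCHAR(100) NOT NULL,
    column2 VARCHAR(100) NOT NULL,
    UNIQUE KEY uq_col1_col2 (column1, column2)
);

-- INSERT IGNORE silently skips rows that would violate the unique key
INSERT IGNORE INTO table_name (column1, column2) VALUES ('a', 'b');
```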
Since the table contains a lot of duplicate rows, the deduplication method I chose was to build a new table with a GROUP BY aggregate query and then rename it. The SQL is as follows:
CREATE TABLE TMP SELECT * FROM table_name GROUP BY COLUMN1,COLUMN2;
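The full table swap might look like the sketch below (table and column names are placeholders). One caveat: since MySQL 5.7 the default ONLY_FULL_GROUP_BY SQL mode rejects `SELECT *` combined with `GROUP BY` unless the remaining columns are functionally dependent on the grouped ones, so you may need to list columns explicitly or adjust the mode. Also note that `CREATE TABLE ... SELECT` does not copy indexes or the AUTO_INCREMENT attribute from the source table.

```sql
-- Build the deduplicated copy: one row per distinct (column1, column2) pair
CREATE TABLE tmp SELECT * FROM table_name GROUP BY column1, column2;

-- Swap the tables, keeping the original around as a backup
RENAME TABLE table_name TO table_name_bak, tmp TO table_name;

-- Once the result is verified, the backup can be dropped:
-- DROP TABLE table_name_bak;
```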
Then, because so many rows had been deleted, the auto-increment ID had climbed to almost 600,000, which was too ugly to look at, so I reset the IDs to start from 1 again. The method: first drop the ID column, then re-add it with auto-increment enabled. Well, finally it looks comfortable... Deduplicating the 300,000-plus rows left about 180,000 rows and took around 70s; with tens of millions of rows I'd probably cry. Anyway, that's it for now; I'll keep thinking about a better approach.
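The ID reset described above can be sketched as follows (again with placeholder names; this assumes the `id` column in the rebuilt table carries no constraints that other objects depend on):

```sql
-- Drop the old, gap-ridden ID column
ALTER TABLE table_name DROP COLUMN id;

-- Re-add it as an auto-increment primary key; rows are renumbered from 1
ALTER TABLE table_name
    ADD COLUMN id INT AUTO_INCREMENT PRIMARY KEY FIRST;
```

Note that this rewrites the whole table, which is part of why the operation is slow on large data sets.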
This article is from the "11287515" blog; please be sure to keep this source: http://11297515.blog.51cto.com/11287515/1957085
"Problem finishing" MySQL massive data deduplication