A data collection program I wrote recently produced a file with more than a million rows. Each row has four fields, and the requirement was to delete the rows whose second field duplicates that of another row. I could not find a suitable tool for this on Linux: stream-processing tools such as sed and gawk handle one line at a time, and I did not see a straightforward way with them to pick out rows with duplicate field values. It looked like I would have to write a Python program, when it suddenly occurred to me to use MySQL instead, so I took a detour:
1. Use mysqlimport --local dbname data.txt to import the data into a table. mysqlimport derives the table name from the file name (data.txt is loaded into the table named data), so a table with that name must already exist in the database.
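For example, assuming the four fields are tab-separated and the column names f1..f4 are just placeholders, the table creation and import could look roughly like this:

CREATE TABLE data (f1 VARCHAR(255), f2 VARCHAR(255), f3 VARCHAR(255), f4 VARCHAR(255));

mysqlimport --local --fields-terminated-by='\t' -u root -p dbname data.txt

(--fields-terminated-by is only needed if the file uses a delimiter other than the default tab.)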
2. Execute the following SQL statements (uniqfield stands for the field whose values must be unique):
USE dbname;
-- number every row; AUTO_INCREMENT requires the column to be a key
ALTER TABLE tablename ADD rowid INT NOT NULL AUTO_INCREMENT, ADD PRIMARY KEY (rowid);
-- keep the first rowid of each uniqfield value
CREATE TABLE t SELECT MIN(rowid) AS rowid FROM tablename GROUP BY uniqfield;
-- rebuild the table from only those rows
CREATE TABLE t2 SELECT tablename.* FROM tablename, t WHERE tablename.rowid = t.rowid;
-- swap the deduplicated table into place
DROP TABLE tablename;
RENAME TABLE t2 TO tablename;
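To finish up, the helper table t and the added rowid column can be dropped, and the deduplicated data written back out to a file. This is just a sketch; the output path is an example and the MySQL server needs permission to write there:

-- confirm no uniqfield value appears more than once
SELECT uniqfield, COUNT(*) AS cnt FROM tablename GROUP BY uniqfield HAVING cnt > 1;
DROP TABLE t;
ALTER TABLE tablename DROP COLUMN rowid;
SELECT * INTO OUTFILE '/tmp/data_dedup.txt' FROM tablename;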