How to delete rows with duplicate fields from a big data file in Linux
A data collection program I wrote recently produced a file of more than a million lines, each line consisting of four fields, and rows with duplicate values in one field had to be deleted. I could not find a suitable tool in Linux: stream-processing tools such as sed and gawk handle one line at a time and cannot match up rows with duplicate fields across a whole file. It looked like I would have to write a Python program, but then I suddenly remembered MySQL, so I shifted the whole job over to it:
1. Use mysqlimport --local dbname data.txt to import the data into a table. mysqlimport derives the table name from the file name, so the file name must match the target table. For example:
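A minimal sketch of this step, not from the original text: mysqlimport loads into an existing table, so the table has to be created first. The user root, the column names f1 through f4, and the VARCHAR types are all assumptions; adjust them to the real data.
# Create a table matching the four-field layout (column names and types are assumed).
mysql -u root -p dbname -e "CREATE TABLE data (f1 VARCHAR(64), f2 VARCHAR(64), f3 VARCHAR(64), f4 VARCHAR(64))"
# Import; mysqlimport maps data.txt onto the table named data. Fields are tab-separated by default.
mysqlimport --local -u root -p dbname data.txt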
2. Execute the following SQL statements (uniqfield is the field whose values must be unique):
The code is as follows:
USE dbname;
-- Add a sequential row id; MySQL requires an AUTO_INCREMENT column to be defined as a key.
ALTER TABLE tablename ADD COLUMN rowid INT NOT NULL AUTO_INCREMENT, ADD PRIMARY KEY (rowid);
-- Keep only the first occurrence (smallest rowid) of each uniqfield value.
CREATE TABLE t SELECT MIN(rowid) AS rowid FROM tablename GROUP BY uniqfield;
-- Rebuild the table from the surviving rows.
CREATE TABLE t2 SELECT tablename.* FROM tablename, t WHERE tablename.rowid = t.rowid;
-- Swap the deduplicated table into place and drop the helper table.
DROP TABLE tablename;
RENAME TABLE t2 TO tablename;
DROP TABLE t;
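To confirm the result and get the cleaned data back out as a file, something like the following should work. This is a sketch, not part of the original recipe: the output path /tmp/data_dedup.txt is illustrative, and SELECT ... INTO OUTFILE writes on the database server and requires the FILE privilege.
-- Drop the helper column so the export matches the original four-field layout.
ALTER TABLE tablename DROP COLUMN rowid;
-- Sanity check: should return no rows once deduplication succeeded.
SELECT uniqfield, COUNT(*) FROM tablename GROUP BY uniqfield HAVING COUNT(*) > 1;
-- Write the cleaned rows back to a flat file (the path is an assumption).
SELECT * FROM tablename INTO OUTFILE '/tmp/data_dedup.txt';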