Deleting rows with duplicate fields from large data files on Linux

A data collection program I wrote recently produced a file with more than a million rows. Each row has four fields, and the requirement was to delete every row whose second field duplicates one that has already appeared. I could not find a suitable tool for this on Linux: stream processors such as sed and gawk handle one row at a time and offered no obvious way to find rows with duplicate fields. It looked like I would have to write a Python program, when it occurred to me to let MySQL do the heavy lifting:
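
For concreteness, the input might look something like this (a purely hypothetical, tab-separated sample; the goal is to keep only the first row for each distinct value of the second field):

1001	alice	2018-01-02	ok
1002	bob	2018-01-02	ok
1003	alice	2018-01-03	ok      <- deleted, second field "alice" already seen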

1. Import the data into the table with mysqlimport --local dbname data.txt. The table name must match the file's base name (a fuller sketch of this step follows the SQL below).
2. Execute the following SQL statements, where uniqfield stands for the field that must end up unique:

USE dbname;

-- Add a surrogate key; MySQL requires an AUTO_INCREMENT column to be indexed.
ALTER TABLE tablename ADD rowid INT NOT NULL AUTO_INCREMENT, ADD PRIMARY KEY (rowid);

-- Keep the first-imported row (lowest rowid) for each value of uniqfield.
CREATE TABLE t SELECT MIN(rowid) AS rowid FROM tablename GROUP BY uniqfield;

CREATE TABLE t2 SELECT tablename.* FROM tablename, t WHERE tablename.rowid = t.rowid;

DROP TABLE tablename;

RENAME TABLE t2 TO tablename;

-- Optional cleanup: the helper table and the surrogate key are no longer needed.
DROP TABLE t;

ALTER TABLE tablename DROP COLUMN rowid;
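
For completeness: mysqlimport does not create the target table, it is only a command-line wrapper around LOAD DATA INFILE, so the table must already exist before step 1 and its name must match the file's base name. Below is a minimal sketch under assumed names: the file is data.txt (hence the table data, which plays the role of tablename above), the four fields are tab-separated text, and the second column field2 plays the role of uniqfield.

USE dbname;

-- Hypothetical layout for a tab-separated data.txt with four text fields.
CREATE TABLE data (
    field1 VARCHAR(255),
    field2 VARCHAR(255),  -- the field that must end up unique (uniqfield above)
    field3 VARCHAR(255),
    field4 VARCHAR(255)
);

Then, from the shell (--local reads the file from the client host; fields are tab-separated by default):

mysqlimport --local -u username -p dbname data.txt

After running the deduplication SQL, a quick sanity check is that the two counts below come out equal:

SELECT COUNT(*) AS total_rows, COUNT(DISTINCT field2) AS distinct_values FROM data;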
