Sorting and deduplication of large files in Linux
Removing duplicate lines
The basic usage is simple. Take a file named happybirthday.txt:
cat happybirthday.txt (display the file contents)
Happy Birthday to You!
Happy Birthday to You!
Happy Birthday Dear Tux!
Happy Birthday to You!
cat happybirthday.txt | sort (sort the lines)
Happy Birthday Dear Tux!
Happy Birthday to You!
Happy Birthday to You!
Happy Birthday to You!
cat happybirthday.txt | sort | uniq (remove duplicates)
Happy Birthday Dear Tux!
Happy Birthday to You!
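Incidentally, sort can also deduplicate on its own with the -u flag, which saves the extra uniq process:
sort -u happybirthday.txt (sort and deduplicate in one step)
Happy Birthday Dear Tux!
Happy Birthday to You!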
Removing duplicate lines from large files
However, with a large file (on the order of gigabytes), the commands above fail with an error saying there is not enough space. After some trial and error, I ended up using the split command to cut the big file into several smaller files, sorting each one separately, and then merging the results with sort -m before running uniq.
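As an aside, with GNU sort the "no space" error usually means the temporary directory (normally /tmp) has filled up with sort's intermediate files. Pointing sort at a scratch directory on a bigger partition with -T, and optionally capping its memory buffer with -S, can sometimes avoid the manual splitting entirely. The path below is only a placeholder:
sort -T /data/tmp -S 1G happybirthday.big | uniq (assumes GNU sort; /data/tmp stands in for any directory with enough free space)
Back to the split approach: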
split -b 200m happybirthday.big Prefix_
The -b option cuts happybirthday.big into 200 MB pieces whose names start with Prefix_. (Note: -b splits at byte boundaries and can cut a line in half; with GNU split, -C 200m keeps whole lines together, which is safer for line-based deduplication.) The resulting pieces look like this:
Prefix_aa
Prefix_ab
Then sort each piece:
sort Prefix_aa > Prefix_aa.sort
sort Prefix_ab > Prefix_ab.sort
Finally, merge the sorted pieces with sort -m and deduplicate with uniq:
cat Prefix_aa.sort Prefix_ab.sort | sort -m | uniq
This is a problem we ran into earlier, and that was the fix, if I remember correctly~
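Putting it all together, here is a minimal script version of the same steps (a sketch, assuming GNU coreutils; the file name and the 200m chunk size are just the values from the example above):
#!/bin/sh
# Sketch of the split / sort / merge / uniq steps above (assumes GNU coreutils).
split -C 200m happybirthday.big Prefix_     # -C keeps whole lines together (GNU split)
for f in Prefix_*; do
    sort "$f" > "$f.sort"                   # sort each piece on its own
done
sort -m Prefix_*.sort | uniq > happybirthday.dedup
rm -f Prefix_*                              # remove the intermediate pieces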
The sort and uniq commands have many other useful options, such as sort -m (merge already-sorted files), uniq -u (print only lines that are not repeated), and uniq -d (print only the repeated lines). In combination, sort and uniq are very powerful.
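For example, with the same happybirthday.txt:
sort happybirthday.txt | uniq -d (print one copy of each line that occurs more than once)
Happy Birthday to You!
sort happybirthday.txt | uniq -u (print only the lines that occur exactly once)
Happy Birthday Dear Tux!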