When processing text, you often want to remove duplicate lines. Here are three ways to do it.
First, with sort + uniq. Note that uniq alone is not enough, because uniq only merges adjacent duplicate lines, so the input must be sorted first.
sort -n test.txt | uniq
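A minimal sketch of why the sort matters (the file name test.txt comes from the command above; the sample contents are made up for illustration):

```shell
# Create a sample file with non-adjacent duplicates (hypothetical data).
printf '3\n1\n2\n1\n3\n' > test.txt

# uniq alone misses the non-adjacent duplicates:
uniq test.txt            # still prints 3 1 2 1 3

# Sorting first makes duplicates adjacent, so uniq can merge them:
sort -n test.txt | uniq  # prints 1 2 3

# Equivalent shorthand using sort's own -u flag:
sort -nu test.txt        # prints 1 2 3
```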
Second, with sort + awk. Note that a simple awk one-liner that only compares each line with the previous one does not work on its own either, for the same reason: non-adjacent duplicates are never compared.
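The exact awk command is not given above; a common sketch, using the same hypothetical sample data, is:

```shell
printf '3\n1\n2\n1\n3\n' > test.txt   # hypothetical sample data

# Sort first, then let awk print a line only when it differs from the previous one:
sort -n test.txt | awk '$0 != prev { print } { prev = $0 }'
# prints 1 2 3

# Variant: awk with an associative array deduplicates even unsorted input,
# keeping the original order of first occurrences:
awk '!seen[$0]++' test.txt
# prints 3 1 2
```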
[email protected] ~]$ uniq --help
Usage: uniq [OPTION]... [FILE]
Filter adjacent matching lines from the input file (or standard input), writing to the output file (or standard output). With no options, matching lines are merged to the first occurrence. Mandatory arguments to long options are mandatory for short options too.

-c, --count            // prefix each line with the number of occurrences
-d, --repeated         // only output duplicate lines, one per group
-D, --all-repeated     // output all duplicate lines, however many there are
-f, --skip-fields=N    // ignore the first N fields; -f 1 ignores the first field
-i, --ignore-case      // ignore case when comparing
-s, --skip-chars=N     // like -f, but counts characters: -s 5 ignores the first 5 characters
-u, --unique           // only output lines that are never repeated, somewhat like MySQL's DISTINCT
-z, --zero-terminated  // end lines with a NUL byte instead of a newline
-w, --check-chars=N    // compare no more than the first N characters of each line
    --help             // display this help and exit
    --version          // display version information and exit
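A few of the options above in action (the file name and its contents are hypothetical; the input is already sorted, as uniq requires):

```shell
printf 'apple\napple\nbanana\ncherry\ncherry\ncherry\n' > fruits.txt

uniq -c fruits.txt   # prefix each line with its count: 2 apple, 1 banana, 3 cherry
uniq -d fruits.txt   # only the duplicated lines: apple, cherry
uniq -u fruits.txt   # only the lines that appear exactly once: banana
```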