The uniq command checks for and removes repeated lines in a text file.
Syntax
uniq [-cdu] [-f <fields>] [-s <chars>] [-w <chars>] [--help] [--version] [input file] [output file]
Parameters:
- -c or --count prefixes each output line with the number of times it occurred in the input.
- -d or --repeated displays only the lines that are repeated (see the sketch after this list).
- -f <fields> or --skip-fields=<fields> skips the specified number of leading fields when comparing lines.
- -s <chars> or --skip-chars=<chars> skips the specified number of leading characters when comparing lines.
- -u or --unique displays only the lines that appear exactly once.
- -w <chars> or --check-chars=<chars> compares no more than the specified number of characters on each line.
- --help displays help information.
- --version displays version information.
- [input file] specifies the sorted text file to be processed.
- [output file] specifies the file that receives the result.
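The filtering options above are easiest to see on a sorted input file. A minimal sketch, assuming a hypothetical sorted file named names.txt that contains some duplicate lines:
$ uniq -d names.txt # print one copy of each line that occurs more than once
$ uniq -u names.txt # print only the lines that occur exactly once
$ uniq -w 4 names.txt # treat lines as duplicates if their first 4 characters match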
Example
In testfile, lines such as the 2nd, 5th, and 9th are identical to the lines immediately before them. To delete the duplicate lines with uniq, use the following command:
uniq testfile
The original content of testfile is:
$ cat testfile # original content
test 30
test 30
test 30
Hello 95
Hello 95
Hello 95
Hello 95
Linux 85
Linux 85
After the uniq command deletes the duplicate lines, the following output is displayed:
$ uniq testfile # delete duplicate lines
test 30
Hello 95
Linux 85
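Note that uniq only compares adjacent lines, which is why the input file is expected to be sorted. A common sketch, assuming a hypothetical unsorted file named words.txt, is to sort first and then pipe the result into uniq:
$ sort words.txt | uniq # sort so that duplicate lines become adjacent, then remove them
$ sort words.txt | uniq -c # the same, but also count how many times each line occurs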
To check the file, delete its repeated lines, and display at the beginning of each line the number of times it occurs, run the following command:
uniq -c testfile
The output is as follows:
$ uniq -c testfile # delete duplicate lines and count occurrences
3 test 30 # the leading number indicates that this line appears three times in total.
4 Hello 95 # the leading number indicates that this line appears four times.
2 Linux 85 # the leading number indicates that this line appears twice.
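For comparison, the -d and -u options from the parameter list behave as follows on the same testfile (expected output, given the file contents shown above):
$ uniq -d testfile # show one copy of each line that is repeated
test 30
Hello 95
Linux 85
$ uniq -u testfile # show only lines that appear exactly once; this file has none, so nothing is printed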