Basic shell programming [3] - common tools: awk, sed, sort, uniq, od
Awk is a very useful tool. It is widely used to format and process the results of Linux system analysis. It also works in pipelines: input | awk '...' | output.
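The pipeline form above can be sketched with a small, reproducible example (the sample data and the field number here are illustrative, not taken from the original post):

```shell
# Sum the second field of each input line with awk.
# printf supplies fixed sample data so the output is reproducible.
printf 'a 10\nb 20\nc 30\n' | awk '{ sum += $2 } END { print sum }'
# prints: 60
```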
Linux shell -- md5sum, sha1sum, sort, uniq. 1. File verification with md5sum: md5sum filename generates a 128-bit digest, printed as a 32-character hexadecimal string. To verify the correctness of files later, record the checksums now: md5sum file1 file2 > file_sum.md5; next time, run md5sum -c file_sum.md5 to confirm the files are unchanged.
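A minimal end-to-end sketch of that workflow, using two throwaway sample files (the file names and contents are made up for the demonstration):

```shell
# Record checksums for two sample files, then verify them later.
cd "$(mktemp -d)"                    # work in a scratch directory
printf 'hello\n' > file1
printf 'world\n' > file2
md5sum file1 file2 > file_sum.md5    # each line: <32-hex-digest>  <filename>
md5sum -c file_sum.md5               # prints "file1: OK" and "file2: OK"
```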
Given a sorted file, uniq deletes the duplicate lines and writes the result to standard output. uniq is typically used together with sort to remove duplicate lines from sorted output. Command format: uniq [OPTION]... [INPUT [OUTPUT]]
Both sort -u and uniq can remove duplicate lines, so what is the difference between them? Suppose a file Test contains the lines: Jason, Jason, Jason, Fffff, Jason. Running sort -u Test prints Fffff and Jason: sorting first brings all duplicates together, so every repeat is removed. Running uniq Test prints Jason, Fffff, Jason: uniq only collapses adjacent duplicates, so the final Jason survives.
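The comparison can be reproduced directly; the Test file below is rebuilt from the snippet's contents:

```shell
# Reproduce the Test file: three Jason lines, one Fffff, one more Jason.
cd "$(mktemp -d)"
printf 'Jason\nJason\nJason\nFffff\nJason\n' > Test
sort -u Test   # sorts first, so ALL duplicates collapse: Fffff, Jason
uniq Test      # only adjacent duplicates collapse: Jason, Fffff, Jason
```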
Sorting by the second character of the first field: if the field is only one character followed by a number, a normal sort orders by the letters first and then by the second character. uniq prints repeated lines only once; for example, the input lines 1, 1, 2, 2 become 1, 2.
Introduction: the uniq command is a text de-duplication tool. It can remove duplicates from standard input or a text file and write the result to stdout. The uniq command is often used together with the sort command, because uniq only collapses duplicate lines that are adjacent.
A few text-processing gadgets: tr, wc, cut, sort, uniq. 1. The tr command can replace, squeeze, and delete characters from standard input. It maps one set of characters onto another and is often used to write graceful one-line commands.
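The three tr modes just mentioned (replace, squeeze, delete), each as a one-liner on made-up sample input:

```shell
echo 'Hello World' | tr 'A-Z' 'a-z'   # replace: maps uppercase to lowercase
echo 'aaabbb'      | tr -s 'ab'       # squeeze: collapses runs of a and b to one each
echo 'abc123'      | tr -d '0-9'      # delete: strips all digits
```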
Function: report or filter out duplicate lines in a file, usually together with sort. Options: -c, --count: prefix each line with the number of times it occurs; -d, --repeated: show only the lines that appear more than once; -f, --skip-fields=N: skip the first N fields when comparing lines.
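The -c and -d options above, shown on a small sample where the duplicates are already adjacent:

```shell
# Sample input: aa x2, bb x1, cc x3 (duplicates already adjacent).
printf 'aa\naa\nbb\ncc\ncc\ncc\n' | uniq -c   # counts:  2 aa / 1 bb / 3 cc
printf 'aa\naa\nbb\ncc\ncc\ncc\n' | uniq -d   # only repeated lines: aa, cc
```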
Line sorting command sort. 1. sort command-line options: -t sets the delimiter between fields; -f folds case (ignores upper/lower case when sorting characters); -k sorts on a given field; -m merges already-sorted input files into a single sorted output.
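The -m (merge) option is easy to miss; a small sketch with two pre-sorted scratch files:

```shell
cd "$(mktemp -d)"
printf 'apple\ncherry\n' > s1     # already sorted
printf 'banana\ndate\n'  > s2     # already sorted
sort -m s1 s2   # merges without re-sorting: apple banana cherry date
```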
uniq command: used to report or filter out duplicate lines in a file, typically used together with the sort command. Syntax: uniq [options] [parameters]. Options: -c or --count: prefix each line with the number of times it occurs; -d or --repeated: print only the lines that are repeated.
* represents zero or more arbitrary characters: $ ls *txt prints 11.txt 1.txt 22.txt 2.txt aa.txt a.txt. ? represents exactly one arbitrary character: whether it is a digit or a letter, it matches any single character.
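The two wildcards can be compared side by side in a scratch directory (the file names below are illustrative):

```shell
cd "$(mktemp -d)"
touch 1.txt 11.txt a.txt aa.txt
ls *.txt    # * matches any run of characters: all four files
ls ?.txt    # ? matches exactly one character: only 1.txt and a.txt
```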
From: http://www.ibm.com/developerworks/cn/linux/l-tip-prompt/l-tiptex6/
Repeated lines usually do not cause problems, but sometimes they do. In that case, you don't have to spend an afternoon writing filters for them: the uniq command is all you need.
Linux diary: cut, grep, sort, wc & uniq
After several days, I finally had time to write a Linux blog. I have been feeling a lot lately, but I cannot give up my love for Linux, so I continued my study; I don't want to delay my progress.
1. uniq filters repetition (it can only handle duplicates that sit on immediately adjacent lines; non-adjacent duplicates are not removed). uniq -c 1.txt filters repeats and counts how many times each line occurs (-c means count). sort 1.txt | uniq -c sorts first, so all duplicates become adjacent and are counted correctly.
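The sort | uniq -c pipeline is the classic frequency counter; adding a final sort -rn ranks lines by how often they occur (the sample input here is made up):

```shell
# Count occurrences of each line, most frequent first.
printf 'b\na\nb\na\nb\n' | sort | uniq -c | sort -rn
# first output line:   3 b   (then: 2 a)
```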
wc (word count): wc [OPTION]... [FILE]... -l, --lines: print the number of lines; -w, --words: print the number of words; -c, --bytes: print the number of bytes; -L, --max-line-length: print the length of the longest line.
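The three most common counters on one fixed sample (two lines, three words, fourteen bytes):

```shell
printf 'one two\nthree\n' | wc -l   # 2 lines
printf 'one two\nthree\n' | wc -w   # 3 words
printf 'one two\nthree\n' | wc -c   # 14 bytes (including the two newlines)
```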
(1) Intersection and union of two files. Prerequisite: no duplicate lines within each file. 1. Union of the two files (duplicate lines kept only once): cat File1 File2 | sort | uniq > File3. 2. Intersection of the two files (keeping only the lines that appear in both): cat File1 File2 | sort | uniq -d > File3.
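Both set operations, run on two small scratch files (contents invented for the demonstration):

```shell
cd "$(mktemp -d)"
printf 'a\nb\nc\n' > File1
printf 'b\nc\nd\n' > File2
cat File1 File2 | sort | uniq > File3   # union: a b c d
cat File1 File2 | sort | uniq -d        # intersection: b c
```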
Shell special characters
* matches any number of characters
? matches exactly one character
# comment character
\ escape character (removes the special meaning of the next character)
| pipe character
$ variable prefix; !$ expands to the last argument of the previous command; inside a regular expression, $ means end of line
; lets multiple commands be written on one line
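A few of the characters from the list above in action (each command is a self-contained illustration):

```shell
echo \*                  # backslash strips the special meaning: prints a literal *
var=hello; echo "$var"   # $ introduces a variable: prints hello
echo one; echo two       # ; separates two commands on one line
echo 'abc' | grep 'c$'   # in a regex, $ anchors the match to the end of the line
```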
cut: extracting specific fields from text. NAME: cut - remove sections from each line of files. -d, --delimiter=DELIM: use DELIM instead of TAB as the field delimiter (the default delimiter is TAB, not space). -f, --fields=LIST: select only these fields.
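A sketch of -d and -f together, on sample lines shaped like /etc/passwd entries:

```shell
# Fields separated by ':'; take the first field, then fields 1 and 3.
printf 'root:x:0:0\nbin:x:1:1\n' | cut -d: -f1    # root, bin
printf 'root:x:0:0\n'            | cut -d: -f1,3  # root:0
```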
-n # sort numerically; add -r to reverse the order. -t ':' # -t specifies the field delimiter. -k # specifies which field (column) to sort on. The text is as follows:
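Those three options combined on invented ':'-delimited sample lines:

```shell
# Sort by the second ':'-separated field, numerically.
printf 'a:3\nb:1\nc:2\n' | sort -t: -k2 -n    # b:1  c:2  a:3
printf 'a:3\nb:1\nc:2\n' | sort -t: -k2 -rn   # reversed: a:3  c:2  b:1
```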
Text processing: cat, more, less, head, tail, sort, uniq, grep, cut, join, sed, awk. cat: concatenate files and print on the standard output (displays the contents of a file). -E: show a $ at the end of each line.