Talk about data flow redirection and pipeline commands under Linux

Source: Internet
Author: User
Tags: sort, stdin

I. Overview of standard input, standard output, and standard error

1. Standard input (stdin) is the data fed to a command; its file descriptor is 0, it is redirected with < or <<, and it defaults to the keyboard.

2. Standard output (stdout) is the result a command returns on success; its file descriptor is 1, it is redirected with > or >>, and it defaults to the screen.

3. Standard error (stderr) is the error message a command returns on failure; its file descriptor is 2, it is redirected with 2> or 2>>, and it defaults to the screen.
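A minimal illustration of the three streams (the /tmp paths are arbitrary scratch files chosen for this example):

```shell
# "<" feeds a file to a command's stdin; "2>" captures stderr (fd 2).
printf 'hello\n' > /tmp/demo_in.txt      # make a sample input file
tr 'a-z' 'A-Z' < /tmp/demo_in.txt        # stdin comes from the file; prints HELLO
ls /no/such/dir 2> /tmp/demo_err.txt     # the error message lands in the file, not on screen
cat /tmp/demo_err.txt                    # show the captured error message
```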

II. Use of data flow redirection

1. "<": specifies the source of the input data (tr 'a-z' 'A-Z' < filename converts the lowercase letters in the specified file to uppercase and prints the result to the screen)

2. ">", "1>": overwrite the specified file with the correct output (stdout)

3. ">>", "1>>": append the correct output to the specified file

4. "2>": overwrite the specified file with the error output (stderr)

5. "2>>": append the error output to the specified file

6. "&>": overwrite the specified file with both the correct output and the error output

7. "&>>": append both the correct output and the error output to the specified file
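A quick sketch of overwrite versus append (note that "&>" and "&>>" are bash extensions; a POSIX shell would use "> file 2>&1" instead; the /tmp file names are arbitrary):

```shell
echo first  >  /tmp/demo_out.txt      # ">" creates or overwrites the file
echo second >> /tmp/demo_out.txt      # ">>" appends to it
cat /tmp/demo_out.txt                 # shows both lines, in order
ls /no/such/dir &> /tmp/demo_all.txt  # "&>" catches stdout and stderr together (bash)
```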

8. Executing multiple commands

cmd1 && cmd2 (cmd2 runs only if cmd1 succeeds)

cmd1 || cmd2 (cmd2 runs only if cmd1 fails)
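The two operators can be demonstrated with the always-succeeding true and always-failing false commands:

```shell
true  && echo "ran: left side succeeded"   # && runs the right side on success
false && echo "never printed"              # skipped: the left side failed
false || echo "ran: left side failed"      # || runs the right side on failure
```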

III. Overview of pipeline commands

1. A pipeline command filters the output of another command, keeping only the information we need. For example, the /etc directory contains a large number of files; if it is hard to find the one you need with ls alone, you can pipe the output of ls through a filter so that only the relevant entries remain.

2. Difference between pipes and data flow redirection:

The word "pipe" is quite vivid: the original data flows through the pipe, which filters out the unwanted part and keeps only the information the user cares about.

Data flow redirection controls where the data goes: by default it is displayed on the screen, but we can direct it into a file instead.

3. Pipeline commands are connected with the pipe symbol "|".

4. A pipeline command must be able to receive standard input (stdin), e.g. tail/more/grep.

5. A pipeline command receives the standard output of the preceding command as its stdin and processes it.
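The /etc example above can be sketched in one line (the exact matches depend on the system's /etc contents):

```shell
# The stdout of ls becomes the stdin of grep;
# only entries whose name contains "host" survive the filter.
ls /etc | grep host
```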

IV. Use of pipeline commands

1. cut: for cutting; it splits each line of data into columns by a specified delimiter, then displays only the data of a particular column.

    cut -d 'delimiter' -f n    splits the data on the delimiter and displays only the nth column.

    cut -c start-end    selects a specific character range (indices in cut start at 1).
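Both forms in a short example (the sample strings are made up for illustration):

```shell
echo 'root:x:0:0' | cut -d ':' -f 1   # split on ":" and keep field 1: root
echo 'abcdef'     | cut -c 2-4        # keep characters 2 through 4: bcd
```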

2. grep: keyword search

    grep [-cinv] [--color=auto] 'keyword' filename

-c: count the number of lines containing the keyword

-i: ignore case when matching the keyword

-n: output line numbers

-v: invert the selection, i.e. find lines that do not contain the keyword

--color=auto: highlight the keyword

command | grep [options] 'keyword' uses a pipe to feed the output of the previous command to grep, which keeps only the matching lines.
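A few of the flags combined, on made-up sample input:

```shell
# -c and -i together: count matching lines, ignoring case (prints 2)
printf 'Alpha\nbeta\nALPHA\n' | grep -ci 'alpha'
# -v and -n together: number the lines that do NOT contain "beta" (prints 1:Alpha)
printf 'Alpha\nbeta\n' | grep -vn 'beta'
```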

3. sort: sorting

    sort [options] file

-t: specify the delimiter

-k: sort by the nth field after splitting on the delimiter

-f: ignore case in the selected field when sorting

-b: ignore leading blanks in the selected field

-M: sort the selected field by month name (provided the field is a month)

-n: sort the selected field numerically (provided the field is a number)

-r: reverse the sort

-u: deduplicate; if the selected field repeats, keep only one copy

command | sort [options] uses the pipe to sort the output of the previous command by the specified fields.
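Delimiter, field, numeric, and reverse sorting on made-up input:

```shell
# split on ":" and sort by the 2nd field: 2:a, 3:b, 1:c
printf '3:b\n1:c\n2:a\n' | sort -t ':' -k 2
# -n sorts numerically (1, 2, 10); without it "10" would sort before "2"
printf '10\n2\n1\n' | sort -n
# -r reverses the order (10, 2, 1)
printf '10\n2\n1\n' | sort -nr
```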

4. uniq: usually used in a pipeline; it removes duplicate lines from the output of the previous command (note that it only collapses adjacent duplicates, so the input is normally sorted first).

    uniq [options]    -i: ignore case    -c: count how many times each line repeats
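The adjacency caveat in action, with made-up input:

```shell
# uniq alone only merges neighbouring duplicates: a, b, a (3 lines remain)
printf 'a\na\nb\na\n' | uniq
# sort first to group all duplicates, then -c counts them: 3 a / 1 b
printf 'a\na\nb\na\n' | sort | uniq -c
```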

5. wc: count words, lines, and characters

    wc [options] filename

-l: list the number of lines

-w: list the number of words

-c: list the number of bytes (characters)
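The three counters on a small made-up sample:

```shell
printf 'one two\nthree\n' | wc -l   # 2 lines
printf 'one two\nthree\n' | wc -w   # 3 words
printf 'abc\n' | wc -c              # 4 bytes (the trailing newline counts)
```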

6. tee: writes the output of a command to a specified file and to the screen at the same time. It reads from stdin, so it is normally used after a pipe.

    tee [-a] file    -a: append to the file instead of overwriting it.
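A minimal sketch (the /tmp file name is arbitrary):

```shell
echo hello | tee /tmp/tee_demo.txt     # prints hello AND writes it to the file
echo again | tee -a /tmp/tee_demo.txt  # -a appends rather than overwriting
cat /tmp/tee_demo.txt                  # the file now holds both lines
```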

7. join: join two files

This command has nothing to do with pipelines. It is the equivalent of a database join: rows whose join fields are equal are connected. Here, lines in the two files whose selected fields match are merged into one line. (Both files must be sorted on the join field.)

    join [options] file1 file2

-t: the field delimiter of the two files

-1: the join field of the first file

-2: the join field of the second file

-i: ignore case in the selected fields
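A small worked example with two made-up, pre-sorted files sharing an ID in field 1:

```shell
printf '1 alice\n2 bob\n'  > /tmp/join_a.txt   # id name
printf '1 admin\n2 user\n' > /tmp/join_b.txt   # id role
# join on field 1 of each file: "1 alice admin" and "2 bob user"
join -1 1 -2 1 /tmp/join_a.txt /tmp/join_b.txt
```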

8. split: cuts a large file into several small files.

    split [options] large-file prefix

-b: specify the size of each small file, with a unit attached: b, k, or m

-l: specify the number of lines in each small file

The large file is cut into several small files named prefix + aa, prefix + ab, prefix + ac, and so on.
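Splitting by line count, with a made-up 10-line file (paths and prefix are arbitrary):

```shell
seq 1 10 > /tmp/big.txt                # a 10-line sample file
split -l 3 /tmp/big.txt /tmp/part_     # produces part_aa..part_ad (3+3+3+1 lines)
wc -l /tmp/part_aa                     # the first piece has 3 lines
```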
