Operating principle and implementation of pipes under Linux


Pipes (pipe)

  1. Operating principle of the pipe

    A pipe is the most basic IPC mechanism; it is created with the pipe function:

    #include <unistd.h>

    int pipe(int filedes[2]);

    Calling pipe asks the kernel to open a buffer for communication. The buffer has a read end and a write end, which are handed back to the program through the filedes parameter as two file descriptors: filedes[0] refers to the read end of the pipe and filedes[1] to the write end. The pipe behaves like an open file: read(filedes[0], ...) and write(filedes[1], ...) appear to read and write that file, but actually read and write the kernel buffer. pipe returns 0 on success and -1 on failure. The steps for communication are as follows:

    <1> The parent process creates the pipe

    The parent process calls pipe to open a pipe and obtains two file descriptors pointing at its two ends.

    <2> The parent process forks a child process

    When the parent process calls fork to create a child process, the child also ends up with two file descriptors pointing at the same pipe.

    <3> The parent process closes fd[0] and the child process closes fd[1]

    The parent process closes the read end of the pipe and the child process closes the write end. The parent can then write into the pipe and the child can read from it. The pipe is implemented as a ring (circular) buffer: data flows in at the write end and out at the read end, and inter-process communication is achieved.

    Notice that under Linux a process is created by forking from its parent and then, if needed, calling execve to replace the current program with the one to be executed; the function a process runs is not specified at creation time. This creation model is a great convenience for implementing pipes, because the implementation relies precisely on the child process inheriting the parent's file descriptors.
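
    A minimal sketch of the three steps above, assuming a parent that writes one short message and a child that reads it (the message text and buffer size are arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) {               /* <1> the parent creates the pipe */
            perror("pipe");
            exit(1);
        }

        pid_t pid = fork();                 /* <2> the child inherits fd[0] and fd[1] */
        if (pid == -1) {
            perror("fork");
            exit(1);
        }

        if (pid == 0) {                     /* child: reads from the pipe */
            close(fd[1]);                   /* <3> the child closes the write end */
            char buf[64];
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child read: %s\n", buf);
            }
            close(fd[0]);
            exit(0);
        }

        close(fd[0]);                       /* <3> the parent closes the read end */
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));     /* data goes into the kernel buffer */
        close(fd[1]);
        wait(NULL);                         /* wait for the child to finish */
        return 0;
    }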

  2. Special cases when a pipe is in use

    <1> All file descriptors referring to the write end of the pipe are closed (the reference count of the write end drops to 0) while some process is still reading from the read end. Once the remaining data in the pipe has been read, a further read returns 0, just like reaching the end of a file.

    <2> File descriptors referring to the write end remain open (the reference count of the write end is greater than 0), but the processes holding the write end are not writing. When a reading process has drained the remaining data in the pipe, the next read blocks and only returns once data becomes readable in the pipe again.

    <3> All file descriptors referring to the read end of the pipe are closed (the reference count of the read end drops to 0) and a process writes to the write end. That process receives the signal SIGPIPE, which usually causes it to terminate abnormally.

    <4> File descriptors referring to the read end remain open (the reference count of the read end is greater than 0), but the processes holding the read end are not reading. When a writing process has filled the pipe, the next write blocks until free space appears in the pipe, at which point the data is written and the call returns.
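
    Case <1> can be observed with a sketch like the following (the message and buffer size are arbitrary); closing the read end on both sides and then writing would instead trigger case <3> and deliver SIGPIPE:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) {
            perror("pipe");
            exit(1);
        }

        pid_t pid = fork();
        if (pid == 0) {                     /* child: the only reader */
            close(fd[1]);                   /* leave no writer open in the child */
            char buf[16];
            ssize_t n;
            while ((n = read(fd[0], buf, sizeof(buf))) > 0)
                printf("child read %zd bytes\n", n);
            printf("read returned %zd: all writers closed, end of data\n", n);
            close(fd[0]);
            exit(0);
        }

        close(fd[0]);
        write(fd[1], "some data", 9);
        close(fd[1]);                       /* write-end reference count drops to 0 */
        wait(NULL);
        return 0;
    }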

  3. The underlying implementation

    How much data can be written on my platform before the pipe is full?
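
    One way to check, sketched below, is to make the write end non-blocking and count how many bytes fit before write fails with EAGAIN (on Linux, fcntl with F_GETPIPE_SZ would report the buffer size directly). On many Linux systems the default pipe capacity is 65536 bytes, i.e. 64 KiB:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) {
            perror("pipe");
            exit(1);
        }

        /* make the write end non-blocking so a full pipe yields -1/EAGAIN
           instead of blocking */
        int flags = fcntl(fd[1], F_GETFL);
        fcntl(fd[1], F_SETFL, flags | O_NONBLOCK);

        long total = 0;
        char byte = 'x';
        for (;;) {
            ssize_t n = write(fd[1], &byte, 1);
            if (n == -1) {
                if (errno == EAGAIN)
                    break;                  /* the pipe is full */
                perror("write");
                exit(1);
            }
            total += n;
        }
        printf("pipe capacity on this platform: %ld bytes\n", total);

        close(fd[0]);
        close(fd[1]);
        return 0;
    }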

This article is from the "Chase" blog; please contact the author before reproducing it.
