Linux interprocess communication: pipes and named pipes

Source: Internet
Author: User

Pipes and named pipes

In this series the author outlines several important means of communication between Linux processes. Among them, pipes and named pipes are among the earliest interprocess communication mechanisms. Pipes can be used for communication between related processes, while a named pipe overcomes the pipe's lack of a name, so that in addition to everything a pipe offers, it also allows communication between unrelated processes. Understanding the read and write rules of pipes and named pipes is the key to applying them in programs. Building on a detailed discussion of their communication mechanisms, this article verifies those rules with examples, which should strengthen the reader's intuition for the rules, and also provides application examples.

1. Pipe overview and related API usage

1.1 Key pipe concepts

The pipe is one of the original UNIX IPC mechanisms supported by Linux and has the following features:

A pipe is half-duplex: data flows in only one direction, and two pipes must be created for two-way communication;

It can only be used between a parent and child process or between sibling processes (i.e., related processes);

A separate file system: to the processes at either end, the pipe is a file, but not an ordinary one; it does not belong to any on-disk file system, constitutes a file system of its own, and exists only in memory.

Reading and writing data: what one process writes into the pipe is read by the process at the other end. Written data is appended to the end of the pipe buffer, and reads consume data from the head of the buffer.

1.2 Creating a pipe:

#include <unistd.h>

int pipe(int fd[2]);

Both ends of a pipe created by this function live inside a single process, which is of little use by itself. Therefore a process typically calls fork() to create a child after creating the pipe with pipe(), and parent and child then communicate through the pipe. (It is easy to generalize: any two related processes, where "related" means sharing a common ancestor, can communicate this way.)

1.3 Read and write rules for pipes:

The two ends of the pipe are described by the file descriptors fd[0] and fd[1], and note that the roles of the two ends are fixed: one end can only be used for reading, is represented by descriptor fd[0], and is called the pipe's read end; the other can only be used for writing, is represented by descriptor fd[1], and is called the pipe's write end. Attempting to read from the write end, or to write to the read end, causes an error. The general file I/O functions, such as close, read, and write, can all be used with pipes.

To read data from a pipeline:

If the write end of the pipe no longer exists, the data is considered fully read, and read returns 0 bytes;

When the write end of the pipe exists: if the requested number of bytes is greater than PIPE_BUF, read returns the number of data bytes currently in the pipe; if the requested number of bytes is not greater than PIPE_BUF, read returns either the number of data bytes currently in the pipe (when the pipe holds less data than requested) or the requested number of bytes (when the pipe holds at least as much data as requested). Note: PIPE_BUF is defined in include/linux/limits.h and may differ between kernel versions. POSIX.1 requires PIPE_BUF to be at least 512 bytes; on Red Hat 7.2 it is 4096.

About read rule validation for pipelines:

/* readtest.c */
#include <unistd.h>
#include <stdio.h>

To write data to the pipeline:

When writing data to a pipe, Linux does not guarantee the atomicity of the write: as soon as the pipe buffer has free space, the writing process tries to write data into the pipe. If the reading process does not consume the data in the buffer, the write operation remains blocked.

Note: writing data to a pipe makes sense only while the pipe's read end exists. Otherwise, the process writing to the pipe receives the SIGPIPE signal from the kernel, which the application can handle or ignore (the default action is that the application terminates).

Verification of pipe write rule 1: the write end's dependence on the read end

#include <unistd.h>
#include <stdio.h>

int main(void) { int pipe_fd[2]; pid_t pid; /* ... */ }

The output is: Broken pipe, because the pipe's read end, in the parent and in all of its fork() descendants, has been closed. If you instead keep the read end open in the parent process, i.e. close the parent's read end only after the write finishes, then writing to the pipe succeeds; the reader can verify this conclusion. Therefore, when writing to a pipe, at least one process must still hold the pipe's read end open, or the error above occurs (the pipe breaks, the process receives SIGPIPE, and the default action is that the process terminates).

Verification of pipe write rule 2: Linux does not guarantee atomicity when writing to a pipe

#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) { /* ... */ }

Conclusion:

Writes of no more than 4096 bytes are atomic.

If you change the two writes in the parent process to 5000 bytes each, it is easy to draw the following conclusion:

When the amount of data written to the pipe is greater than 4096 bytes, the buffer's free space is filled with data as it becomes available, until all of the data has been written; if no process reads the data, the write blocks.

1.4 Pipe application examples:

Example one: used by the shell

Pipes can be used for input/output redirection in the shell, directing the output of one command to the input of another. For example, when you type who | wc -l in a shell program (Bourne shell, C shell, etc.), the shell creates the who and wc processes and the pipe between them. Consider the following command lines:

$ kill -l    (the full output is given in Appendix 1)

$ kill -l | grep SIGRTMIN produces output like:

30) SIGPWR      31) SIGSYS      32) SIGRTMIN    33) SIGRTMIN+1  34) SIGRTMIN+2  35) SIGRTMIN+3 ...

Example two: communication between related processes

The following example shows a concrete use of a pipe: the parent process sends commands to the child process, and the child parses each command and handles it accordingly.

#include <unistd.h>
#include <stdio.h>

int main(void) { int pipe_fd[2]; pid_t pid; /* ... */ }

1.5 Limitations of pipes

The main limitations of pipes follow directly from their characteristics:

Only one-way data flow is supported;

They can only be used between related processes;

They have no name;

The pipe buffer is limited (the pipe lives in memory, and one page is allocated for the buffer when the pipe is created);

A pipe carries an unformatted byte stream, which requires the reader and writer to agree on the data format in advance, e.g. how many bytes make up a message (or command, or record), and so on.

2. Named pipe overview and related API usage

2.1 Key concepts of named pipes

A major limitation of pipes is that they have no name, so they can be used only between related processes; this is overcome by named pipes (also called FIFOs). A FIFO differs from a pipe in that it provides an associated path name and exists in the file system as a FIFO file. Thus even processes unrelated to the FIFO's creator can communicate through it, as long as they can access the path, so unrelated processes can also exchange data this way. It is worth noting that a FIFO strictly follows first-in, first-out order: reads from pipes and FIFOs always return data from the beginning, and writes append data to the end. Accordingly, they do not support file positioning operations such as lseek().

2.2 Creating a named pipe

#include <sys/types.h>
#include <sys/stat.h>

int mkfifo(const char *pathname, mode_t mode);

The first parameter is an ordinary path name, which becomes the name of the created FIFO. The second argument has the same meaning as the mode argument of open() for a regular file. If the path passed to mkfifo already exists, it returns an EEXIST error, so typical calling code first checks for that error, and if it occurs simply goes on to call the function that opens the FIFO. The general file I/O functions, such as close, read, and write, can all be used with a FIFO.

2.3 Open rules for named pipes

Compared with a pipe, a FIFO involves one extra operation: it must be explicitly opened with open().

Open rules for a FIFO:

If the current open is for reading: when some process already has the FIFO open for writing, the open returns success immediately; otherwise it blocks until some process opens the FIFO for writing (if the current open has the blocking flag set), or returns success immediately (if the current open does not have the blocking flag set).

If the current open is for writing: when some process already has the FIFO open for reading, the open returns success immediately; otherwise it blocks until some process opens the FIFO for reading (if the current open has the blocking flag set), or returns an ENXIO error (if the current open does not have the blocking flag set).

Verification of the open rules is given in Appendix 2.

2.4 Read and write rules for named pipes

Reading data from a FIFO:

Convention: if a process opened a FIFO for reading in blocking mode, the read operations in that process are said to have the blocking flag set.

If some process has the FIFO open for writing but the FIFO currently holds no data, then a read with the blocking flag set will block; a read without the blocking flag returns -1 with errno set to EAGAIN, indicating it should be tried again later.

For reads with the blocking flag set, there are two possible causes of blocking: the FIFO contains data but other processes are reading it, or the FIFO contains no data. In either case, the block is released as soon as new data is written into the FIFO, regardless of how much data is written and regardless of how much data the read requested.

The read-open blocking flag takes effect only for the first read operation of the process. If the process performs a sequence of reads, then once the first read has been woken up and completed, subsequent reads no longer block, even when the FIFO holds no data at the time of the read (in that case, the read returns 0).

If no process has the FIFO open for writing, a read with the blocking flag set will block.

Note: if the FIFO contains data, a read with the blocking flag set will not block merely because the FIFO holds fewer bytes than requested; in that case the read returns all the data available in the FIFO.

Writing data to a FIFO:

Convention: if a process opened a FIFO for writing in blocking mode, the write operations in that process are said to have the blocking flag set.

For write operations with the blocking flag set:

When the amount of data to be written is not greater than PIPE_BUF, Linux guarantees the atomicity of the write. If the FIFO's free buffer space cannot hold the number of bytes to be written, the process sleeps until the buffer can hold them all, and then performs the write in one operation.

When the amount of data to be written is greater than PIPE_BUF, Linux no longer guarantees atomicity. Whenever the FIFO buffer has free space, the writing process attempts to write data into it, and the write returns after all of the requested data has been written.

For write operations without the blocking flag set:

When the amount of data to be written is greater than PIPE_BUF, Linux no longer guarantees atomicity. The write returns after all of the FIFO's free buffer space has been filled.

When the amount of data to be written is not greater than PIPE_BUF, Linux guarantees atomicity. If the FIFO's current free buffer space can hold the requested bytes, the write returns success after writing them; if it cannot, the write returns an EAGAIN error.

Verification of the FIFO read and write rules:

Two FIFO read/write programs are provided below; by adjusting a few spots in the programs, or their command-line arguments, each of the FIFO read and write rules can be verified.

Program 1: Write FIFO program

/* write_fifo.c */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char *argv[]) { /* ... */ }

Usage notes for the programs:

Compile the read program into two versions:

a blocking-read version: br

and a non-blocking-read version: nbr

Compile the write program into four versions:

non-blocking, requesting more than PIPE_BUF bytes per write: nbwg

non-blocking, requesting no more than PIPE_BUF bytes per write: nbw

blocking, requesting more than PIPE_BUF bytes per write: bwg

blocking, requesting no more than PIPE_BUF bytes per write: bw

Below, br and nbr stand for runs of the blocking and non-blocking read programs, and bw, bwg, nbw, nbwg for runs of the corresponding write programs.

To verify blocking write operations:

When the amount of data requested per write is greater than PIPE_BUF:

nbr 1000

bwg

To verify atomicity when the amount requested per write is not greater than PIPE_BUF:

nbr 1000

bw

To verify non-blocking write operations:

When the amount of data requested per write is greater than PIPE_BUF:

nbr 1000

nbwg

To verify atomicity when the amount requested per write is not greater than PIPE_BUF:

nbr 1000

nbw

Regardless of whether the write-open blocking flag is set, atomicity is not guaranteed when more than 4096 bytes are requested per write. But there is an essential difference between the two cases:

For a blocking write, the operation waits, filling the FIFO as space becomes free, until all the data has been written; the requested data is eventually written in full.

A non-blocking write instead returns after filling the FIFO's free space (returning the number of bytes actually written), so some of the data may never be written at all.

Verification of the read rules is simpler and is not discussed further.

2.5 Named pipe application examples

After verifying the read and write rules above, a separate application example hardly seems necessary.

Summary:

Pipes are used in two areas: (1) in the shell, for input/output redirection, where the pipe is created transparently for the user; (2) for communication between related processes, where the user creates the pipe and performs the reads and writes.

A FIFO can be viewed as an extension of the pipe: it overcomes the pipe's lack of a name, so that unrelated processes can also communicate through the same first-in, first-out mechanism.

Pipe and FIFO data are byte streams, so applications must agree on a specific transfer "protocol" in advance in order to exchange messages with specific meanings.

Understanding the read and write rules of pipes and FIFOs is the key to applying them flexibly.

Appendix 1: the output of kill -l, which lists all the signals supported by the current system:

1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP      6) SIGABRT ...

Besides being used here to illustrate pipe applications, these signals will be discussed by category in later topics in this series.

Appendix 2: verification of the FIFO open rules (mainly the write-open's dependence on a read-open)
