Classification of Linux inter-process communication and the implementation principles of pipes


Reprinted: http://blog.csdn.net/sunmenggmail/article/details/7888746

 

A large application system usually requires many cooperating processes, so the importance of inter-process communication (see Linux Process Concept, Part 1) is obvious. This series of articles describes the main means of inter-process communication in the Linux environment and provides detailed examples for the key technical points of each. To clarify matters, the articles also analyze the internal implementation mechanisms of some of these communication methods.

Overview

The process communication methods in Linux are basically inherited from the UNIX platform. Bell Labs and BSD (the Berkeley Software Distribution, from the University of California, Berkeley), both of which contributed significantly to the development of UNIX, placed different emphases on inter-process communication. The former improved and extended the early UNIX inter-process communication methods to form "System V IPC", with communication limited to a single computer; the latter skipped this restriction and formed a socket-based inter-process communication mechanism. Linux inherits both, as shown in Figure 1:



Among them, the original UNIX IPC includes pipes, FIFOs, and signals; System V IPC includes System V message queues, System V semaphores, and System V shared memory; POSIX IPC includes POSIX message queues, POSIX semaphores, and POSIX shared memory. Two points deserve explanation: 1) because of the diversity of UNIX versions, the Institute of Electrical and Electronics Engineers (IEEE) developed an independent UNIX standard; this new ANSI UNIX standard is called the Portable Operating System Interface (POSIX). Most existing and popular UNIX versions follow the POSIX standard, and Linux has followed it from the very beginning; 2) BSD's contribution does not lie in inter-process communication within a single machine (although the socket itself can be used for single-machine inter-process communication). In fact, many single-machine UNIX IPC mechanisms bear traces of BSD, such as the anonymous memory mapping supported by 4.4BSD and the reliable signal semantics implemented by 4.3+BSD.

Figure 1 shows the various IPC methods supported by Linux. To avoid conceptual confusion, the discussion that follows mentions as few UNIX versions as possible; all issues ultimately come down to inter-process communication in the Linux environment. In addition, where Linux supports different implementation versions of a communication method (for example, POSIX shared memory and System V shared memory), this series mainly introduces the POSIX API.

Introduction to several main methods for inter-process communication in Linux:

  1. Pipe and named pipe: pipes can be used for communication between related (kinship) processes. Named pipes overcome the pipe's lack of a name; in addition to providing the functionality of pipes, they also allow communication between unrelated processes.
  2. Signal: a signal is a relatively complex communication method used to notify the receiving process that an event has occurred. Besides inter-process communication, a process can also send a signal to itself. In addition to the early UNIX signal semantics of the signal function, Linux supports the sigaction function, whose semantics comply with the POSIX.1 standard (sigaction is in fact based on BSD: to achieve a reliable signal mechanism and unify the external interface, BSD reimplemented the signal function on top of sigaction).
  3. Message queue: a linked list of messages, including POSIX message queues and System V message queues. A process with sufficient permissions can add messages to a queue, and a process with read permission can read messages from it. Message queues overcome the limitations that signals carry little information and that pipes can only carry unformatted byte streams with a limited buffer size.
  4. Shared memory: allows multiple processes to access the same region of memory. It is the fastest form of IPC and was designed precisely because other communication mechanisms run less efficiently. It is often used together with other mechanisms, such as semaphores, to achieve synchronization and mutual exclusion between processes.
  5. Semaphore: mainly used for synchronization between processes, and between different threads of the same process.
  6. Socket: a more general inter-process communication mechanism that can also be used between processes on different machines. It was originally developed by the BSD branch of the UNIX family, but it can now be ported to other UNIX-like systems: both Linux and the System V variants support sockets.

 

Introduction: This article mainly introduces the basic concepts and usage of pipes, analyzes the storage, access, and implementation of the circular buffer, examines the problems that concurrent access may cause, and provides solutions. It then analyzes the read and write functions of the pipe in Linux kernel 2.6.29.

1. Pipeline (PIPE)

Pipes are one of the main means of inter-process communication. A pipe is actually a file that exists only in memory. Operations on this file are performed through two opened file descriptors, which represent the two ends of the pipe respectively. A pipe is a special file: it does not belong to a particular file system but constitutes an independent one with its own data structure. According to their scope of use, pipes can be classified into unnamed pipes and named pipes.

● Unnamed pipe

It is mainly used between a parent and child process, or between two sibling processes. In Linux, a one-way communication pipe can be established through a system call, and this relationship can only be established by the parent process. Each pipe is therefore unidirectional; when bidirectional communication is required, two pipes must be established. The processes at both ends treat the pipe as a file: one process writes content into the pipe, and the other reads content from it. The transmission follows the "first in, first out" (FIFO) rule.

● Named Pipe

The named pipe was designed to overcome the defect that an unnamed pipe can only be used between related processes. A named pipe is a file that has its own name on the actual disk medium or file system (rather than existing only in memory), so any process can access it at any time by file name or path name. To implement named pipes, a new file type was introduced: the FIFO file (which follows the first-in, first-out principle). Implementing a named pipe is in fact implementing a FIFO file. Once a named pipe is created, its read, write, and close operations are identical to those of an ordinary pipe. Although the inode of the FIFO file resides on disk, it is only a node; the file data still lives in memory buffer pages, the same as for an ordinary pipe.

2. Circular Buffer

Each pipe has only one page as its buffer, and this page is used as a circular buffer. The access pattern is a typical "producer-consumer" model. When a "producer" process has a large amount of data to write and the page fills up, it must sleep and wait for the "consumer" to read some data from the pipe and make room for it. Correspondingly, if there is no readable data in the pipe, the "consumer" process must sleep and wait, as shown in Figure 1.

Figure 1 producer-consumer relationship diagram

2.1 implementation principle of circular buffer

The ring buffer is a common and important data structure in embedded systems. It is usually stored as an array, that is, a contiguous linear region of memory that can be allocated once at initialization. To simulate a ring, the beginning and end of the array are logically connected; only the last element needs special handling, so that accessing the element after the tail returns to the head. Wrapping from the tail back to the head requires only the buffer length (assuming maxlen is the length of the ring buffer, when the read pointer moves past the tail element, executing read = read % maxlen brings it back to the head).

Figure 2 circular buffer Diagram

2.2 read/write operations

The circular buffer must maintain two indexes, write and read. When writing data, first ensure that the buffer is not full, then write the data, and finally advance the write pointer to the next element. When reading data, first ensure that the buffer is not empty, then return the element at the read pointer, and finally advance the read pointer to the next element.

2.3 judge "full" and "empty"

When read and write point to the same position, the ring buffer is either empty or full. To distinguish the two cases, one convention is: when read and write coincide, the ring is empty; when write has advanced so far ahead of read that only one element's gap remains between them, the ring is full (one slot is deliberately left unused). Figure 3 shows the schematic of the ring buffer.

Figure 3 Implementation principle of the ring buffer

3 concurrent access

Considering that tasks may access the ring buffer under different conditions in different environments, the concurrent access situations need to be analyzed.

In the simplest environment, only one read task and one write task exist, and it is only necessary to ensure that the write task can write its data smoothly and the read task can read the data in time. If competition arises, the following situations may occur:

Case 1: Suppose the write task is interrupted while executing "advance the write pointer to the next writable empty position", as shown in Figure 4, leaving the write pointer at an invalid position. When the system then schedules the read task, if the read task needs to read multiple items, it will not only read the intended data but, when the read pointer wraps around to 0, also re-read data that it has already consumed.

Figure 4 Invalid write pointer

Case 2: Suppose the read task is interrupted while executing "advance the read pointer", as shown in Figure 5, leaving the read pointer at an invalid position. When the system then schedules the write task, if the write task needs to write more than one item, the buffer should be considered full when the write pointer reaches the end and writing should stop; however, because the read pointer is at an invalid position, the write task judges the buffer to be empty and continues writing before the read task runs again, overwriting data that has not yet been read.

Figure 5 invalid read pointer

To avoid the above errors, the operations on the read/write pointers must be atomic: a pointer value is either not modified at all or modified correctly. Semaphores can be introduced to protect the critical sections effectively and avoid these problems. In simple environments, appropriate measures can also be taken to avoid semaphores altogether, improving execution efficiency.

4. Pipe read/write implementation in Linux Kernel

The Linux kernel uses the struct pipe_inode_info structure to describe a pipe.

When the pipe is empty or full, processes sleep on a wait queue, which is protected by a spin lock.

The buffer of the pipe is described by the struct pipe_buffer data structure.

This article focuses on how the pipe implementation operates on the circular buffer, with the aim of learning its mutually exclusive access method. Therefore, the pipe_read and pipe_write functions are analyzed.

● pipe_read (fs/pipe.c)

To access the pipe's inode, the corresponding mutex lock must first be obtained to prevent concurrent access.

The data is read inside an infinite loop; the code in the entire for loop belongs to the critical section and is protected by the mutex lock.

The loop exits in the following situations:

▲ the data has been read completely;

▲ the pipe has no writer process;

▲ the O_NONBLOCK flag is set for the file.

Line 325 reads data from the buffer and then adjusts the pointer position within the buffer.

Line 348 sets the do_wakeup flag to 1, indicating that empty space is now available in the buffer for writing; writer processes sleeping on the wait queue can therefore be awakened.

If the loop has neither exited nor read data successfully, the reader process actively calls the pipe_wait function to sleep, waiting until a writer process writes data and wakes it up.

When the process leaves the critical section, the mutex lock is released.

 

Finally, to cover the case where the reader process exits the loop because it received a signal, the sleeping writer processes are given another chance: do_wakeup is checked once more, and if it is set to 1, the sleeping writers are awakened.
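The control flow of pipe_read described above can be summarized in pseudocode (a paraphrase of the behavior described in this section, not the actual fs/pipe.c source; identifiers are illustrative):

```
/* Pseudocode paraphrase of the pipe_read flow described above. */
pipe_read(...):
    mutex_lock(pipe mutex)               // serialize access to the pipe
    for (;;) {                           // critical section
        if (data is available) {
            copy data out of the current buffer
            adjust the buffer pointer position
            do_wakeup = 1                // space freed for writers
            if (all requested data read) break
        }
        if (no writer process) break     // end of stream
        if (O_NONBLOCK is set) break     // do not sleep
        if (do_wakeup) wake_up(writers)  // before going to sleep
        pipe_wait(pipe)                  // sleep until a writer wakes us
    }
    mutex_unlock(pipe mutex)
    if (do_wakeup) wake_up(writers)      // final wakeup check
```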

● pipe_write (fs/pipe.c)

First, like pipe_read, pipe_write uses a mutex to protect the critical section. The write operation is also placed in an infinite loop, and the exit conditions are the same as for read.

Unlike pipe_read, the writer process does not always remain asleep: after calling pipe_wait to sleep, as soon as a reader process has read some data, the writer process can resume and perform its write operations.

 
