Compiled from material found online
Unix IPC mechanisms include pipes (pipe), named pipes (FIFO), and signals (signal)
Pipes (pipe)
Pipes can be used for communication between related processes. Named pipes (FIFOs) overcome the limitation that anonymous pipes have no name: in addition to providing all the functionality of a pipe, they allow communication between unrelated processes.
Implementation mechanism:
A pipe is a buffer managed by the kernel; it is roughly equivalent to a note that we place in memory. One end of the pipe is connected to the output of a process, which puts information into the pipe. The other end is connected to the input of another process, which takes out the information that was put in. The buffer does not need to be large, because it is designed as a circular data structure whose space can be reused. When there is no information in the pipe, a process reading from it waits until the process at the other end puts information in. When the pipe is full, a process trying to write waits until the process at the other end takes information out. When both processes terminate, the pipe disappears automatically.
In principle, a pipe is set up using the fork mechanism, which is how two processes come to be connected to the same pipe. At the beginning, both ends of the pipe are connected to the same process (process 1). When fork copies the process, both connections are copied into the new process (process 2). Each process then closes the connection it does not need: process 1 closes its read connection from the pipe, and process 2 closes its write connection to the pipe, so that the remaining pair of connections forms a one-way pipe between them.
Implementation Details:
In Linux, pipes are not implemented with a dedicated data structure; instead they reuse the file structure of the filesystem and the VFS index node (inode). Two file structures are pointed at the same temporary VFS inode, and that inode in turn points to a physical page in memory.
The two file data structures hold different file-operation routine addresses: one holds the address of the routine that writes data to the pipe, and the other holds the address of the routine that reads data from it. In this way, a user program's system calls are still the usual file operations, while the kernel uses this abstraction to implement the pipe's special behavior.
Pipe reads and writes
Pipes are implemented in the source file fs/pipe.c. Among the many functions there, two are particularly important: the pipe read function pipe_read() and the pipe write function pipe_write(). The write function writes data by copying bytes into the physical memory pointed to by the VFS inode, and the read function reads data by copying bytes out of that memory. The kernel must of course synchronize access to the pipe, and for this it uses locks, wait queues, and signals.
When the writing process writes to the pipe, it uses the standard library function write(); from the file descriptor passed to the library function, the system can find the file structure for the file. The file structure contains the address of the function used for the write operation, and the kernel calls that function to complete the write. Before the write function copies data into memory, it must first check the information in the VFS inode; the actual memory copy takes place only when both of the following conditions hold:
• There is enough room in memory to hold all the data to be written;
• The memory is not locked by a reading process.
If these conditions are met, the write function first locks the memory and then copies the data from the writing process's address space into it. Otherwise, the writing process sleeps on the VFS inode's wait queue, and the kernel calls the scheduler, which selects another process to run. The writing process is in an interruptible wait state; when there is enough room in memory for the data, or when the memory is unlocked, the reading process wakes it up. Once the data has been written to memory, the memory is unlocked, and all reading processes sleeping on the inode are awakened.
Reading from a pipe is very similar to writing. However, depending on the open mode of the file or pipe, the process may receive an error immediately when there is no data or the memory is locked, rather than blocking; alternatively, it can sleep on the inode's wait queue until the writing process writes data. When all processes have finished with the pipe, the pipe's inode is discarded and the shared data page is freed.
Linux function prototypes
#include <unistd.h>
int pipe(int filedes[2]);
filedes[0] is used to read data; the reader must close the write end, i.e. close(filedes[1]);
filedes[1] is used to write data; the writer must close the read end, i.e. close(filedes[0]).
Program example:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define MAXLINE 4096

int main(void)
{
    int     n;
    int     fd[2];
    pid_t   pid;
    char    line[MAXLINE];

    if (pipe(fd) < 0) {          /* first create the pipe, obtaining a pair of file descriptors */
        exit(0);
    }
    if ((pid = fork()) < 0) {    /* fork copies the file descriptors into the child */
        exit(1);
    } else if (pid > 0) {        /* parent process writes */
        close(fd[0]);            /* close the read end */
        write(fd[1], "\nhello world\n", 13);
    } else {                     /* child process reads */
        close(fd[1]);            /* close the write end */
        n = read(fd[0], line, MAXLINE);
        write(STDOUT_FILENO, line, n);
    }
    exit(0);
}
Named pipes (FIFO)
Because of the fork mechanism, anonymous pipes can only be used between a parent and child process, or between two child processes with a common ancestor (that is, between related processes). To solve this problem, Linux provides the FIFO as a way to connect processes; a FIFO is also called a named pipe.
A FIFO (first in, first out) is a special file type that has a corresponding path in the filesystem. When one process opens the file for reading (r) and another process opens it for writing (w), the kernel establishes a pipe between the two processes; the FIFO is thus actually managed by the kernel and does not touch the disk. It is called a FIFO because the pipe is essentially a first-in-first-out queue: what is put in first is read out first, which guarantees the order of the information exchanged. The FIFO merely borrows the filesystem to name the pipe: a named pipe is a special type of file (in Linux, everything is a file), and it exists in the filesystem under a file name. Processes that open it in write mode write to the FIFO file, while processes that open it in read mode read from it. When the FIFO file is deleted, the pipe connection disappears as well. The advantage of a FIFO is that the pipe can be identified by the file's path, so that processes with no relationship to each other can establish a connection through it.
Function Prototypes:
#include <sys/types.h>
#include <sys/stat.h>
int mkfifo(const char *filename, mode_t mode);
int mknod(const char *filename, mode_t mode | S_IFIFO, (dev_t)0);
Here filename is the name of the file to be created; mode specifies both the permission bits to set on the file and the type of file to create (in this case S_IFIFO); and the dev_t argument is a value used when creating a device special file, so for a FIFO it is 0.