Introduction to Linux Inter-Process Communication (IPC)


Purpose of interprocess communication

Data transfer: one process needs to send data to another, ranging from a single byte to several megabytes.
Shared data: multiple processes want to operate on shared data; when one process modifies it, the change should be visible to the other processes immediately.
Event notification: a process needs to send a message to another process, or a group of processes, to notify it (or them) that an event has occurred (for example, notifying the parent process when a child terminates).
Resource sharing: multiple processes share the same resources; for this, the kernel must provide locking and synchronization mechanisms.
Process control: some processes want full control over the execution of another process (a debugger, for example); the controlling process wants to intercept all traps and exceptions of the other process and learn of its state changes in a timely manner.

Process communication methods

The main ways processes communicate in Linux are:

(1) Pipes (pipe) and named pipes (FIFO)
(2) Signals (signal)
(3) Message queues
(4) Shared memory
(5) Semaphores (semaphore)
(6) Sockets (socket)

Pipes

Pipes (pipe) and named pipes (FIFO): a pipe can be used for communication between related processes. A named pipe overcomes the limitation that a pipe has no name, so in addition to the functionality of an ordinary pipe, it allows communication between unrelated processes.
A pipe is a one-way, first-in-first-out, unstructured byte stream of fixed size that connects the standard output of one process to the standard input of another. The writing process appends data at one end of the pipe, and the reading process removes data from the other end. Once data has been read, it is removed from the pipe, and no other reader can read it again. A pipe provides a simple flow-control mechanism: when a process tries to read from an empty pipe, it blocks until data is written; likewise, when the pipe is full, a process that tries to write blocks until another process removes data from the pipe. A pipe is usually size-limited and half-duplex (data flows in only one direction), and it can only be used between related processes such as a parent and child.
A named pipe is also called a FIFO, because the pipe works on the first-in-first-out principle: the data written first is also the data read first. Unlike an ordinary pipe, a FIFO is not a temporary object; it is a real entity in the file system and can be created with the mkfifo command. Any process can use a FIFO as long as it has the appropriate access rights. A FIFO is opened slightly differently from a pipe: a pipe (its two file data structures, VFS inode, and shared data page) is created in a single step, whereas a FIFO already exists and is opened and closed by its users. Linux must handle a reader opening the FIFO before a writer does, as well as a reader reading before any data has been written. Apart from that, a FIFO is handled almost exactly like a pipe; they use the same data structures and operations.

Signal

Signal (signal): a signal is a relatively complex communication method used to notify the receiving process that an event has occurred. Besides inter-process communication, a process can also send signals to itself. In addition to the early UNIX signal semantics of the signal function, Linux supports the sigaction function, whose semantics conform to the POSIX.1 standard (sigaction is in fact based on BSD, which introduced it both to provide a reliable signal mechanism and to unify the external interface).
A signal is a software-level simulation of the interrupt mechanism, and it is an asynchronous communication method.
Signals can pass directly between user-space processes and the kernel, and the kernel can also use them to inform user-space processes of system events. A signal can be sent to a process at any time, without needing to know the process's state.
If the process is not currently executing, the kernel saves the signal until the process resumes and then delivers it; if a process blocks a signal, delivery is deferred until the process unblocks it.

How a process handles a signal:
Ignore the signal, that is, do nothing when it arrives. Two signals cannot be ignored: SIGKILL and SIGSTOP.
Catch the signal: define a signal-handler function that is executed when the signal occurs.
Perform the default action: Linux defines a default action for every signal.

Message queues

Message queues: a message queue is a linked list of messages; implementations include POSIX message queues and System V message queues. A process with sufficient permission can add messages to a queue, and processes granted read permission can read messages from it. Message queues overcome the drawbacks that signals carry little information, that pipes can carry only unformatted byte streams, and that buffer sizes are limited.
Using a System V message queue involves four operations: creating or opening a queue, adding a message, reading a message, and controlling the queue:
The function for creating or opening a message queue is msgget; the number of queues that can be created is bounded by a system-wide limit.
The function for adding a message is msgsnd, which appends the message to the end of an open message queue.
The function for reading a message is msgrcv, which removes the message from the queue; unlike a FIFO, a specific message (selected by type) can be taken out.
The function for controlling a message queue is msgctl, which can perform a number of operations, such as removing the queue.

Semaphores

Semaphore (semaphore): used primarily as a means of synchronization between processes, and between different threads of the same process. A semaphore is a communication mechanism for solving synchronization and mutual-exclusion problems between processes. It consists of a variable called the semaphore, a queue of processes waiting for the resource guarded by that semaphore, and two atomic operations on it (the P and V operations). The semaphore corresponds to a resource and takes a non-negative integer value: the value is the number of currently available resources, and a value of 0 means no resource is currently available.

P operation: if a resource is available (semaphore value > 0), take one resource (decrement the semaphore and enter the critical-section code). If no resource is available (the value equals 0), block — join the wait queue — until the system assigns a resource to the process.
V operation: if a process is waiting on the semaphore's wait queue, wake one blocked process. If no process is waiting, release one resource (increment the semaphore).

Shared memory

Shared memory is arguably the most useful inter-process communication method, and the fastest form of IPC. Two different processes A and B sharing memory means that the same physical memory is mapped into the address spaces of both A and B. Process A immediately sees process B's updates to the shared memory, and vice versa. Because multiple processes share the same region of memory, some synchronization mechanism is needed, such as mutexes or semaphores.

One obvious benefit of shared-memory communication is efficiency: processes can read and write the memory directly, without copying any data. Communication mechanisms such as pipes and message queues require four copies of the data between kernel and user space, whereas shared memory needs only two: one from the input file into the shared memory region, and one from the shared memory region to the output file. Moreover, when processes communicate repeatedly through shared memory, the mapping is not torn down and re-established for each exchange; instead, the shared region is kept until communication is complete, so the data stays in shared memory and is not written back to a file. Content in shared memory is typically written back to the file only when the mapping is released. Shared-memory communication is therefore very efficient.

Steps for a shared memory implementation:
1. Create the shared memory with shmget, which obtains a shared memory region from the kernel.
2. Map the shared memory into the specific process's address space with shmat.
3. Access it with ordinary, unbuffered memory reads and writes.
4. Detach the mapping with shmdt.

Socket interface

Socket: a more general inter-process communication mechanism that can also be used for communication between processes on different machines. Originally developed by the BSD branch of the UNIX system, sockets can now generally be used on other Unix-like systems: both Linux and System V variants support them.
