Linux inter-process communication (IPC)

Linux inter-process communication (IPC) serves several purposes:

Data transfer: one process needs to send its data to another process, in amounts ranging from a single byte to several megabytes.

Data sharing: multiple processes want to operate on shared data; when one process modifies the data, the other processes should see the change immediately.

Event notification: a process needs to tell another process, or a group of processes, that some event has occurred (for example, notifying the parent process when a child terminates).

Resource sharing: multiple processes share the same resource; to make this possible, the kernel must provide locking and synchronization mechanisms.

Process control: one process wants to fully control the execution of another process (as a debugger does); the controlling process wants to intercept every trap and exception of the target process and learn of its state changes in time.

Linux provides the following IPC mechanisms: (1) pipes and named pipes (FIFOs); (2) signals; (3) message queues; (4) shared memory; (5) semaphores; (6) sockets.

Pipes and named pipes: a pipe can be used for communication between related processes (those sharing a common ancestor). A named pipe removes the restriction that a pipe has no name, so in addition to everything a pipe can do, it also allows unrelated processes to communicate. A pipe is a one-way, first-in-first-out, unstructured byte stream of fixed capacity, typically used to connect the output of one process to the input of another. The writing process puts data in at one end of the pipe and the reading process takes it out at the other; once data has been read it is removed from the pipe, and no other reader can see it. Pipes provide a simple flow-control mechanism: a process that tries to read from an empty pipe blocks until data is written, and a process that tries to write to a full pipe blocks until another process takes data out. The usual restrictions are that a pipe is half duplex (data flows in only one direction) and that it can only be used between related processes, such as a parent and its children.
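A minimal sketch of this parent-child pattern: the parent creates a pipe with pipe(), forks, closes the end it does not use, writes a short message, and the child reads it from the other end.

```c
/* Minimal pipe sketch: parent writes, child reads.
 * Error handling is reduced to keep the example short. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];                 /* fds[0]: read end, fds[1]: write end */
    if (pipe(fds) == -1) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }

    pid_t pid = fork();
    if (pid == 0) {             /* child: reader */
        close(fds[1]);          /* close unused write end */
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child read: %s\n", buf);
        }
        close(fds[0]);
        _exit(0);
    } else {                    /* parent: writer */
        close(fds[0]);          /* close unused read end */
        const char *msg = "hello through the pipe";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);          /* reader sees EOF after this */
        wait(NULL);
    }
    return 0;
}
```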
Named pipes are also called FIFOs because, like pipes, they deliver data first in, first out: the data written first is the data read first. Unlike a pipe, a FIFO is not a temporary object; it is a real entity in the file system and can be created with the mkfifo command (or the mkfifo() call). Any process with the appropriate access permissions can use it. FIFOs are opened slightly differently from pipes: a pipe (its two file data structures, VFS inode, and shared data page) is created in a single operation, whereas a FIFO already exists in the file system and is simply opened and closed by its users. In Linux, opening a FIFO normally blocks until the other end is also open: a process that opens the FIFO for reading blocks until some process opens it for writing, and vice versa. Apart from how they are created and opened, FIFOs are handled almost exactly like pipes, and they use the same data structures and operations.
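A minimal sketch of the reader side of a FIFO, assuming an illustrative path /tmp/demo_fifo; any unrelated process with permission can write to the same path (for example, echo "hello via fifo" > /tmp/demo_fifo from a shell).

```c
/* Reader side: create the FIFO (if needed) and read one message. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <errno.h>

int main(void)
{
    const char *path = "/tmp/demo_fifo";    /* illustrative path */

    if (mkfifo(path, 0666) == -1 && errno != EEXIST) {
        perror("mkfifo");
        exit(EXIT_FAILURE);
    }

    /* open() blocks here until some process opens the FIFO for writing */
    int fd = open(path, O_RDONLY);
    if (fd == -1) {
        perror("open");
        exit(EXIT_FAILURE);
    }

    char buf[128];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("read from FIFO: %s\n", buf);
    }
    close(fd);
    unlink(path);               /* remove the FIFO entry when done */
    return 0;
}
```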
Signals: a signal is a relatively complex communication mechanism used to notify a process that some event has occurred. Besides inter-process communication, a process can also send a signal to itself. In addition to the early UNIX signal() function, Linux supports sigaction(), whose semantics conform to the POSIX.1 standard (sigaction() in fact originated in BSD, which introduced it to provide reliable signal semantics with a unified interface and reimplemented signal() on top of it). Signals are a software-level analogue of the hardware interrupt mechanism and an asynchronous form of communication.

Signals pass directly between user-space processes and the kernel, and the kernel also uses them to notify user-space processes of system events. A signal can be sent to a process at any time, without knowing the state of that process. If the process is not currently running, the kernel saves the signal and delivers it when the process resumes execution; if the process has blocked the signal, delivery is postponed until the signal is unblocked. A process can deal with a signal in one of three ways: ignore it (two signals, SIGKILL and SIGSTOP, can never be ignored); catch it by installing a signal handler that runs when the signal occurs; or let the default action take place, since Linux defines a default action for every signal.
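A minimal sketch of catching a signal with sigaction(), as described above; the handler here simply records that SIGINT arrived.

```c
/* Install a handler for SIGINT with sigaction(); pressing Ctrl-C
 * runs the handler instead of the default action. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int signo)
{
    (void)signo;
    got_sigint = 1;             /* only set a flag in the handler */
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);   /* block no extra signals in the handler */
    sa.sa_flags = 0;
    if (sigaction(SIGINT, &sa, NULL) == -1) {
        perror("sigaction");
        return 1;
    }

    printf("waiting for SIGINT (Ctrl-C)...\n");
    while (!got_sigint)
        pause();                /* sleep until a signal arrives */

    printf("caught SIGINT, exiting\n");
    return 0;
}
```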
Message queues: a message queue is a linked list of messages; Linux provides both POSIX message queues and System V message queues. A process with sufficient (write) permission can add messages to a queue, and a process with read permission can read messages from it. Message queues overcome two shortcomings of the earlier mechanisms: signals carry very little information, and pipes can only carry unformatted byte streams with a limited buffer size. Working with a message queue involves four operations: creating or opening a queue, adding a message, reading a message, and controlling the queue. The function that creates or opens a System V message queue is msgget; the number of queues that can be created is limited by a system-wide limit. msgsnd adds a message to the end of an open queue. msgrcv reads a message and removes it from the queue; unlike a FIFO, the caller can select a particular message by its type rather than always taking the oldest one. msgctl controls the queue and can perform several operations, including removing it.
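A minimal sketch of the four System V message-queue calls just listed; the key value, message type, and text are illustrative.

```c
/* System V message queue: create, send one message, receive it, remove. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf_demo {
    long mtype;                 /* message type, must be > 0 */
    char mtext[64];
};

int main(void)
{
    key_t key = 0x1234;         /* illustrative key; ftok() is also common */
    int qid = msgget(key, IPC_CREAT | 0666);            /* create or open */
    if (qid == -1) { perror("msgget"); return 1; }

    struct msgbuf_demo out = { .mtype = 1 };
    strcpy(out.mtext, "hello via message queue");
    if (msgsnd(qid, &out, sizeof(out.mtext), 0) == -1) {    /* add to queue */
        perror("msgsnd"); return 1;
    }

    struct msgbuf_demo in;
    if (msgrcv(qid, &in, sizeof(in.mtext), 1, 0) == -1) {   /* take a type-1 message */
        perror("msgrcv"); return 1;
    }
    printf("received: %s\n", in.mtext);

    msgctl(qid, IPC_RMID, NULL);    /* remove the queue */
    return 0;
}
```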
Semaphores: semaphores are used for synchronization between processes and between threads of the same process. A semaphore is an inter-process mechanism for solving synchronization and mutual-exclusion problems; it consists of a counter value, a queue of processes waiting for the resource the semaphore guards, and two atomic operations on the semaphore, the P and V operations. A semaphore corresponds to one kind of resource and takes a non-negative integer value: the value is the number of currently available resources, and a value of zero means no resource is available. P operation: if a resource is available (the value is greater than zero), take one (decrement the value and enter the critical section); if no resource is available (the value is zero), block until the system hands the process a resource (join the wait queue and wait for the process's turn). V operation: if some process is blocked in the semaphore's wait queue, wake one of them; otherwise release a resource (increment the value). (A minimal sketch using POSIX semaphores appears at the end of this article.)

Shared memory: shared memory is the most useful inter-process communication method and the fastest form of IPC. Two processes A and B share memory when the same physical memory is mapped into the address space of both: A immediately sees any update B makes to the shared region, and vice versa. Because several processes share the same memory area, a synchronization mechanism is needed; both mutexes and semaphores can be used. The obvious advantage of shared-memory communication is efficiency, because processes read and write the memory directly, without any data copying. With mechanisms such as pipes and message queues, the data is copied four times between kernel and user space, whereas with shared memory it is copied only twice: once from the input file into the shared memory region and once from the shared memory region to the output file. Moreover, the mapping is not torn down after each small read or write and rebuilt for the next exchange; instead the shared region is kept until the communication is finished, so the data stays in shared memory and is not written back to a file in between (the contents are typically written back to the file only when the mapping is removed). This is why shared-memory communication is so efficient. The steps for using shared memory are: 1. create the shared memory with shmget, which obtains a shared memory region from the system; 2. map (attach) it into the address space of the process with shmat; 3. operate on it with direct, unbuffered reads and writes, as if it were ordinary memory; 4. remove the mapping (detach) with shmdt. (A minimal sketch also appears at the end of this article.)

Sockets: sockets are a more general inter-process communication mechanism and can also be used between processes on different machines. They were originally developed by the BSD branch of the UNIX family, but have since been ported to other UNIX-like systems: both Linux and the System V variants support sockets. For details, see UNIX Network Programming, Volume 2: Interprocess Communications.
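A minimal local sketch of socket-based IPC using socketpair(), which creates a connected pair of UNIX-domain sockets between a parent and child; communication with another machine would instead use socket(), bind(), connect(), and related calls.

```c
/* socketpair(): a connected pair of UNIX-domain sockets, usable like a
 * bidirectional pipe between parent and child. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                     /* child uses sv[1] */
        close(sv[0]);
        const char *msg = "hello from the child over a socket";
        write(sv[1], msg, strlen(msg));
        close(sv[1]);
        _exit(0);
    }

    close(sv[1]);                       /* parent uses sv[0] */
    char buf[128];
    ssize_t n = read(sv[0], buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("parent received: %s\n", buf);
    }
    close(sv[0]);
    wait(NULL);
    return 0;
}
```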
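For the semaphore P and V operations described earlier, a minimal sketch using a POSIX unnamed semaphore shared by two threads of one process (sem_wait is the P operation, sem_post the V operation); System V semaphores (semget/semop) follow the same idea. With an initial value of 1 the semaphore acts as a simple mutex around the critical section.

```c
/* POSIX semaphore as a mutex protecting a shared counter between two
 * threads: sem_wait() is the P operation, sem_post() is the V operation.
 * Compile with: gcc demo.c -pthread */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t sem;               /* semaphore with initial value 1 */
static long shared_counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);         /* P: take the resource or block */
        shared_counter++;       /* critical section */
        sem_post(&sem);         /* V: release the resource */
    }
    return NULL;
}

int main(void)
{
    sem_init(&sem, 0, 1);       /* 0 = shared between threads, value 1 */

    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("counter = %ld (expected 200000)\n", shared_counter);
    sem_destroy(&sem);
    return 0;
}
```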
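And for the shared-memory steps listed earlier (shmget, shmat, direct access, shmdt), a minimal single-process sketch with an illustrative key; a second process would call shmget with the same key and shmat to map the same physical memory.

```c
/* System V shared memory: create (shmget), attach (shmat), access it as
 * ordinary memory, detach (shmdt), and finally remove it (shmctl). */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = 0x5678;                         /* illustrative key */
    int shmid = shmget(key, 4096, IPC_CREAT | 0666);    /* step 1: create */
    if (shmid == -1) { perror("shmget"); return 1; }

    char *addr = shmat(shmid, NULL, 0);         /* step 2: map into the process */
    if (addr == (char *)-1) { perror("shmat"); return 1; }

    /* step 3: read and write the region directly, no copying, no buffering */
    strcpy(addr, "hello in shared memory");
    printf("shared memory contains: %s\n", addr);

    shmdt(addr);                                /* step 4: remove the mapping */
    shmctl(shmid, IPC_RMID, NULL);              /* delete the segment itself */
    return 0;
}
```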