Inter-process communication mechanisms: pipes, shared memory, semaphores, signals, and message queues.
1. Pipe (pipe): a half-duplex communication channel that can only be used between related processes (i.e., processes with a parent-child relationship).
A pipe is a buffer managed by the kernel: one process writes data into one end of the pipe, and another process reads data from the other end.
When the pipe is empty, a process that reads from it (read()) blocks until the process at the other end writes data in. When the pipe is full, a process that tries to write into it (write()) blocks until the process at the other end reads data out.
When both processes have finished, the pipe disappears.
Reading from a pipe is a one-time operation: once data is read out, it is removed from the pipe, freeing space for more data to be written.
2. Named pipe (FIFO): also half-duplex, but allows communication between unrelated processes.
3. Shared memory:
Shared memory is the most efficient way for processes to communicate.
Processes read and write the memory directly, without any copying of data.
Shared memory is a region of memory that multiple processes can access: the region is mapped into the address space of each process that shares it, so data transfer between these processes no longer involves the kernel, i.e., the communication does not need to go through system calls into the kernel.
Shared memory allows two or more processes to share a given region of storage. Because the region is mapped into each process's own address space, data written into the shared memory by one process can be read by the other processes through a simple memory read. In this way, communication between processes is achieved.
The main benefit of communicating through shared memory is efficiency: processes read and write the memory directly without any copying of data. For communication methods such as pipes and message queues, four copies of the data are needed between kernel and user space, whereas shared memory needs only two: once from the input file into the shared memory region, and once from the shared memory region to the output file.
The memory-mapped (mmap) mechanism achieves shared memory between processes by mapping the same ordinary file into their address spaces, via the mmap() system call. After an ordinary file is mapped into the process address space, the process can access the file like ordinary memory, without calling read()/write() and other file-manipulation functions.
Alternatively, shared memory can be created with shmget():
• To let multiple processes exchange information, the kernel sets aside a region of memory (or a process creates shared memory within its own address space)
• The region is mapped into the private address space of each process that needs to access it
• Processes read and write this memory region directly, without copying data, which improves efficiency
Multiple processes sharing a region of memory must rely on some synchronization mechanism, such as mutexes or semaphores
Shared memory programming steps:
1). Create the shared memory
• function shmget()
• obtains a segment of shared memory from the kernel
2). Map the shared memory
• maps the newly created shared memory into the address space of a specific process
• function shmat()
3). Use the shared memory
• read and write it directly like ordinary memory, with no intermediate buffering
4). Undo the mapping: function shmdt()
5). Delete the shared memory: function shmctl()
4. Signal
A mechanism for communication or control between processes. A signal can be sent to a process at any time, without needing to know the state of that process. If the process is not currently executing, the signal is saved by the kernel until the process resumes execution, and is then delivered to it. If the process has set the signal to be blocked, delivery is delayed until the process unblocks the signal.
A signal is a software-level simulation of the interrupt mechanism and is an asynchronous communication method. Signals allow direct interaction between user-space processes and the kernel.
Signals have two kinds of sources:
1) Hardware sources, e.g., Ctrl+C, which usually generates the interrupt signal SIGINT.
2) Software sources, such as sending a signal with a system call or command. The most common signal-sending functions are kill(), raise(), setitimer(), and sigqueue() (sigaction() installs handlers rather than sending signals). Software sources also include events such as illegal operations.
Once a signal is generated, the user process can respond to it in three different ways:
1) Perform the default action; Linux specifies a default action for every signal.
2) Catch the signal: define a signal-handling function, which is executed when the signal occurs.
3) Ignore the signal: when you do not want a received signal to affect the process and want the process to continue executing, the signal can be ignored, i.e., the process does nothing with it.
Two signals can be neither caught nor ignored by an application process: SIGKILL and SIGSTOP. This ensures that a system administrator can always interrupt or terminate a particular process.
5. Message queue
A message queue is a linked list of messages stored in the kernel. A user process can add messages to a message queue or read messages from it.
An advantage of message queues is that each message is tagged with a type when it is sent, so a receiver does not have to consume messages in queue order: it can receive messages of a specific type according to its own criteria.
6. Semaphore
Semaphores are divided into named and unnamed (anonymous) semaphores. Named semaphores are typically used between processes that do not share memory (they are implemented in the kernel); unnamed semaphores can be used for communication between threads (stored in memory shared by the threads, such as a global variable) or between processes (stored in memory shared by the processes, such as System V / POSIX shared memory).
Message queues, shared memory: System V style.
Mutex (mutex) + unnamed semaphore: thread communication.
Mutex (mutex) + condition variable (condition): thread communication.
The PV operations consist of the P primitive and the V primitive (a primitive is a non-interruptible operation). The operations on a semaphore S are defined as follows:
P(S):
① decrease the value of the semaphore S by 1, i.e., S = S - 1;
② if S >= 0, the process continues to execute; otherwise the process is set to the waiting state and placed in the waiting queue.
V(S):
① increase the value of the semaphore S by 1, i.e., S = S + 1;
② if S > 0, the process continues; otherwise, the first process in the queue waiting on the semaphore is released.
The significance of PV operations: semaphores and PV operations are used to achieve synchronization and mutual exclusion between processes. PV operations belong to the low-level communication of processes.
The data structure of a semaphore consists of a value and a pointer to the next process waiting on that semaphore. The value of the semaphore is related to the usage of the corresponding resource: when the value is greater than 0, it represents the number of currently available resources; when the value is less than 0, its absolute value is the number of processes waiting to use the resource. Note that the value of a semaphore can only be changed by PV operations.
Let's look at a concrete example. Suppose three threads A, B, and C enter a critical section guarded by a semaphore initialized to 1:
1. Thread A enters and performs the P operation: sem = 0, and A continues to execute.
2. Thread B enters while A is still using the critical resource; B performs the P operation, sem = -1, and B enters the waiting queue.
3. Thread C enters while A is still using the critical resource; C performs the P operation, sem = -2, and C enters the waiting queue.