Inter-process communication

Source: Internet
Author: User

When a user application calls the C library's inter-process communication functions, those functions are in fact implemented in the kernel through system calls, building on the kernel's well-established file-system mechanisms.

1 Pipes

A pipe is a shared file used solely to connect a reading process and a writing process, thereby enabling communication between them; for this reason it is also called a shared file. The sending process (the write process) feeds data into the pipe (the shared file) as a character stream, and the receiving process (the read process) takes data out of the pipe. Because both sides communicate through this shared file, such files are collectively referred to as pipes.

To coordinate the two communicating sides, the pipe communication mechanism must provide the following three capabilities.

    • Mutual exclusion. While one process is reading from or writing to a pipe, other processes must wait.
    • Synchronization. When the write (input) process has written a certain amount of data to the pipe, it goes to sleep and waits until the read (output) process has taken the data away and wakes it up. Likewise, when the read process finds the pipe empty, it sleeps until the write process has put data into the pipe and wakes it up.
    • Determining that both communicating parties exist. Communication can take place only when it is certain that both parties already exist.

A pipe is a fixed-size buffer, one page (4 KB) in size. The pipe reuses the file system's file structure and the VFS index node (inode): two file structures point to the same temporary VFS inode, which in turn points to a physical page; this is how the pipe is implemented. The two file structures define different file-operation routines: one holds the routine for writing data into the pipe, the other the routine for reading data out of it. In this way, the user program still issues ordinary file-system system calls, while the kernel uses this abstraction to implement the pipe's special behavior.

Linux also supports named pipes. A named pipe is a special FIFO file: it has a name like an ordinary file and is accessed like one. It always operates on the first-in, first-out (FIFO) principle. FIFO pipes are not temporary objects; they are entities in the file system and can be created with the mkfifo command.

2 Message Queues

A message queue is a linked list of messages. One or more processes with the appropriate permissions can read from and write to a message queue.

Like pipes, the send and receive functions follow this processing logic:

If the receiver finds no waiting message, it registers itself in the list of waiting receivers. Before adding a new message to the message array, the sender checks this list; if a receiver is waiting, it bypasses the message array and hands the message directly to that receiver.

The receiver then takes the message and returns without having to grab the queue's spinlock.

3 Shared Memory

Shared memory between processes A and B means that the same piece of physical memory is mapped into the address spaces of both processes. Process A immediately sees the updates that process B makes to the shared memory, and vice versa.

Shared memory comes in several forms: the mmap() system call, POSIX shared memory, and System V shared memory. With mmap(), different processes open and map the same ordinary file into memory; by accessing this common mapping, the processes ultimately achieve shared memory.

Each newly created shared memory region is represented by a shmid_ds data structure; these are stored in the shm_segs array. The shmid_ds structure describes the size of the shared memory, how processes use it, and how the shared memory is mapped into their address spaces. The creator of the shared memory controls access to it and whether its key is public or private. With sufficient permissions, it can also lock the shared memory into physical memory.

Every process that uses this shared memory must attach it to its virtual memory through a system call. The process then creates a new vm_area_struct structure to describe the shared memory.

The new vm_area_struct is placed on the vm_area_struct linked list pointed to by shmid_ds, and the entries are linked together with the vm_next_shared and vm_prev_shared pointers. The virtual memory is not actually populated at attach time; the pages are created when a process first accesses them.

The first time a process accesses a page of the shared virtual memory, a page fault occurs. When resolving this fault, Linux finds the vm_area_struct data structure that describes the page; it contains a pointer to the handler function for this type of virtual memory. The shared-memory page-fault handling code looks for the page in the page-table-entry list of this shmid_ds. If the page does not exist, the handler allocates a physical page and creates a page-table entry for it. The entry is placed both in the current process's page table and in the shmid_ds structure. The next process attempting to access this memory therefore also takes a page fault, and the shared-memory fault handler maps the newly created physical page into that process as well. Thus the first process to access a page of shared virtual memory causes it to be created, and subsequent processes simply add the page to their own virtual address spaces.

When a process no longer shares the virtual memory, its connection to the shared memory is broken. If other processes are still using the memory, this action affects only the current process: its vm_area_struct is removed from the shmid_ds structure and deallocated, and the current process's page-table entries for the shared memory region are updated and invalidated.

When the last process detaches from the shared memory, the shared-memory pages currently in physical memory are freed, and the shmid_ds structure of the shared memory is freed as well.

4 Signals

A signal is used to notify one or more processes of an asynchronous event; it is a software-level simulation of the interrupt mechanism. Signals are the only asynchronous mechanism among the inter-process communication mechanisms.

In terms of reliability, signals are divided into reliable and unreliable signals. In terms of timing, they are divided into real-time and non-real-time signals.

Unreliable signals are the non-real-time signals. With unreliable signals, the signal's disposition is reset to the default action after each delivery.

Reliable signals are the real-time signals; they were added to the signal mechanism later.

The difference between real-time and non-real-time signals is that real-time signals are queued, which guarantees that multiple signals sent will all be received.

5 Semaphores

A semaphore mainly provides an access-control mechanism for resources shared between processes, ensuring in the mutual-exclusion case that only one process accesses the resource at a time. A semaphore set is a collection of semaphores used to synchronize processes across multiple shared resources. The value of a semaphore indicates the number of currently available shares of the resource. When a process requests the resource, the requested count is subtracted from the semaphore; if not enough shares are available, the process can either wait or return immediately.
