The six major IPC mechanisms under Linux [repost]

Source: Internet
Author: User
Tags: message queue, POSIX, semaphore, root access

Reposted from http://blog.sina.com.cn/s/blog_587c016a0100nfeq.html

An introduction to the main methods of inter-process communication (IPC) under Linux:

Pipe and named pipe (FIFO): pipes can be used for communication between related processes (those sharing a common ancestor); named pipes overcome the limitation that pipes have no name, so that, in addition to everything a pipe can do, they also allow communication between unrelated processes;
Signal: the signal is a relatively complex communication mechanism used to notify the receiving process that some event has occurred; besides inter-process communication, a process can also send a signal to itself. In addition to the early UNIX semantics of the signal function, Linux also supports the POSIX.1 sigaction function (in fact the signal function in Linux is implemented on top of sigaction, following BSD, which re-implemented signal with sigaction in order to provide a reliable signal mechanism while keeping a unified external interface);
Message queue: a message queue is a linked list of messages, and includes POSIX message queues and System V message queues. A process with sufficient permission can add messages to a queue, and a process with read permission can read messages from it. Message queues overcome the drawbacks that signals carry little information and that pipes can only carry an unformatted byte stream with a limited buffer size.
Shared memory: allows multiple processes to access the same region of memory and is the fastest form of IPC available; it was designed with the inefficiency of the other communication mechanisms in mind. It is usually used together with another mechanism, such as semaphores, to achieve synchronization and mutual exclusion between processes.
Semaphore: used mainly as a means of synchronization between processes, and between different threads of the same process.
Socket: a more general inter-process communication mechanism that can also be used between processes on different machines. Originally developed for the BSD branch of UNIX, it is now available on most Unix-like systems: both Linux and the System V variants support sockets.

Pipes

The two ends of a pipe are referred to by the file descriptors fd[0] and fd[1], and it is important to note that each end has a fixed role. One end can only be used for reading, is represented by the descriptor fd[0], and is called the read end of the pipe; the other end can only be used for writing, is represented by the descriptor fd[1], and is called the write end. Attempting to read from the write end, or to write to the read end, results in an error. The ordinary file I/O functions, such as close, read, and write, can all be used on a pipe.

    • Only unidirectional data flow is supported;
    • It can only be used between related processes (those sharing a common ancestor);
    • It has no name;
    • The pipe's buffer is finite (the pipe exists only in memory, and a buffer of one page is allocated when the pipe is created);
    • The pipe carries an unformatted byte stream, so the reader and writer must agree on the data format in advance, for example how many bytes constitute a message (or command, or record).
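
A minimal sketch of the usage described above, not taken from the original article: a parent creates a pipe with pipe(), forks, writes a string into the write end fd[1], and the child reads it from fd[0]. The message text is illustrative and error handling is abbreviated.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        char buf[64];

        if (pipe(fd) == -1) {          /* fd[0]: read end, fd[1]: write end */
            perror("pipe");
            exit(EXIT_FAILURE);
        }

        pid_t pid = fork();
        if (pid == 0) {                /* child: reader */
            close(fd[1]);              /* close the unused write end */
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child read: %s\n", buf);
            }
            close(fd[0]);
            _exit(0);
        } else {                       /* parent: writer */
            close(fd[0]);              /* close the unused read end */
            const char *msg = "hello through the pipe";
            write(fd[1], msg, strlen(msg));
            close(fd[1]);
            wait(NULL);
        }
        return 0;
    }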

A significant limitation of pipes is that they have no name, so they can only be used for communication between related processes; this is overcome by the named pipe (also called a FIFO). A FIFO differs from a pipe in that it is associated with a path name and exists in the file system as a FIFO file. Thus even processes unrelated to the FIFO's creator can communicate through it, as long as they can access that path, so unrelated processes can also exchange data. Note that a FIFO strictly follows first-in, first-out order: reads return data from the beginning of the pipe, and writes append data to the end. FIFOs do not support file-positioning operations such as lseek().
Pipes are commonly used in two situations: (1) in the shell (for input/output redirection), in which case the pipe is created transparently to the user; (2) for communication between related processes, where the user creates the pipe and performs the reads and writes.
A FIFO can be regarded as a generalization of the pipe: it removes the restriction that a pipe has no name, so that unrelated processes can also communicate through the same first-in, first-out mechanism.
Data in pipes and FIFOs is a byte stream, so applications must agree on a specific transport "protocol" in advance in order to exchange messages that carry a particular meaning.
To use pipes and FIFOs flexibly, it is essential to understand their read and write rules.
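
A minimal FIFO sketch, assuming the hypothetical path /tmp/demo_fifo: run the program with "write" in one terminal and without arguments in another, and the two unrelated processes communicate through the FIFO file.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define FIFO_PATH "/tmp/demo_fifo"   /* hypothetical path */

    int main(int argc, char *argv[])
    {
        char buf[128];

        mkfifo(FIFO_PATH, 0666);                  /* ignore EEXIST for brevity */

        if (argc > 1 && strcmp(argv[1], "write") == 0) {
            int fd = open(FIFO_PATH, O_WRONLY);   /* blocks until a reader opens */
            write(fd, "hello via FIFO", 15);
            close(fd);
        } else {
            int fd = open(FIFO_PATH, O_RDONLY);   /* blocks until a writer opens */
            ssize_t n = read(fd, buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("read from FIFO: %s\n", buf);
            }
            close(fd);
        }
        return 0;
    }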

Message Queues

A message queue is a linked list of messages. A message can be regarded as a record, with a specific format and a specific priority. A process with write permission on a message queue can add new messages to it according to certain rules, and a process with read permission can read messages from it. A message queue persists with the kernel, whereas a signal persists only with the process.

Each message queue has a limited capacity (the number of bytes it can hold), and this value differs from system to system.

Message queues offer more flexibility than pipes and named pipes. First, they provide formatted messages rather than a raw byte stream, which reduces the developer's workload; second, messages have a type, which can be used as a priority in real applications. Neither of these is possible with pipes or named pipes. Like a named pipe, a message queue can be shared by several processes regardless of whether those processes are related, but a message queue persists with the kernel, which makes it more powerful and gives it more room than a named pipe, which persists only with the process.
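
A minimal sketch using the System V message queue API (one of the two flavours mentioned above, not the article's own example): a typed message is sent with msgsnd and read back with msgrcv. The key 0x1234 and the message text are illustrative, and error handling is abbreviated.

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct demo_msg {
        long mtype;          /* message type, must be > 0 */
        char mtext[128];     /* message payload */
    };

    int main(void)
    {
        key_t key = 0x1234;                            /* hypothetical key */
        int qid = msgget(key, IPC_CREAT | 0666);
        if (qid < 0) { perror("msgget"); return 1; }

        struct demo_msg out = { .mtype = 1 };
        strcpy(out.mtext, "hello via message queue");
        msgsnd(qid, &out, strlen(out.mtext) + 1, 0);   /* size excludes mtype */

        struct demo_msg in;
        msgrcv(qid, &in, sizeof(in.mtext), 1, 0);      /* first message of type 1 */
        printf("received: %s\n", in.mtext);

        msgctl(qid, IPC_RMID, NULL);                   /* remove the queue */
        return 0;
    }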

Signal

The signal is the only asynchronous mechanism among the inter-process communication mechanisms; it can be regarded as an asynchronous notification telling the receiving process that some event has occurred. With the POSIX real-time extensions, the signal mechanism became more powerful and can deliver additional information along with the basic notification.
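
A minimal sketch of that real-time extension, assuming the illustrative payload value 42: the process installs a handler with sigaction and SA_SIGINFO, queues a real-time signal to itself with sigqueue, and the handler receives the value the signal carries. Using printf inside a signal handler is not async-signal-safe and is done here only for brevity.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void handler(int sig, siginfo_t *info, void *ctx)
    {
        (void)ctx;
        printf("got signal %d carrying value %d\n", sig, info->si_value.sival_int);
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;               /* ask for the extended siginfo_t */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGRTMIN, &sa, NULL);

        sigset_t block, old;
        sigemptyset(&block);
        sigaddset(&block, SIGRTMIN);
        sigprocmask(SIG_BLOCK, &block, &old);   /* hold the signal pending for now */

        union sigval val = { .sival_int = 42 };
        sigqueue(getpid(), SIGRTMIN, val);      /* queue the signal with a value */

        sigsuspend(&old);                       /* atomically unblock and wait for it */
        return 0;
    }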

Semaphores

1. SEMOPM: the number of semaphores that a single semop system call can operate on at once; if the nsops argument of semop exceeds this number, an E2BIG error is returned. The value of SEMOPM is system-specific; in Red Hat 8.0 it is 32.
2. SEMVMX: the maximum value of a semaphore; setting a semaphore value above this limit returns an ERANGE error. In Red Hat 8.0 the value is 32767.
3. SEMMNI, the maximum number of semaphore sets system-wide, and SEMMNS, the maximum number of semaphores system-wide. Exceeding either of these limits returns an ENOSPC error. In Red Hat 8.0 the value is 32000.
4. SEMMSL: the maximum number of semaphores in each semaphore set; in Red Hat 8.0 it is 250. SEMOPM and SEMVMX should be kept in mind when calling semop, SEMMNI and SEMMNS when calling semget, and SEMVMX also applies to semctl calls.

So if what needs to be shared is a buffer, and the buffer is large, a semaphore by itself cannot carry the data; it can only synchronize access to it.
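
A minimal sketch of a System V semaphore used as a mutex, not the article's own code: semget creates a set containing one semaphore, semctl initializes it to 1, and semop performs the P (wait) and V (post) operations; each call operates on a single semaphore, well within SEMOPM. The key 0x5678 is illustrative and error handling is abbreviated.

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    /* Some C libraries (including glibc) require the caller to define semun. */
    union semun {
        int val;
        struct semid_ds *buf;
        unsigned short *array;
    };

    int main(void)
    {
        int semid = semget(0x5678, 1, IPC_CREAT | 0666);   /* hypothetical key */
        if (semid < 0) { perror("semget"); return 1; }

        union semun arg = { .val = 1 };
        semctl(semid, 0, SETVAL, arg);                     /* initial value: 1 */

        struct sembuf p = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };  /* P: acquire */
        struct sembuf v = { .sem_num = 0, .sem_op = +1, .sem_flg = 0 };  /* V: release */

        semop(semid, &p, 1);
        /* ... critical section protected by the semaphore ... */
        semop(semid, &v, 1);

        semctl(semid, 0, IPC_RMID);                        /* remove the set */
        return 0;
    }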

Shared memory

Shared memory can be said to be the most useful form of inter-process communication and the fastest form of IPC. When two different processes A and B share memory, the same piece of physical memory is mapped into the address spaces of both A and B. Process A can immediately see updates that process B makes to the data in shared memory, and vice versa. Because multiple processes share the same region of memory, some synchronization mechanism is needed, such as mutexes or semaphores.
One obvious benefit of communicating through shared memory is efficiency, because processes can read and write the memory directly without any copying of the data. Communication methods such as pipes and message queues require four copies of the data between kernel and user space, whereas shared memory requires only two [1]: one from the input file into the shared memory region, and one from the shared memory region to the output file. In practice, processes sharing memory do not read and write a small amount of data and then tear down and re-establish the region for each new exchange; instead, the shared region is kept until the communication is finished, so the data stays in shared memory and is not written back to a file. Content in shared memory is typically written back to a file only when the mapping is removed. This is why communication through shared memory is so efficient.
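
A minimal System V shared memory sketch, assuming the illustrative key 0x9abc and a 4 KB segment: the process attaches the segment with shmat and writes to it directly; any other process that attaches the same key sees the same bytes. The synchronization (e.g. a semaphore) discussed above is omitted here for brevity.

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        int shmid = shmget(0x9abc, 4096, IPC_CREAT | 0666);  /* hypothetical key */
        if (shmid < 0) { perror("shmget"); return 1; }

        char *mem = shmat(shmid, NULL, 0);        /* map the segment into this process */
        if (mem == (char *)-1) { perror("shmat"); return 1; }

        strcpy(mem, "data visible to every attached process");
        printf("wrote: %s\n", mem);

        shmdt(mem);                               /* detach from our address space */
        shmctl(shmid, IPC_RMID, NULL);            /* mark the segment for removal */
        return 0;
    }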

Socket

I/O multiplexing gives a process the ability to be informed promptly when an I/O condition it cares about is satisfied. A common use of I/O multiplexing is when a process needs to handle multiple descriptors. One advantage is that the process does not block on an actual I/O call but on the select() call, and select() can handle multiple descriptors at once: if none of the descriptors it watches has I/O in a ready state, it blocks; if I/O is ready on one or more descriptors, select() returns without blocking, and the appropriate I/O is then performed on whichever descriptors are ready.
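
A minimal select() sketch, not from the original article: the process blocks on select() rather than on read(), waiting up to five seconds (an illustrative timeout) for standard input to become readable; more descriptors could be added to the set in the same way.

    #include <stdio.h>
    #include <sys/select.h>
    #include <unistd.h>

    int main(void)
    {
        fd_set readfds;
        struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };

        FD_ZERO(&readfds);
        FD_SET(STDIN_FILENO, &readfds);      /* further descriptors could be added here */

        /* Blocks until a watched descriptor is readable or the timeout expires. */
        int ready = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv);
        if (ready < 0) {
            perror("select");
            return 1;
        }
        if (ready == 0) {
            printf("no descriptor became readable within 5 seconds\n");
        } else if (FD_ISSET(STDIN_FILENO, &readfds)) {
            char buf[128];
            ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
            printf("read %zd bytes from stdin\n", n);
        }
        return 0;
    }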

The discussion so far has mainly covered the PF_INET domain, which provides inter-process communication across the Internet. A socket interface based on the UNIX domain (specifying PF_LOCAL as the domain when calling socket) provides inter-process communication within a single machine. The UNIX-domain socket interface has several advantages: it is typically about twice as fast as a TCP socket on the same host, and it also makes it possible to pass descriptors between processes. Any object referred to by a descriptor, such as a file, pipe, named pipe, or socket, can be passed through a UNIX-domain socket once we have obtained a descriptor for it in some way.
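
A minimal UNIX-domain sketch, not from the original article: socketpair() creates two connected descriptors in the PF_LOCAL domain that a parent and child use for communication after fork(). Passing descriptors with SCM_RIGHTS, mentioned above, requires more ancillary-data plumbing and is omitted here.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];
        char buf[64];

        if (socketpair(PF_LOCAL, SOCK_STREAM, 0, sv) < 0) {
            perror("socketpair");
            return 1;
        }

        if (fork() == 0) {                     /* child uses sv[1] */
            close(sv[0]);
            const char *msg = "hello over a UNIX domain socket";
            write(sv[1], msg, strlen(msg));
            close(sv[1]);
            _exit(0);
        }

        close(sv[1]);                          /* parent uses sv[0] */
        ssize_t n = read(sv[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("parent read: %s\n", buf);
        }
        close(sv[0]);
        wait(NULL);
        return 0;
    }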

Raw sockets provide functionality that the ordinary socket interfaces do not (see the sketch after this list):

    • A raw socket can read and write certain control-protocol packets, such as ICMPv4, which makes some special functionality possible;
    • A raw socket can read and write special IPv4 datagrams. The kernel normally processes only datagrams with a few specific protocol fields; datagrams with other protocol fields have to be read and written through raw sockets;
    • It is also possible to construct your own IPv4 header through the raw socket interface;
    • Root permission is required to create a raw socket.
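
A minimal raw-socket sketch, not from the original article: a raw IPv4 socket bound to the ICMP protocol receives incoming ICMP packets (with the IP header included). As the list notes, this requires root (or the CAP_NET_RAW capability), and error handling is abbreviated.

    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
        if (fd < 0) {
            perror("socket (raw sockets need root)");
            return 1;
        }

        unsigned char pkt[1500];
        ssize_t n = recv(fd, pkt, sizeof(pkt), 0);   /* one ICMP packet, IP header included */
        if (n > 0)
            printf("received a %zd-byte ICMP packet\n", n);

        close(fd);
        return 0;
    }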

Access to the data link layer allows a user to monitor all packets on the local wire without any special hardware. Under Linux, reading data-link-layer packets requires creating a socket of type SOCK_PACKET, and requires root access.
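
A minimal sketch of such a packet socket. The text describes the older SOCK_PACKET interface; the sketch instead uses the PF_PACKET/SOCK_RAW form that current Linux kernels prefer for the same purpose, which is an assumption on my part, and it likewise requires root.

    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* ETH_P_ALL: receive frames of every protocol on every interface */
        int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) {
            perror("socket (packet sockets need root)");
            return 1;
        }

        unsigned char frame[2048];
        ssize_t n = recvfrom(fd, frame, sizeof(frame), 0, NULL, NULL);
        if (n > 0)
            printf("captured a %zd-byte link-layer frame\n", n);

        close(fd);
        return 0;
    }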
