Among the ways Linux processes communicate, shared memory is one of the fastest IPC mechanisms, so it is the usual choice when a large amount of data must be transferred between processes. A shared memory segment is a separate region of memory with its own kernel data structure, which records, among other things, the segment's access permissions, its size, and the time of its most recent access.
Why is shared memory the fastest form of IPC? Consider the following diagram. It shows that moving data from one file to another through a pipe (FIFO/message queue) requires four copies. First, the server copies the data from the input file into a server-side buffer; second, it copies the data from that buffer into the pipe (FIFO/message queue); third, the client copies the data from the pipe into a client-side buffer; and fourth, the client copies the data from that buffer into the output file. This is the message delivery path for the non-shared-memory mechanisms.

Now consider how the same transfer works with shared memory, as shown in the next diagram. With shared memory the data is copied only twice: once from the input file into the shared memory segment, and once from the shared memory segment into the output file. Eliminating the intermediate copies greatly improves the efficiency of data transfer.
Shared memory is the fastest form of IPC.