Overview of Shared Memory
Shared memory is the fastest form of IPC because, once the mapping is set up, data delivery no longer involves the kernel: processes do not have to issue system calls into the kernel to pass data to each other.
Each process maps the shared memory region into its own address space; because those mappings all refer to the same physical memory, the processes can exchange data through it. Shared libraries are implemented the same way: many processes use the same function, such as printf, so there is a single copy of printf.o in physical memory, and every process maps that one copy into its own address space and shares it.
To pass data with a pipe or a message queue, the data is copied twice: first from the sender's user-space buffer into a kernel buffer, then from the kernel buffer into the receiver's user-space buffer.
To pass data with shared memory, one process simply writes into the mapped region and the other reads from it directly; the data is never copied through an intermediate kernel buffer.
Passing data through shared memory therefore reduces the number of times the processes must enter the kernel, which is why it is more efficient than message queues and pipes.