UNIX shared memory

Shared memory is the most useful and fastest form of IPC between processes. For two processes A and B to share memory means that the same region of physical memory is mapped into the address spaces of both A and B. Process A immediately sees any update process B makes to the shared data, and vice versa. Because multiple processes share the same memory region, a synchronization mechanism is required; either mutex locks or semaphores can be used.
One obvious advantage of shared-memory communication is efficiency: processes read and write the memory directly, with no data copying. Communication methods such as pipes and message queues require four copies of the data between kernel and user space, while shared memory needs only two [1]: one from the input file into the shared region, and one from the shared region to the output file. In practice, the mapping is not torn down after each small read or write and re-established for the next exchange; instead, the shared region is kept until communication is complete. The data therefore stays in shared memory and is not written back to a file while communication proceeds; the contents are typically written back to the file only when the mapping is removed. For these reasons, communication through shared memory is very efficient [2].

Shared memory is a communication method among multiple processes on Unix. It is usually used for communication among the processes of a single program, but in fact multiple independent programs can also exchange information through shared memory. This article describes how to share memory among multiple programs in a Client/Server mode.
Problem Analysis
To share memory among several programs, the first problem to solve is how to let every program access the same memory segment and the same semaphore. A shared memory ID is obtained by calling shmget(key_t key, size_t size, int shmflg), and a semaphore ID by calling semget(key_t key, int nsems, int semflg). As long as both calls use the same key value, every program reaches the same objects. UNIX provides key_t ftok(const char *path, int id) to generate key values: if every program calls it with the same arguments, they naturally obtain the same key. In this example, each program uses key = ftok("/", 0) to get the same key, and from that key the same shared memory ID and semaphore ID.
The second problem is how to control concurrent access to the shared memory by multiple programs. The example in this article simulates a Client/Server mode in which one server generates data and multiple clients read it. The conventional approach is a single semaphore that treats access to the shared memory as a critical section: a program performs a P() operation on entry to obtain the lock and a V() operation on exit to release it. But this raises two problems. First, it makes every program equal, whereas in reality the server should have higher priority than the clients. In a stock-market application, for example, the server periodically updates the market data in shared memory, while each client is a CGI program that reads the data at a customer's request and returns it; here the server must not have to wait until every client process has finished reading before it can write, or the data the clients obtain would be outdated. Second, since every client only reads, there is no need for mutual exclusion among the clients.
The solution to both problems: only the server performs P() and V(). The semaphore's initial value is 0; the P() operation adds one and V() subtracts one. Before reading the shared memory, a client waits for the semaphore value to become 0. Thus the server's P() always succeeds, and after the server's P(), clients that have not yet entered the critical section can read only after the server executes V(). In this way the server takes precedence over the clients, and the clients do not exclude one another. This creates another problem: when the server starts writing, some clients may already be inside the critical section and could read incomplete data. The example therefore assumes that the client programs are relatively simple, do not block, and can finish a read within one time slice; under this assumption the number of clients inside the critical section is bounded, and if the server waits one time slice after its P() (the sleep after P() in the example), any client already inside can leave the critical section, ruling the problem out. Many CGI programs satisfy this assumption. A client that does not can spawn a subprocess to access the shared memory, provided the subprocess's execution time meets the requirement.
Application Instance
Below is example code that implements shared memory among multiple programs:
1. Server program
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/sem.h>

#define SEGSIZE 1024
#define READTIME 1

union semun {
    int val;
    struct semid_ds *buf;
    unsigned short *array;
};

/* Create a semaphore */
int sem(key_t key) {
    union semun su;
    int semid;
    su.val = 0;
    semid = semget(key, 1, IPC_CREAT | 0666);
    if (semid == -1) {
        printf("create semaphore error\n");
        exit(-1);
    }
    /* Initialize the semaphore to 0 */
    semctl(semid, 0, SETVAL, su);
    return semid;
}

/* Delete the semaphore */
void d_sem(int semid) {
    semctl(semid, 0, IPC_RMID, 0);
}

int P(int semid) {
    struct sembuf sops = {0, +1, IPC_NOWAIT};
    return semop(semid, &sops, 1);
}

int V(int semid) {
    struct sembuf sops = {0, -1, IPC_NOWAIT};
    return semop(semid, &sops, 1);
}

int main(void) {
    key_t key;
    int shmid, semid;
    char *shm;
    char msg[7] = "data ";
    int i;
    struct shmid_ds buf;

    key = ftok("/", 0);
    shmid = shmget(key, SEGSIZE, IPC_CREAT | 0604);
    if (shmid == -1) {
        printf("create shared memory error\n");
        return -1;
    }
    shm = (char *)shmat(shmid, 0, 0);
    if (shm == (char *)-1) {
        printf("attach shared memory error\n");
        return -1;
    }
    semid = sem(key);
    for (i = 0; i <= 3; i++) {
        sleep(1);
        P(semid);
        sleep(READTIME);  /* wait one time slice so clients already inside can finish */
        msg[5] = '0' + i;
        memcpy(shm, msg, sizeof(msg));
        sleep(58);
        V(semid);
    }
    shmdt(shm);
    shmctl(shmid, IPC_RMID, &buf);
    d_sem(semid);
    return 0;
}
2. Client program

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/sem.h>

#define SEGSIZE 1024

union semun {
    int val;
    struct semid_ds *buf;
    unsigned short *array;
};

/* Print the elapsed execution time */
void secondpass(void) {
    static long start = 0;
    time_t timer;
    if (start == 0) {
        timer = time(NULL);
        start = (long)timer;
        printf("now start\n");
    }
    printf("second: %ld\n", (long)time(NULL) - start);
}

/* Look up the existing semaphore */
int sem(key_t key) {
    int semid = semget(key, 0, 0);
    if (semid == -1) {
        printf("get semaphore error\n");
        exit(-1);
    }
    return semid;
}

/* Block until the semaphore value is 0 */
void waitv(int semid) {
    struct sembuf sops = {0, 0, 0};
    semop(semid, &sops, 1);
}

int main(void) {
    key_t key;
    int shmid, semid;
    char *shm;
    int i;

    key = ftok("/", 0);
    shmid = shmget(key, SEGSIZE, 0);
    if (shmid == -1) {
        printf("get shared memory error\n");
        return -1;
    }
    shm = (char *)shmat(shmid, 0, 0);
    if (shm == (char *)-1) {
        printf("attach shared memory error\n");
        return -1;
    }
    semid = sem(key);
    for (i = 0; i < 3; i++) {
        sleep(2);
        waitv(semid);
        printf("the msg get is\n%s\n", shm + 1);
        secondpass();
    }
    shmdt(shm);
    return 0;
}

To be continued......

Shared memory under UNIX

//**********************************************
// Version: 1.0.0
// Author: Zhangcf@lianchuang.com
// Copyright: Copyleft, free
// Purpose: sharedmem
// Date: 2004-9-3
//**********************************************

#ifndef CSHAREDMEM_H_
#define CSHAREDMEM_H_

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/sem.h>
#include <string.h>
#include <stdio.h>
#include <pthread.h>

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

class CSharedMem
{
public:
    union semun
    {
        int val;
        struct semid_ds *buf;
        unsigned short int *array;
        struct seminfo *__buf;
    };

    CSharedMem();
    ~CSharedMem();

private:
    // Create a semaphore
    int create_sem(key_t key);

    // Delete the semaphore
    void delete_sem();

    int P();
    int V();

    int m_nSegSize;           // size of the shared segment
    char *m_pszContent;       // pointer to the mapped area
    int m_nSemId;             // semaphore id
    pthread_mutex_t m_hMutex; // thread lock

public:
    bool InitSegSize(int nSegSize = 1024);
    bool Read(char *pszContent, int nReadByteNum, bool bDeleteContent = true);
    bool Writer(char *pszContent, int nWriterByteNum);
};

#endif // CSHAREDMEM_H_

 


//**********************************************
// Version: 1.0.0
// Author: Zhangcf@lianchuang.com
// Copyright: Copyleft, free
// Purpose: sharedmem
// Date: 2004-9-3
//**********************************************

#include "sharedmem.h"

CSharedMem::CSharedMem()
{
    m_pszContent = NULL;
    /* Initialize the mutex with the default attributes */
    pthread_mutex_init(&m_hMutex, NULL);
    printf("\nCSharedMem start...");
}

CSharedMem::~CSharedMem()
{
    munmap(m_pszContent, sizeof(char) * m_nSegSize);
    printf("\nmunmap ok\n");
    delete_sem();
    pthread_mutex_destroy(&m_hMutex);
    printf("\nCSharedMem end\n");
}

// Create a semaphore
int CSharedMem::create_sem(key_t kKey)
{
    union semun sem;
    sem.val = 0;
    // Note: IPC_PRIVATE is used here, so the key argument is effectively ignored
    m_nSemId = semget(IPC_PRIVATE, 1, IPC_CREAT | 0666);
    if (m_nSemId == -1)
    {
        printf("create semaphore error\n");
        return -1;
    }
    // Initialize the semaphore
    semctl(m_nSemId, 0, SETVAL, sem);
    return m_nSemId;
}

// Delete the semaphore
void CSharedMem::delete_sem()
{
    semctl(m_nSemId, 0, IPC_RMID, 0);
}

int CSharedMem::P()
{
    // struct sembuf sops = {0, +1, IPC_NOWAIT};
    // return semop(m_nSemId, &sops, 1);
    printf("\nstart to lock...\n");
    pthread_mutex_lock(&m_hMutex);
    printf("...locked\n");
    return 0;
}

int CSharedMem::V()
{
    // struct sembuf sops = {0, -1, IPC_NOWAIT};
    // return semop(m_nSemId, &sops, 1);
    printf("\nstart to unlock...\n");
    pthread_mutex_unlock(&m_hMutex);
    printf("...unlocked\n");
    return 0;
}

bool CSharedMem::InitSegSize(int nSegSize)
{
    // Create the semaphore
    key_t kKey = ftok("/", 0);
    int nRet = create_sem(kKey);
    if (nRet == -1)
        return false;

    const char *pFileName = "./sharedmem1111";
    m_nSegSize = nSegSize;

    // Extend the file to the mapping size by writing a '\0' at the last byte
    int fd = open(pFileName, O_CREAT | O_APPEND | O_RDWR, 0777);
    lseek(fd, sizeof(char) * m_nSegSize - 1, SEEK_SET);
    write(fd, "", 1);

    void *pVoid = mmap(NULL, sizeof(char) * m_nSegSize,
                       PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);

    if (pVoid == MAP_FAILED)
    {
        printf("mmap error\n");
        return false;
    }
    m_pszContent = (char *)pVoid;

    // strncpy(m_pszContent, "abcdef", 5);
    printf("\ninitialize over: %s\n", m_pszContent);
    return true;
}

bool CSharedMem::Read(char *pszContent, int nReadByteNum, bool bDeleteContent)
{
    P();
    memcpy(pszContent, m_pszContent, nReadByteNum);
    pszContent[nReadByteNum] = '\0';

    if (bDeleteContent)
        memset(m_pszContent, '\0', m_nSegSize); // clear the whole segment (the original passed bDeleteContent as the length, a bug)
    V();
    return true;
}

bool CSharedMem::Writer(char *pszContent, int nWriterByteNum)
{
    P();
    strncat(m_pszContent, pszContent, nWriterByteNum);
    V();
    return true;
}
