Linux Thread Synchronization

Source: Internet
Author: User

When multiple threads share the same memory, we must make sure that each thread sees a consistent view of the shared data at all times. If a variable is used by only one thread, or is read-only for every thread, there is no consistency problem. But when two or more threads can read and write the same variable, their accesses must be coordinated so that no thread reads an invalid value and every modification takes effect.

This coordination is what we call thread synchronization.

Thread synchronization has three common mechanisms: mutexes (mutex), read/write locks (rwlock), and condition variables (cond).

A mutex has two states, locked and unlocked, and guarantees that only one thread accesses the data at a time.

A read/write lock has three states: read-locked, write-locked, and unlocked. Only one thread at a time can hold a read/write lock in write mode, but multiple threads can hold it in read mode simultaneously.

Condition variables give multiple threads a place to rendezvous. Used together with a mutex, they let threads wait for a particular condition to occur in a race-free way.

Mutex

In essence, a mutex is a lock that protects access to a shared resource.

1. Initialization:

In Linux, the thread mutex data type is pthread_mutex_t. Initialize it before use:

For a statically allocated mutex, either assign the constant PTHREAD_MUTEX_INITIALIZER or call pthread_mutex_init.

For a dynamically allocated mutex, initialize it with pthread_mutex_init after allocating the memory (malloc), and call pthread_mutex_destroy before releasing the memory (free).

  • Prototypes:

    • int pthread_mutex_init(pthread_mutex_t *restrict mutex, const pthread_mutexattr_t *restrict attr);
    • int pthread_mutex_destroy(pthread_mutex_t *mutex);
  • Header file: <pthread.h>
  • Return value: 0 on success, an error number on failure.
  • Note: to initialize the mutex with default attributes, simply pass NULL for attr. Other attribute values are explained later.

2. Mutex operations:

To access a shared resource, first lock the mutex. If the mutex is already locked, the calling thread blocks until the mutex is unlocked. After accessing the shared resource, unlock the mutex.

First, the locking functions:

  • Header file: <pthread.h>
  • Prototypes:
    • int pthread_mutex_lock(pthread_mutex_t *mutex);
    • int pthread_mutex_trylock(pthread_mutex_t *mutex);
  • Return value: 0 on success, an error number on failure.
  • Description: pthread_mutex_trylock is the non-blocking variant. If the mutex is unlocked, trylock locks it and the caller gains access to the shared resource; if the mutex is already locked, trylock does not block but returns EBUSY immediately, indicating that the shared resource is busy.

Next, the unlock function:

  • Header file: <pthread.h>
  • Prototype: int pthread_mutex_unlock(pthread_mutex_t *mutex);
  • Return value: 0 on success, an error number on failure.

3. Deadlock:

Deadlock can occur when multiple interdependent locks exist, for example when one thread tries to acquire mutexes in the reverse order of another thread. Avoiding deadlock is important whenever mutexes are used.

In general, there are a few unwritten basic principles:

  • Acquire the lock before operating on the shared resource.
  • Release the lock after the operation completes.
  • Hold the lock for as short a time as possible.
  • When multiple locks are needed, acquire them in the same global order in every thread, e.g. always A, then B, then C.
  • When a thread exits on an error path, it should release any locks it has acquired.


Semaphores

A semaphore is essentially a non-negative integer counter used to control access to a shared resource. When a unit of the resource becomes available, sem_post() increments the semaphore; the resource may be used only while the semaphore's value is greater than 0, and after taking a unit, sem_wait() decrements the semaphore. sem_trywait() plays the same role as pthread_mutex_trylock(): it is the non-blocking version of sem_wait(). The relevant functions are declared in <semaphore.h>.

The semaphore data type is sem_t. The sem_init() function initializes a semaphore; its prototype is:

int sem_init(sem_t *sem, int pshared, unsigned int value);

sem is a pointer to the semaphore. If pshared is nonzero, the semaphore is shared between processes; otherwise it is shared only among the threads of the current process. value is the semaphore's initial value.

sem_post(sem_t *sem) increments the semaphore's value. If threads are blocked on the semaphore, calling it unblocks one of them; which one is chosen is determined by the thread scheduling policy.

sem_wait(sem_t *sem) blocks the current thread until the semaphore's value is greater than 0, then decrements it by one, indicating that one unit of the resource has been consumed. sem_trywait(sem_t *sem) is the non-blocking version: it decrements the semaphore if its value is greater than 0, and otherwise returns an error immediately instead of blocking.

sem_destroy(sem_t *sem) releases the semaphore.
Here is an example of using semaphores. There are four threads in total: two read data from files into a shared buffer, and the other two take data from the buffer and process it differently (one adds, one multiplies).
/* File sem.c */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#define MAXSTACK 100
int stack[MAXSTACK][2];
int size = 0;
sem_t sem;

/* Read data from file 1.dat; each time a pair is read, post the semaphore */
void *ReadData1(void *arg) {
    FILE *fp = fopen("1.dat", "r");
    while (!feof(fp)) {
        fscanf(fp, "%d %d", &stack[size][0], &stack[size][1]);
        sem_post(&sem);
        ++size;
    }
    fclose(fp);
    return NULL;
}

/* Read data from file 2.dat */
void *ReadData2(void *arg) {
    FILE *fp = fopen("2.dat", "r");
    while (!feof(fp)) {
        fscanf(fp, "%d %d", &stack[size][0], &stack[size][1]);
        sem_post(&sem);
        ++size;
    }
    fclose(fp);
    return NULL;
}

/* Block until the buffer has data; consume one entry, then keep waiting */
void *HandleData1(void *arg) {
    while (1) {
        sem_wait(&sem);
        printf("Plus: %d + %d = %d\n", stack[size][0], stack[size][1],
               stack[size][0] + stack[size][1]);
        --size;
    }
    return NULL;
}

void *HandleData2(void *arg) {
    while (1) {
        sem_wait(&sem);
        printf("Multiply: %d * %d = %d\n", stack[size][0], stack[size][1],
               stack[size][0] * stack[size][1]);
        --size;
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2, t3, t4;
    sem_init(&sem, 0, 0);
    pthread_create(&t1, NULL, HandleData1, NULL);
    pthread_create(&t2, NULL, HandleData2, NULL);
    pthread_create(&t3, NULL, ReadData1, NULL);
    pthread_create(&t4, NULL, ReadData2, NULL);
    /* Keep the program from exiting too early by waiting indefinitely */
    pthread_join(t1, NULL);
    return 0;
}







In Linux, we compile with gcc sem.c -o sem -lpthread to produce the executable sem. Suppose the data files 1.dat and 2.dat contain 1 2 3 4 5 6 7 8 9 10 and -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 respectively; running sem produces output such as:
Multiply: -1 * -2 = 2
Plus: -1 + -2 = -3
Multiply: 9 * 10 = 90
Plus: -9 + -10 = -19
Multiply: -7 * -8 = 56
Plus: -5 + -6 = -11
Multiply: -3 * -4 = 12
Plus: 9 + 10 = 19
Plus: 7 + 8 = 15
Plus: 5 + 6 = 11

The output shows the competition between the threads: the values do not appear in their original order because size is modified by several threads without protection. This is a common pitfall that deserves attention in multi-threaded programming.






Read/Write Lock

As mentioned at the start of this thread synchronization series, read/write locks allow higher concurrency because they have three states.

1. Features:

Only one thread at a time can hold a read/write lock in write mode, but multiple threads can hold it in read mode simultaneously:

  • When a read/write lock is write-locked, all threads that try to lock it block until it is unlocked.
  • When a read/write lock is read-locked, all threads that try to lock it in read mode are granted access, but a thread that wants to lock it in write mode must block until all readers have released the lock.
  • Typically, when a read/write lock is read-locked and another thread attempts to write-lock it, the lock blocks subsequent read-lock requests. Otherwise, with readers arriving continuously, the lock could stay in read mode indefinitely and pending write-lock requests would be blocked for a long time.

2. Applicability:

Read/write locks suit data structures that are read far more often than they are written. Because the lock is shared in read mode and exclusive in write mode, it is also called a shared-exclusive lock.

3. Initialization and destruction:

#include <pthread.h>

int pthread_rwlock_init(pthread_rwlock_t *restrict rwlock, const pthread_rwlockattr_t *restrict attr);
int pthread_rwlock_destroy(pthread_rwlock_t *rwlock);

Both return 0 on success, an error number on failure.

As with mutexes, before releasing the memory occupied by a read/write lock, you must call pthread_rwlock_destroy to clean it up and free the resources allocated by init.

 

4. Reading and writing:

#include <pthread.h>

int pthread_rwlock_rdlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_wrlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_unlock(pthread_rwlock_t *rwlock);

All return 0 on success, an error number on failure.

These three functions acquire a read lock, acquire a write lock, and release the lock, respectively. The two lock-acquisition functions block; their non-blocking counterparts are:

#include <pthread.h>

int pthread_rwlock_tryrdlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_trywrlock(pthread_rwlock_t *rwlock);

These non-blocking variants return 0 when the lock is acquired; otherwise they fail immediately with the error EBUSY instead of blocking.


Condition variable

A condition variable has two parts: the condition and the variable. The condition itself is protected by a mutex: a thread must lock the mutex before changing the condition's state.

1. Initialization:

The condition variable data type is pthread_cond_t. It must be initialized before use, in one of two ways:

  • Static: assign the constant PTHREAD_COND_INITIALIZER to a statically allocated condition variable.
  • Dynamic: initialize with pthread_cond_init, and clean up with pthread_cond_destroy before the condition variable's memory is released.
#include <pthread.h>

int pthread_cond_init(pthread_cond_t *restrict cond, const pthread_condattr_t *restrict attr);
int pthread_cond_destroy(pthread_cond_t *cond);

Both return 0 on success, an error number on failure.

When the attr argument of pthread_cond_init is NULL, a condition variable with default attributes is created. Non-default attributes are discussed later.

 

2. Waiting on the condition:

#include <pthread.h>

int pthread_cond_wait(pthread_cond_t *restrict cond, pthread_mutex_t *restrict mutex);
int pthread_cond_timedwait(pthread_cond_t *restrict cond, pthread_mutex_t *restrict mutex, const struct timespec *restrict timeout);

Both return 0 on success, an error number on failure.

These two functions perform a blocking wait and a timed wait, respectively.

The wait functions wait for the condition to become true. The mutex passed to pthread_cond_wait protects the condition; the caller passes it in locked. The function places the calling thread on the list of threads waiting on the condition and then unlocks the mutex, and these two operations are atomic. This closes the window between checking the condition and going to sleep, so the thread cannot miss a change to the condition.

When pthread_cond_wait returns, the mutex is locked again.

 

3. Notifying the condition:

#include <pthread.h>

int pthread_cond_signal(pthread_cond_t *cond);
int pthread_cond_broadcast(pthread_cond_t *cond);

Both return 0 on success, an error number on failure.

These two functions notify threads that the condition has been satisfied; calling them is also described as signaling the thread or the condition. pthread_cond_signal wakes at least one waiting thread, while pthread_cond_broadcast wakes all of them. Note that you must change the condition's state before signaling.
