Introduction to multi-thread programming in Linux


3. Condition Variables

The previous section described how to use a mutex to share data and communicate between threads. One obvious drawback of a mutex is that it has only two states: locked and unlocked. Condition variables make up for this by allowing a thread to block and wait for another thread to send it a signal, and they are almost always used together with a mutex. When a thread blocks on a condition variable because the condition is not met, it releases the corresponding mutex and waits for the condition to change. Once another thread changes the condition, it signals the condition variable to wake up one or more threads blocked on it; the awakened threads re-acquire the mutex and re-test whether the condition now holds. In general, condition variables are used for synchronization between threads.

A condition variable has the type pthread_cond_t. The pthread_cond_init() function initializes a condition variable. Its prototype is:

extern int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *cond_attr);

cond is a pointer to a pthread_cond_t structure, and cond_attr is a pointer to a pthread_condattr_t structure. pthread_condattr_t is the attribute structure of the condition variable. As with a mutex, it can be used to specify whether the condition variable is usable within one process or between processes. The default value is PTHREAD_PROCESS_PRIVATE, which means the condition variable is used by the threads of a single process. Note that a condition variable may be reinitialized or destroyed only when it is no longer in use. The function that destroys a condition variable is pthread_cond_destroy(pthread_cond_t *cond).
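As a quick illustration (a minimal sketch, not part of the original article), a condition variable can be set up either with pthread_cond_init() or, when the default attributes are enough, with the static initializer PTHREAD_COND_INITIALIZER:

#include <pthread.h>

/* static initialization with default (process-private) attributes */
pthread_cond_t cond_static = PTHREAD_COND_INITIALIZER;

pthread_cond_t cond_dynamic;

void setup(void)
{
    /* dynamic initialization; NULL means default attributes */
    pthread_cond_init(&cond_dynamic, NULL);
}

void teardown(void)
{
    /* destroy only after no thread is waiting on it any more */
    pthread_cond_destroy(&cond_dynamic);
}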

The pthread_cond_wait() function blocks a thread on a condition variable. Its prototype is:

extern int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);

The calling thread releases the lock pointed to by mutex and blocks on the condition variable cond. It can be awakened by pthread_cond_signal() or pthread_cond_broadcast(). Note, however, that the condition variable only blocks and wakes threads; the user must supply the actual predicate to test, for example whether the value of a variable is zero, as shown in the example below. After a thread is awakened, it must re-check whether the condition is met; if it is not, the thread should block again and wait for the next wake-up. This is normally implemented with a while statement.

Another function used to block threads is pthread_cond_timedwait(). Its prototype is:

extern int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex, const struct timespec *abstime);

It takes one more parameter than pthread_cond_wait(): an absolute time. When the time given in abstime is reached, the wait ends (with the error ETIMEDOUT) even if the condition variable has not been signaled.
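For example (a minimal sketch, assuming the default CLOCK_REALTIME clock for the condition variable), a thread can wait for a flag for at most five seconds:

#include <pthread.h>
#include <time.h>
#include <errno.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int ready = 0;

int wait_until_ready(void)
{
    struct timespec abstime;
    int rc = 0;

    clock_gettime(CLOCK_REALTIME, &abstime);   /* abstime is an absolute deadline, not a delay */
    abstime.tv_sec += 5;

    pthread_mutex_lock(&lock);
    while (!ready && rc != ETIMEDOUT)
        rc = pthread_cond_timedwait(&cond, &lock, &abstime);
    pthread_mutex_unlock(&lock);

    return ready;   /* 1 if signaled in time, 0 if the deadline passed */
}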

The prototype of pthread_cond_signal() is:

extern int pthread_cond_signal(pthread_cond_t *cond);

It wakes up one thread blocked on the condition variable cond. When several threads are blocked on the condition variable, which one is awakened is determined by the thread scheduling policy. Note that the condition must be tested and signaled while holding the mutex that protects it; otherwise the signal could be delivered between the test of the condition and the call to pthread_cond_wait(), and the waiting thread would then wait forever. The following is a simple example using pthread_cond_wait() and pthread_cond_signal().

pthread_mutex_t count_lock;
pthread_cond_t count_nonzero;
unsigned count;

void decrement_count()
{
    pthread_mutex_lock(&count_lock);
    while (count == 0)
        pthread_cond_wait(&count_nonzero, &count_lock);
    count = count - 1;
    pthread_mutex_unlock(&count_lock);
}

void increment_count()
{
    pthread_mutex_lock(&count_lock);
    if (count == 0)
        pthread_cond_signal(&count_nonzero);
    count = count + 1;
    pthread_mutex_unlock(&count_lock);
}

When count is 0, decrement_count() blocks in pthread_cond_wait() and releases the mutex count_lock. When increment_count() is then called, pthread_cond_signal() signals the condition variable and wakes decrement_count() out of its wait. Readers can try running the two functions in two separate threads and observe the result.
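A minimal test driver might look as follows (a sketch, not part of the original article; the wrapper names consumer and producer are made up here). It assumes count_lock, count_nonzero and count from the listing above, initialized for example with PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER and count = 0:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *consumer(void *arg)
{
    decrement_count();            /* blocks until count becomes nonzero */
    printf("consumer: decremented count\n");
    return NULL;
}

void *producer(void *arg)
{
    sleep(1);                     /* let the consumer block first */
    increment_count();            /* wakes the consumer */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, consumer, NULL);
    pthread_create(&t2, NULL, producer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}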

The pthread_cond_broadcast(pthread_cond_t *cond) function wakes up all threads blocked on the condition variable cond. After being awakened, these threads compete again for the associated mutex, so this function should be used with care.
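A typical use (again only a sketch; the flag name work_ready is an assumption made here) is to let several worker threads wait for a shared flag and wake them all at once when it changes:

#include <pthread.h>

pthread_mutex_t ready_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t ready_cond = PTHREAD_COND_INITIALIZER;
int work_ready = 0;

/* called by every worker thread */
void wait_for_work(void)
{
    pthread_mutex_lock(&ready_lock);
    while (!work_ready)                     /* each awakened worker re-tests the flag */
        pthread_cond_wait(&ready_cond, &ready_lock);
    pthread_mutex_unlock(&ready_lock);
}

/* called once by the controlling thread */
void release_workers(void)
{
    pthread_mutex_lock(&ready_lock);
    work_ready = 1;
    pthread_cond_broadcast(&ready_cond);    /* wake every waiting worker */
    pthread_mutex_unlock(&ready_lock);
}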

4. Semaphores

A semaphore is essentially a non-negative integer counter used to control access to a shared resource. When the resource becomes available, sem_post() is called to increase the semaphore. The resource can be used only when the semaphore value is greater than zero; before using it, a thread calls sem_wait() to decrease the semaphore. The sem_trywait() function plays the same role as pthread_mutex_trylock(): it is the non-blocking version of sem_wait(). The functions related to semaphores are introduced one by one below; they are declared in the header file /usr/include/semaphore.h.

The data type of a semaphore is sem_t, which is essentially a counter. The sem_init() function initializes a semaphore. Its prototype is:

extern int sem_init(sem_t *sem, int pshared, unsigned int value);

sem is a pointer to the semaphore structure. If pshared is not zero, the semaphore is shared between processes; otherwise it is shared only among the threads of the current process. value is the initial value of the semaphore.

The sem_post(sem_t *sem) function increases the semaphore value by one. If threads are blocked on the semaphore, calling this function unblocks one of them; which one is chosen is again determined by the thread scheduling policy.

The sem_wait(sem_t *sem) function blocks the current thread until the value of the semaphore sem is greater than zero; when the wait ends, the value of sem is decreased by one, indicating that one unit of the shared resource has been consumed. The sem_trywait(sem_t *sem) function is the non-blocking version of sem_wait(): if the value is greater than zero it decrements it and returns immediately, otherwise it returns an error instead of blocking.
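For example (a sketch only, assuming a semaphore sem that has already been initialized with sem_init()), sem_trywait() can be used to poll for a free resource without blocking:

#include <semaphore.h>
#include <errno.h>
#include <stdio.h>

extern sem_t sem;   /* assumed to be initialized elsewhere */

void try_consume(void)
{
    if (sem_trywait(&sem) == 0) {
        /* the semaphore was greater than zero and has been decremented */
        printf("got a resource\n");
    } else if (errno == EAGAIN) {
        /* nothing available right now; do something else instead of blocking */
        printf("no resource available\n");
    }
}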

The sem_destroy(sem_t *sem) function destroys a semaphore.

Here is an example that uses semaphores. There are four threads in total: two of them read data from files into a shared buffer, and the other two read data from the buffer and process it in different ways (addition and multiplication).

/* file sem.c */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#define MAXSTACK 100
int stack[MAXSTACK][2];
int size = 0;
sem_t sem;

/* read data from file 1.dat; each time a record is read, the semaphore is incremented */
void readdata1(void)
{
    FILE *fp = fopen("1.dat", "r");
    while (!feof(fp)) {
        fscanf(fp, "%d %d", &stack[size][0], &stack[size][1]);
        sem_post(&sem);
        ++size;
    }
    fclose(fp);
}

/* read data from file 2.dat */
void readdata2(void)
{
    FILE *fp = fopen("2.dat", "r");
    while (!feof(fp)) {
        fscanf(fp, "%d %d", &stack[size][0], &stack[size][1]);
        sem_post(&sem);
        ++size;
    }
    fclose(fp);
}

/* block until the buffer has data; after consuming a record, keep waiting for more */
void handledata1(void)
{
    while (1) {
        sem_wait(&sem);
        printf("Plus: %d + %d = %d\n", stack[size][0], stack[size][1],
               stack[size][0] + stack[size][1]);
        --size;
    }
}

void handledata2(void)
{
    while (1) {
        sem_wait(&sem);
        printf("Multiply: %d * %d = %d\n", stack[size][0], stack[size][1],
               stack[size][0] * stack[size][1]);
        --size;
    }
}

int main(void)
{
    pthread_t t1, t2, t3, t4;
    sem_init(&sem, 0, 0);
    pthread_create(&t1, NULL, (void *)handledata1, NULL);
    pthread_create(&t2, NULL, (void *)handledata2, NULL);
    pthread_create(&t3, NULL, (void *)readdata1, NULL);
    pthread_create(&t4, NULL, (void *)readdata2, NULL);
    /* keep the program from exiting too early; the handler threads never return */
    pthread_join(t1, NULL);
    return 0;
}

On Linux, compile it with gcc sem.c -o sem -lpthread to produce the executable sem. Suppose the data files 1.dat and 2.dat have been prepared in advance and contain 1 2 3 4 5 6 7 8 9 10 and -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 respectively. Running sem can then produce output such as the following:

Multiply: -1 * -2 = 2
Plus: -1 + -2 = -3
Multiply: 9 * 10 = 90
Plus: -9 + -10 = -19
Multiply: -7 * -8 = 56
Plus: -5 + -6 = -11
Multiply: -3 * -4 = 12
Plus: 9 + 10 = 19
Plus: 7 + 8 = 15
Plus: 5 + 6 = 11

The interleaving shows the competition between the threads: the values do not appear in their original order because size is modified concurrently by the different threads. This is exactly the kind of issue that needs attention in multi-threaded programming.
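One way to remove this race (a sketch, assuming a mutex named stack_lock is added to the program above, alongside the existing stack, size and sem globals) is to protect every access to size and stack with the mutex, so each record is produced and consumed atomically:

pthread_mutex_t stack_lock = PTHREAD_MUTEX_INITIALIZER;

/* producer side: push one record under the lock, then post the semaphore */
void push_record(int a, int b)
{
    pthread_mutex_lock(&stack_lock);
    stack[size][0] = a;
    stack[size][1] = b;
    ++size;
    pthread_mutex_unlock(&stack_lock);
    sem_post(&sem);
}

/* consumer side: wait for data, then pop one record under the lock */
void pop_record(int *a, int *b)
{
    sem_wait(&sem);
    pthread_mutex_lock(&stack_lock);
    --size;
    *a = stack[size][0];
    *b = stack[size][1];
    pthread_mutex_unlock(&stack_lock);
}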

  Summary

Multi-threaded programming is a very interesting and useful technique. NetAnts, one of the most commonly used download tools, is built on multithreading, and a multithreaded grep can be several times faster than a single-threaded one; there are many similar examples. I hope you can use multithreading to write efficient and practical programs.

Address: http://www.7dspace.com/doc/19/0601/200611802085785920_4.htm
