Linux Multithreaded Programming Tutorial (thread communication via mutexes, condition variables, and semaphores)

Source: Internet
Author: User

Thread classification

Threads can be divided into user-level threads and kernel-level threads according to where they are scheduled.

(1) User-level threads
User-level threads mainly solve the problem of context switching: the scheduling algorithm and scheduling process are decided entirely in user space and need no specific kernel support at runtime. Here the operating system typically provides a user-space thread library, which supplies thread creation, scheduling, and destruction, while the kernel still manages only processes. If one thread in a process makes a blocking system call, the whole process, including all its other threads, is blocked. The main disadvantage of user-level threads is that scheduling multiple threads within one process cannot exploit multiple processors.

(2) Kernel-level threads
Kernel-level threads allow threads in different processes to be scheduled under the same relative-priority scheduling policy, which exploits the concurrency of multiprocessors.
Most systems now let user-level and kernel-level threads coexist: a user-level thread can correspond to one or several kernel-level threads, the "one-to-one" or "many-to-many" model. This both satisfies the needs of multiprocessor systems and minimizes scheduling overhead.

The Linux thread implementation lives in the kernel, which provides the process-creation interface do_fork(). The kernel exposes two system calls, clone() and fork(), which ultimately invoke the do_fork() kernel API with different parameters. To implement threads, the kernel must let multiple processes (actually lightweight processes) share resources, so do_fork() accepts several flags, including CLONE_VM (share the memory space), CLONE_FS (share file-system information), CLONE_FILES (share the file-descriptor table), CLONE_SIGHAND (share the signal-handler table), and CLONE_PID (share the process ID, valid only for the kernel's process 0). When the fork system call is used, the kernel invokes do_fork() with no shared attributes, so the new process gets a fully separate running environment; when a thread is created with pthread_create(), all of these attributes are set and __clone() is ultimately invoked. The flags are passed through to do_fork() in the kernel, creating a "process" that shares its running environment with the caller, with only the stack private, supplied through __clone().

Linux threads exist in the kernel as lightweight processes with separate process-table entries; all creation, synchronization, and deletion operations are performed through the pthread library. The pthread library uses a manager thread (__pthread_manager(), one per process) to manage thread creation and termination, assign thread IDs, and send thread-related signals (such as cancellation); the main thread (the caller of pthread_create()) passes request information to the manager thread through a pipe.

Main function Description

1. creation and exit of threads

pthread_create: the thread creation function
int pthread_create (pthread_t *thread_id, const pthread_attr_t *attr, void *(*start_routine) (void *), void *arg);

The first argument is a pointer to the thread identifier, the second sets the thread attributes, the third is the start address of the function the thread will run, and the last is the argument passed to that function. Here our thread function takes no arguments, so the last argument is a null pointer. The second argument is also a null pointer, which creates the thread with default attributes. On success, pthread_create() returns 0; a nonzero return means creation failed, the common error codes being EAGAIN and EINVAL. The former indicates a system limit on creating new threads, such as too many threads; the latter indicates that the second argument holds an invalid thread-attribute value. After successful creation, the new thread runs the function given by argument three with the argument given by argument four, while the original thread continues with the next line of code.

pthread_join: waits for a thread to end.
Prototype: int pthread_join (pthread_t th, void **thread_return);
The first parameter is the identifier of the thread to wait for; the second is a user-defined pointer that can store the return value of that thread. This function blocks: the caller waits until the specified thread finishes, and when the function returns, the resources of the awaited thread are reclaimed. A thread can be joined by only one other thread, and it must be joinable (not detached).

pthread_exit function
A thread can end in two ways. One is that the function the thread runs simply returns, which ends the thread;
the other is an explicit call to pthread_exit(). Its prototype is void pthread_exit (void *retval); the only parameter is the thread's return code. As long as the second parameter of pthread_join(), thread_return, is not NULL, this value is passed into thread_return. Finally, a thread cannot be waited for by multiple threads: only the first thread to receive the signal returns correctly, and the remaining pthread_join() callers return the error code ESRCH.

2. Thread Properties

The second parameter of pthread_create() is the thread attribute object. Setting it to NULL selects the default attributes; otherwise multiple thread properties can be changed through it. These properties mainly include the binding (contention-scope) attribute, the detach attribute, the stack address, the stack size, and the priority. The system defaults are unbound, non-detached, a default 1 MB stack, and the same priority as the parent process. The basic concepts of the binding and detach attributes are explained first.

Binding attribute: Linux uses a "one-to-one" threading model, in which one user thread corresponds to one kernel thread. The binding attribute fixes a user thread to a particular kernel thread. Because CPU time-slice scheduling operates on kernel threads (that is, lightweight processes), a bound thread is guaranteed to always have a kernel thread corresponding to it when needed. By contrast, with the unbound attribute the relationship between user thread and kernel thread is not fixed; the system controls the assignment.

Detach attribute: the detach attribute determines how a thread terminates. In the non-detached (joinable) case, when a thread ends, the system resources it occupies are not released, that is, it has not really terminated; the resources are released only when pthread_join() returns for it. With the detach attribute set, the thread's system resources are released immediately when it ends.
Note that if you set a thread's detach attribute and the thread runs very quickly, it may terminate before pthread_create() returns, and after terminating it may hand its thread ID and system resources over to another thread, in which case the caller of pthread_create() gets the wrong thread ID.

To set the binding attribute:

int pthread_attr_init (pthread_attr_t *attr);
int pthread_attr_setscope (pthread_attr_t *attr, int scope);
int pthread_attr_getscope (const pthread_attr_t *attr, int *scope);

scope values:
PTHREAD_SCOPE_SYSTEM: bound; the thread competes with all threads in the system.
PTHREAD_SCOPE_PROCESS: unbound; the thread competes only with other threads in the same process.

To set the detach attribute:

int pthread_attr_setdetachstate (pthread_attr_t *attr, int detachstate);
int pthread_attr_getdetachstate (const pthread_attr_t *attr, int *detachstate);

detachstate values:
PTHREAD_CREATE_DETACHED: detached.
PTHREAD_CREATE_JOINABLE: joinable (non-detached).

To set the scheduling policy:

int pthread_attr_setschedpolicy (pthread_attr_t *attr, int policy);
int pthread_attr_getschedpolicy (const pthread_attr_t *attr, int *policy);

policy values:
SCHED_FIFO: first in, first out.
SCHED_RR: round robin.
SCHED_OTHER: implementation-defined (the default).

To set the priority:

int pthread_attr_setschedparam (pthread_attr_t *attr, const struct sched_param *param);
int pthread_attr_getschedparam (const pthread_attr_t *attr, struct sched_param *param);

3. Thread access Control

1) Mutex (mutual exclusion lock)
Synchronization between threads is achieved through a locking mechanism: only one thread at a time is allowed to execute a critical section of code.

int pthread_mutex_init (pthread_mutex_t *mutex, const pthread_mutexattr_t *mutexattr);
int pthread_mutex_lock (pthread_mutex_t *mutex);
int pthread_mutex_unlock (pthread_mutex_t *mutex);
int pthread_mutex_destroy (pthread_mutex_t *mutex);

(1) First initialize the lock with pthread_mutex_init(), or statically: pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
(2) Lock: pthread_mutex_lock() blocks waiting for the lock; pthread_mutex_trylock() returns EBUSY immediately if the lock is held.
(3) Unlock: pthread_mutex_unlock() requires that the mutex be in the locked state, and it must be called by the thread that locked it.
(4) Destroy the lock: pthread_mutex_destroy() (the mutex must be unlocked at this point, otherwise EBUSY is returned).

Mutexes are divided into recursive and non-recursive; this is the POSIX terminology, and another pair of names is reentrant and non-reentrant. There is no difference between the two as inter-thread synchronization tools; the only difference is that the same thread may lock a recursive mutex repeatedly, but must not lock a non-recursive mutex repeatedly.
Preferring the non-recursive mutex is not about performance but about expressing design intent. The performance difference is actually small: the non-recursive kind keeps one less counter and is only slightly faster. Locking a non-recursive mutex twice from the same thread leads immediately to deadlock; I consider that an advantage, because it forces us to think about what the code demands of the lock and surfaces problems early, in the coding phase. The recursive mutex is undoubtedly easier to use, since a thread cannot lock itself to death with it, which I suspect is why Java and Windows provide recursive locks by default. (The intrinsic lock of the Java language is reentrant, its concurrency library provides ReentrantLock, and the Windows CRITICAL_SECTION is also reentrant. None of them seems to provide a lightweight non-recursive mutex.)

2) condition variable (cond)
A mechanism for synchronizing with global variables shared between threads.

int pthread_cond_init (pthread_cond_t *cond, pthread_condattr_t *cond_attr);
int pthread_cond_wait (pthread_cond_t *cond, pthread_mutex_t *mutex);
int pthread_cond_timedwait (pthread_cond_t *cond, pthread_mutex_t *mutex, const struct timespec *abstime);
int pthread_cond_destroy (pthread_cond_t *cond);
int pthread_cond_signal (pthread_cond_t *cond);
int pthread_cond_broadcast (pthread_cond_t *cond); /* unblocks all waiting threads */


(1) Initialize: pthread_cond_init(), or statically pthread_cond_t cond = PTHREAD_COND_INITIALIZER; with the attribute set to NULL.
(2) Wait for the condition: pthread_cond_wait(), pthread_cond_timedwait().
wait() releases the lock and blocks until the condition variable is signalled.
timedwait() additionally sets a wait time; if still not signalled when it expires, it returns ETIMEDOUT (the mutex guarantees that the condition is re-checked by one thread at a time).
(3) Activate the condition variable: pthread_cond_signal() (wakes one waiter); pthread_cond_broadcast() (activates all waiting threads).
(4) Destroy the condition variable: pthread_cond_destroy(); no thread may still be waiting on it, otherwise EBUSY is returned.

Their prototypes are:

int pthread_cond_wait (pthread_cond_t *cond, pthread_mutex_t *mutex);
int pthread_cond_timedwait (pthread_cond_t *cond, pthread_mutex_t *mutex, const struct timespec *abstime);

These two functions must be used within the locked area of the mutex.

When pthread_cond_signal() is called to release one thread blocked on the condition, the call does nothing if no thread is currently blocked on the condition variable. Windows behaves differently: if SetEvent() triggers an auto-reset event while no thread is blocked on it, the event remains signalled, so a later wait on it still succeeds.

The producer-consumer problem under Linux (using a mutex and condition variables):


#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <pthread.h>

#define BUFFER_SIZE 16

struct prodcons
{
    int buffer[BUFFER_SIZE];
    pthread_mutex_t lock;    /* mutex ensuring exclusive access to buffer */
    int readpos, writepos;   /* positions for reading and writing */
    pthread_cond_t notempty; /* signalled when buffer is not empty */
    pthread_cond_t notfull;  /* signalled when buffer is not full */
};

/* Initialize a buffer */
void init (struct prodcons *b)
{
    pthread_mutex_init (&b->lock, NULL);
    pthread_cond_init (&b->notempty, NULL);
    pthread_cond_init (&b->notfull, NULL);
    b->readpos = 0;
    b->writepos = 0;
}

/* Store an integer in the buffer */
void put (struct prodcons *b, int data)
{
    pthread_mutex_lock (&b->lock);
    /* Wait until the buffer is not full */
    while ((b->writepos + 1) % BUFFER_SIZE == b->readpos)
    {
        printf ("wait for not full\n");
        pthread_cond_wait (&b->notfull, &b->lock);
    }
    b->buffer[b->writepos] = data;
    b->writepos++;
    b->writepos %= BUFFER_SIZE;
    pthread_cond_signal (&b->notempty); /* signal that buffer is not empty */
    pthread_mutex_unlock (&b->lock);
}

/* Read and remove an integer from the buffer */
int get (struct prodcons *b)
{
    int data;
    pthread_mutex_lock (&b->lock);
    /* Wait until the buffer is not empty */
    while (b->writepos == b->readpos)
    {
        printf ("wait for not empty\n");
        pthread_cond_wait (&b->notempty, &b->lock);
    }
    data = b->buffer[b->readpos];
    b->readpos++;
    b->readpos %= BUFFER_SIZE;
    pthread_cond_signal (&b->notfull); /* signal that buffer is not full */
    pthread_mutex_unlock (&b->lock);
    return data;
}

#define OVER (-1)

struct prodcons buffer;

void *producer (void *data)
{
    int n;
    for (n = 0; n < 50; ++n)
    {
        printf ("put --> %d\n", n);
        put (&buffer, n);
    }
    put (&buffer, OVER);
    printf ("producer stopped\n");
    return NULL;
}

void *consumer (void *data)
{
    while (1)
    {
        int d = get (&buffer);
        if (d == OVER) break;
        printf ("get --> %d\n", d);
    }
    printf ("consumer stopped\n");
    return NULL;
}

int main ()
{
    pthread_t tha, thb;
    void *retval;

    init (&buffer);
    pthread_create (&tha, NULL, producer, NULL);
    pthread_create (&thb, NULL, consumer, NULL);

    pthread_join (tha, &retval);
    pthread_join (thb, &retval);

    return 0;
}

3) Semaphores
Like processes, threads can also communicate through semaphores, albeit lightweight ones.

Semaphore function names begin with "sem_". Threads use four basic semaphore functions.


#include <semaphore.h>
int sem_init (sem_t *sem, int pshared, unsigned int value);

This initializes the semaphore pointed to by sem, sets its sharing option (Linux supports only 0, meaning the semaphore is local to the current process), and gives it the initial value value.

Two atomic operation functions; both take as parameter a pointer to a semaphore object initialized by a sem_init() call.


int sem_wait (sem_t *sem); /* decrements the semaphore by 1; if the value is 0, sem_wait() blocks until another thread makes it nonzero */
int sem_post (sem_t *sem); /* increments the semaphore value by 1 */

int sem_destroy (sem_t *sem);

This function cleans up the semaphore after we have finished using it, returning all the resources it holds.

Using semaphores to implement the producer-consumer problem:

Four semaphores are used here. Two of them, occupied and empty, handle synchronization between producer and consumer threads; pmut handles mutual exclusion among multiple producers, and cmut mutual exclusion among multiple consumers. empty is initialized to N (the number of free slots in the bounded buffer), occupied to 0, and pmut and cmut to 1.

Reference code:


#include <semaphore.h>

#define BSIZE 64

typedef struct
{
    char buf[BSIZE];
    sem_t occupied;
    sem_t empty;
    int nextin;
    int nextout;
    sem_t pmut;
    sem_t cmut;
} buffer_t;

buffer_t buffer;

void init (buffer_t *b)
{
    sem_init (&b->occupied, 0, 0);
    sem_init (&b->empty, 0, BSIZE);
    sem_init (&b->pmut, 0, 1);
    sem_init (&b->cmut, 0, 1);
    b->nextin = b->nextout = 0;
}

void producer (buffer_t *b, char item)
{
    sem_wait (&b->empty);
    sem_wait (&b->pmut);
    b->buf[b->nextin] = item;
    b->nextin++;
    b->nextin %= BSIZE;
    sem_post (&b->pmut);
    sem_post (&b->occupied);
}

char consumer (buffer_t *b)
{
    char item;
    sem_wait (&b->occupied);
    sem_wait (&b->cmut);
    item = b->buf[b->nextout];
    b->nextout++;
    b->nextout %= BSIZE;
    sem_post (&b->cmut);
    sem_post (&b->empty);
    return item;
}
