A detailed tutorial on Linux multithreaded programming


This article is a detailed tutorial on Linux multithreaded programming. It also provides code showing how threads can communicate through semaphores, offered here for reference.

Thread classification

Threads can be divided into user-level threads and kernel-level threads according to who schedules them.

(1) User-level threads. User-level threads mainly address the cost of context switching; their scheduling algorithm and scheduling process are chosen entirely by the user program and need no specific kernel support at run time. The operating system typically provides a user-space thread library that offers thread creation, scheduling, and destruction, while the kernel still manages only processes. If one thread in a process makes a blocking system call, the whole process, including all of its other threads, is blocked. The main drawback of user-level threads is that scheduling several threads of one process cannot take advantage of multiple processors.

(2) Kernel-level threads. These threads allow threads in different processes to be scheduled under the same relative-priority scheduling policy, and can therefore exploit the concurrency of multiprocessors.

Most systems now combine user-level and kernel-level threads: a user-level thread corresponds to one or several kernel-level threads, giving a "one-to-one" or "many-to-many" model. This satisfies the needs of multiprocessors while keeping scheduling overhead to a minimum.

Linux implements threads in the kernel, which exposes the process-creation interface do_fork(). The kernel provides two system calls, clone() and fork(), and both eventually invoke the do_fork() kernel API with different parameters. Threads could not be implemented without kernel support for letting several processes (in fact, lightweight processes) share a data segment, so do_fork() accepts a number of flags, including CLONE_VM (share the address space), CLONE_FS (share file system information), CLONE_FILES (share the file descriptor table), CLONE_SIGHAND (share the signal handler table), and CLONE_PID (share the process ID, valid only for the in-kernel process, i.e. process 0). When the fork system call is used, the kernel's do_fork() uses no sharing flags and the new process gets a fully independent running environment; when a thread is created with pthread_create(), all of these flags are set and __clone() is finally invoked. The flags are passed through to do_fork() in the kernel, which therefore creates a "process" that shares the caller's running environment; only its stack is private, and that stack is passed in by __clone().

Linux threads exist in the kernel as lightweight processes with their own process table entries, while creation, synchronization, and deletion are performed in user space by the pthread library. The pthread library uses a management thread (__pthread_manager(), one per process and unique within it) to manage thread creation and termination, assign thread IDs, and send thread-related signals (such as cancellation); the main thread, i.e. the caller of pthread_create(), passes its request to the management thread through a pipe.

Description of the main functions
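To make the relationship between pthread_create() and the clone flags described above concrete, here is a minimal sketch (not from the original article) that calls the glibc clone() wrapper directly with roughly the sharing flags mentioned. The worker function, the stack size, and the exact flag combination are illustrative assumptions; the pthread library's real call differs in detail.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int worker(void *arg)
{
    /* runs in the same address space as the parent because of CLONE_VM */
    printf("child sees shared value: %d\n", *(int *)arg);
    return 0;
}

int main(void)
{
    int shared = 42;
    size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);      /* the child needs its own stack */
    if (stack == NULL)
        return 1;

    /* roughly the sharing flags a thread creation requests via __clone() */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;

    /* pass the top of the stack: it grows downwards on most architectures */
    pid_t pid = clone(worker, stack + stack_size, flags, &shared);
    if (pid == -1) {
        perror("clone");
        return 1;
    }
    waitpid(pid, NULL, 0);                 /* reap the lightweight process */
    free(stack);
    return 0;
}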
1. Thread creation and exit

pthread_create: the thread creation function.

int pthread_create(pthread_t *thread_id, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);

The first argument is a pointer to the thread identifier, the second sets the thread attributes, the third is the address of the function the thread will run, and the last is the argument passed to that function. Here our thread function takes no argument, so the last argument is set to a null pointer. The second argument is also set to a null pointer, which creates the thread with default attributes. pthread_create() returns 0 when the thread is created successfully; a nonzero return value means creation failed, and the common error codes are EAGAIN and EINVAL. The former indicates that a system limit on creating new threads was hit, for example too many threads; the latter indicates that the second argument is an invalid thread attribute value. After creation succeeds, the new thread runs the function given by the third argument with the fourth argument as its parameter, while the original thread continues with the next line of code.

pthread_join waits for a thread to finish. Its prototype is:

int pthread_join(pthread_t th, void **thread_return);

The first parameter is the identifier of the thread being waited for; the second is a user-defined pointer that can store the return value of that thread. pthread_join() is a blocking function: the calling thread waits until the target thread finishes, and when the function returns, the resources of the waited-for thread are reclaimed. A thread can only be joined by one other thread, and it must be in the joinable state (not detached).

pthread_exit: there are two ways for a thread to end. One is for the thread's start routine to return, which ends the thread with it; the other is to call pthread_exit(). Its prototype is:

void pthread_exit(void *retval);

Its only parameter is the thread's return code; as long as the second parameter of pthread_join(), thread_return, is not NULL, this value is stored there. Finally, a thread cannot be waited for by several threads at once; otherwise the first thread to receive the completion signal returns successfully, and the remaining callers of pthread_join() return the error code ESRCH.
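As a quick, self-contained illustration of pthread_create(), pthread_join(), and pthread_exit() working together (the worker function, its argument, and the return value 42 are made up for this sketch):

#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* start routine: receives the fourth argument of pthread_create() */
static void *worker(void *arg)
{
    printf("hello from thread %d\n", *(int *)arg);
    pthread_exit((void *)42);              /* same effect as: return (void *)42; */
}

int main(void)
{
    pthread_t tid;
    int id = 1;
    void *ret;

    int err = pthread_create(&tid, NULL, worker, &id);   /* NULL: default attributes */
    if (err != 0) {                        /* nonzero return code: EAGAIN, EINVAL, ... */
        fprintf(stderr, "pthread_create: %s\n", strerror(err));
        return 1;
    }
    pthread_join(tid, &ret);               /* block until worker terminates, then reclaim it */
    printf("worker returned %ld\n", (long)ret);
    return 0;
}

Build with the -pthread option, for example: gcc demo.c -pthread.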
2. Thread attributes

The second parameter of pthread_create() is the thread's attribute object. Setting it to NULL selects the default attributes; through it, several attributes of a thread can be changed. These attributes mainly include the binding attribute, the detach attribute, the stack address, the stack size, and the priority. The system defaults are unbound, non-detached, a default 1 MB stack, and the same priority as the parent process. The basic concepts of the binding attribute and the detach attribute are explained first.

Binding attribute: Linux uses a "one-to-one" threading mechanism, in which one user thread corresponds to one kernel thread. The binding attribute fixes a user thread to a particular kernel thread; because CPU time slices are scheduled in terms of kernel threads (that is, lightweight processes), a bound thread is guaranteed to always have a kernel thread available when it needs one. The opposite, unbound, attribute means that the relationship between user thread and kernel thread is not fixed but is assigned under the system's control.

Detach attribute: the detach attribute determines how a thread terminates itself. In the non-detached case, when a thread ends, the system resources it occupies are not released; it has not really terminated. The created thread can release the resources it occupies only when pthread_join() returns. With the detach attribute, the system resources a thread occupies are released immediately when it ends. One point to note: if you set a thread's detach attribute and the thread runs very quickly, it may terminate before pthread_create() returns to its caller, and after it terminates its thread ID and system resources may already have been handed to another thread, so the caller of pthread_create() can end up with the wrong thread ID.

Setting the binding attribute (contention scope), shown together with the detach attribute in the sketch after this list:

int pthread_attr_init(pthread_attr_t *attr);
int pthread_attr_setscope(pthread_attr_t *attr, int scope);
int pthread_attr_getscope(const pthread_attr_t *attr, int *scope);

scope: PTHREAD_SCOPE_SYSTEM means bound, and the thread competes with all threads in the system; PTHREAD_SCOPE_PROCESS means unbound, and the thread competes only with the other threads in its process.

Setting the detach attribute:

int pthread_attr_setdetachstate(pthread_attr_t *attr, int detachstate);
int pthread_attr_getdetachstate(const pthread_attr_t *attr, int *detachstate);

detachstate: PTHREAD_CREATE_DETACHED means detached; PTHREAD_CREATE_JOINABLE means non-detached (joinable).

Setting the scheduling policy:

int pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy);
int pthread_attr_getschedpolicy(const pthread_attr_t *attr, int *policy);

policy: SCHED_FIFO (first in, first out), SCHED_RR (round robin), or SCHED_OTHER (the implementation-defined default).

Setting the priority:

int pthread_attr_setschedparam(pthread_attr_t *attr, const struct sched_param *param);
int pthread_attr_getschedparam(const pthread_attr_t *attr, struct sched_param *param);
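A small sketch of the attribute calls in use, creating a detached, system-scope (bound) thread; the thread body and the sleep at the end are only there to keep the example self-contained and are not part of the API being described:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *task(void *arg)
{
    (void)arg;
    puts("detached thread running");
    return NULL;                   /* a detached thread's resources are released on exit */
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    /* detached: the thread must not (and need not) be joined */
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    /* bound / system scope: compete for the CPU with all threads in the system */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    pthread_create(&tid, &attr, task, NULL);
    pthread_attr_destroy(&attr);   /* the attribute object may be destroyed after use */

    sleep(1);                      /* crude: give the detached thread time to run before main exits */
    return 0;
}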
3. Thread access control

(1) Mutex: a mutex implements synchronization between threads through a locking mechanism; at any moment only one thread is allowed to execute a critical section of code.

int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *mutexattr);
int pthread_mutex_lock(pthread_mutex_t *mutex);
int pthread_mutex_trylock(pthread_mutex_t *mutex);
int pthread_mutex_unlock(pthread_mutex_t *mutex);
int pthread_mutex_destroy(pthread_mutex_t *mutex);

(1) First initialize the lock with pthread_mutex_init(), or statically with pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER.
(2) Lock it: pthread_mutex_lock() blocks waiting for the lock, while pthread_mutex_trylock() returns EBUSY immediately if the lock is already held.
(3) Unlock it: pthread_mutex_unlock() requires the mutex to be in the locked state and must be called by the thread that holds the lock.
(4) Destroy the lock: pthread_mutex_destroy() (the mutex must be unlocked at this point, otherwise EBUSY is returned).

Mutexes are divided into recursive and non-recursive ones. That is the POSIX terminology; another pair of names is reentrant and non-reentrant. As inter-thread synchronization tools the two kinds are equivalent; their only difference is that the same thread may lock a recursive mutex repeatedly, but must not lock a non-recursive mutex more than once. Preferring the non-recursive mutex is not about performance but about expressing design intent: the performance difference is actually small, the non-recursive kind being only slightly faster because it maintains one less counter. Locking a non-recursive mutex a second time from the same thread leads to a deadlock immediately, which is arguably an advantage, since it forces you to think about what the code demands from its locks and to find problems early, at the coding stage. Recursive mutexes are undoubtedly more convenient, because you need not worry about a thread deadlocking on itself, which is presumably why Java and Windows provide recursive locks by default. (The intrinsic lock of the Java language is reentrant, its concurrency library provides ReentrantLock, and the Windows CRITICAL_SECTION is also reentrant; none of them seem to offer a lightweight non-recursive mutex.) A small sketch of the recursive case follows the condition-variable subsection below.

(2) Condition variable (cond): a mechanism for synchronizing on the value of data shared between threads.

int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *cond_attr);
int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex, const struct timespec *abstime);
int pthread_cond_destroy(pthread_cond_t *cond);
int pthread_cond_signal(pthread_cond_t *cond);
int pthread_cond_broadcast(pthread_cond_t *cond);    /* unblock all waiting threads */

(1) Initialize: pthread_cond_init() with the attribute set to NULL, or statically with pthread_cond_t cond = PTHREAD_COND_INITIALIZER.
(2) Wait for the condition: pthread_cond_wait() or pthread_cond_timedwait(). wait() releases the lock and blocks until the condition variable is signalled; timedwait() additionally sets a waiting time and returns ETIMEDOUT if no signal arrives in time (the associated mutex serializes the threads around the wait).
(3) Signal the condition variable: pthread_cond_signal(), or pthread_cond_broadcast() to wake all waiting threads.
(4) Destroy the condition variable: pthread_cond_destroy(); no thread may still be waiting on it, otherwise EBUSY is returned.

The two wait functions,

int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex, const struct timespec *abstime);

must be used inside the region locked by the mutex.

Calling pthread_cond_signal() releases one thread blocked on the condition; if no thread is blocked on the condition variable, the call has no effect. On Windows, by contrast, if SetEvent() is called on an auto-reset event and no thread is waiting on it, the call still takes effect and the event remains in the signaled state.
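As promised above, a minimal sketch of the recursive case (the helper names are made up): a mutex created with type PTHREAD_MUTEX_RECURSIVE can be locked again by the thread that already holds it, whereas the same pattern with the default type would deadlock on the second lock.

#define _GNU_SOURCE                      /* for PTHREAD_MUTEX_RECURSIVE on older glibc */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t rmutex;

static void inner(void)
{
    pthread_mutex_lock(&rmutex);         /* second lock by the same thread: allowed here */
    puts("inner critical section");
    pthread_mutex_unlock(&rmutex);
}

static void outer(void)
{
    pthread_mutex_lock(&rmutex);
    inner();                             /* with a non-recursive mutex this would deadlock */
    pthread_mutex_unlock(&rmutex);
}

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&rmutex, &attr);
    pthread_mutexattr_destroy(&attr);

    outer();

    pthread_mutex_destroy(&rmutex);
    return 0;
}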
Linux producer-consumer problem (using a mutex and condition variables):

Code as follows:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <pthread.h>

#define BUFFER_SIZE 16                   /* value lost in the original text; 16 assumed */

struct prodcons
{
    int buffer[BUFFER_SIZE];
    pthread_mutex_t lock;                /* mutex ensuring exclusive access to buffer */
    int readpos, writepos;               /* positions for reading and writing */
    pthread_cond_t notempty;             /* signaled when the buffer is not empty */
    pthread_cond_t notfull;              /* signaled when the buffer is not full */
};

/* initialize a buffer */
void init(struct prodcons *b)
{
    pthread_mutex_init(&b->lock, NULL);
    pthread_cond_init(&b->notempty, NULL);
    pthread_cond_init(&b->notfull, NULL);
    b->readpos = 0;
    b->writepos = 0;
}

/* store an integer in the buffer */
void put(struct prodcons *b, int data)
{
    pthread_mutex_lock(&b->lock);
    /* wait until the buffer is not full */
    while ((b->writepos + 1) % BUFFER_SIZE == b->readpos)
    {
        printf("wait for not full\n");
        pthread_cond_wait(&b->notfull, &b->lock);
    }
    b->buffer[b->writepos] = data;
    b->writepos++;
    b->writepos %= BUFFER_SIZE;
    pthread_cond_signal(&b->notempty);   /* signal that the buffer is not empty */
    pthread_mutex_unlock(&b->lock);
}

/* read and remove an integer from the buffer */
int get(struct prodcons *b)
{
    int data;
    pthread_mutex_lock(&b->lock);
    /* wait until the buffer is not empty */
    while (b->writepos == b->readpos)
    {
        printf("wait for not empty\n");
        pthread_cond_wait(&b->notempty, &b->lock);
    }
    data = b->buffer[b->readpos];
    b->readpos++;
    b->readpos %= BUFFER_SIZE;
    pthread_cond_signal(&b->notfull);    /* signal that the buffer is not full */
    pthread_mutex_unlock(&b->lock);
    return data;
}

#define OVER (-1)

struct prodcons buffer;

void *producer(void *data)
{
    int n;
    for (n = 0; n < 50; ++n)
    {
        printf("put --> %d\n", n);
        put(&buffer, n);
    }
    put(&buffer, OVER);
    printf("producer stopped\n");
    return NULL;
}

void *consumer(void *data)
{
    while (1)
    {
        int d = get(&buffer);
        if (d == OVER)
            break;
        printf("get --> %d\n", d);
    }
    printf("consumer stopped\n");
    return NULL;
}

int main()
{
    pthread_t tha, thb;
    void *retval;

    init(&buffer);
    pthread_create(&tha, NULL, producer, NULL);
    pthread_create(&thb, NULL, consumer, NULL);

    pthread_join(tha, &retval);
    pthread_join(thb, &retval);

    return 0;
}

(3) Semaphores

As between processes, threads can also communicate through semaphores, although these are lightweight. The semaphore functions all have names beginning with "sem_". Threads use four basic semaphore functions.

Code as follows:

#include <semaphore.h>
int sem_init(sem_t *sem, int pshared, unsigned int value);

This initializes the semaphore pointed to by sem, sets its sharing option (Linux only supports 0 here, meaning a semaphore local to the current process), and gives it an initial value.

The two atomic operation functions both take as their parameter a pointer to a semaphore object initialized by sem_init().

Code as follows:

int sem_wait(sem_t *sem);    /* decrement the semaphore by 1; if its value is 0, sem_wait blocks until another thread makes it nonzero */
int sem_post(sem_t *sem);    /* increment the semaphore by 1 */

int sem_destroy(sem_t *sem);

This last function cleans up a semaphore once we are finished with it, returning all the resources it holds.
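Before the producer-consumer version below, a minimal sketch of the four calls above in isolation (the semaphore name done and the worker function are invented for this example): one semaphore with initial value 0 lets a worker thread signal completion to main.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t done;                       /* counts completed work items */

static void *worker(void *arg)
{
    (void)arg;
    puts("worker: finished");
    sem_post(&done);                     /* value 0 -> 1, wakes up main */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    sem_init(&done, 0, 0);               /* process-local (pshared = 0), initial value 0 */
    pthread_create(&tid, NULL, worker, NULL);
    sem_wait(&done);                     /* blocks until the worker posts */
    puts("main: worker signalled completion");
    pthread_join(tid, NULL);
    sem_destroy(&done);
    return 0;
}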
Producer-consumer implemented with semaphores:

Four semaphores are used here. Two of them, occupied and empty, solve the synchronization between the producer and consumer threads; pmut provides mutual exclusion among multiple producers, and cmut provides mutual exclusion among multiple consumers. empty is initialized to N (the number of free slots in the bounded buffer), occupied is initialized to 0, and pmut and cmut are initialized to 1.

Reference code:

Code as follows:

#include <semaphore.h>

#define BSIZE 16                         /* value lost in the original text; 16 assumed */

typedef struct
{
    char buf[BSIZE];
    sem_t occupied;
    sem_t empty;
    int nextin;
    int nextout;
    sem_t pmut;
    sem_t cmut;
} buffer_t;

buffer_t buffer;

void init(buffer_t *b)
{
    sem_init(&b->occupied, 0, 0);
    sem_init(&b->empty, 0, BSIZE);
    sem_init(&b->pmut, 0, 1);
    sem_init(&b->cmut, 0, 1);
    b->nextin = b->nextout = 0;
}

void producer(buffer_t *b, char item)
{
    sem_wait(&b->empty);                 /* wait for a free slot */
    sem_wait(&b->pmut);                  /* exclude other producers */
    b->buf[b->nextin] = item;
    b->nextin++;
    b->nextin %= BSIZE;
    sem_post(&b->pmut);
    sem_post(&b->occupied);              /* one more occupied slot */
}

char consumer(buffer_t *b)
{
    char item;
    sem_wait(&b->occupied);              /* wait for an occupied slot */
    sem_wait(&b->cmut);                  /* exclude other consumers */
    item = b->buf[b->nextout];
    b->nextout++;
    b->nextout %= BSIZE;
    sem_post(&b->cmut);
    sem_post(&b->empty);                 /* one more free slot */
    return item;
}
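The reference code above only defines the buffer and its operations. A possible driver, assuming the definitions above plus <pthread.h> and <stdio.h>, could look like the following sketch; the sentinel character and the thread functions are illustrative and not part of the original example.

#define END_MARK '#'                     /* assumed sentinel telling the consumer to stop */

void *producer_thread(void *arg)
{
    const char *msg = "hello";
    int i;
    (void)arg;
    for (i = 0; msg[i] != '\0'; i++)
        producer(&buffer, msg[i]);       /* blocks while the buffer is full */
    producer(&buffer, END_MARK);
    return NULL;
}

void *consumer_thread(void *arg)
{
    char c;
    (void)arg;
    while ((c = consumer(&buffer)) != END_MARK)   /* blocks while the buffer is empty */
        putchar(c);
    putchar('\n');
    return NULL;
}

int main(void)
{
    pthread_t tp, tc;
    init(&buffer);
    pthread_create(&tp, NULL, producer_thread, NULL);
    pthread_create(&tc, NULL, consumer_thread, NULL);
    pthread_join(tp, NULL);
    pthread_join(tc, NULL);
    return 0;
}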
