Linux Threads: Synchronization and Mutex


First, let's take a look at what synchronization is:

The so-called synchronous call is a function call that does not return until its result is available. By this definition, the vast majority of functions are synchronous calls (sin, isdigit, and so on). But when we speak of synchronous versus asynchronous, we usually mean tasks that require other components to cooperate, or that take a noticeable amount of time to complete. For example, the Windows API function SendMessage sends a message to a window and does not return until the other side has finished processing that message; when processing completes, the function hands the LRESULT value produced by the message handler back to the caller.

In other words, we are solving a single problem with multiple execution streams running at the same time: the task is split up, computed in parallel, and the partial results are combined into one final answer, much like evaluating a tree from its leaves up to its root. The complication is that these threads or processes inevitably access the same data. If that access is not atomic, a resource can be snatched away in the middle of an operation and the results become inconsistent, or the threads can deadlock. We will come back to these problems later.


Put simply, synchronization means having multiple threads cooperate to accomplish a task in an agreed order; it is a matter of how threads are created and how their execution is controlled.


Then we look at a piece of code:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

#define NLOOP 5000

static int g_count = 0;

void *read_write_mem(void *_val)
{
    int val = 0;
    int i = 0;
    for (; i < NLOOP; ++i) {
        val = g_count;
        printf("pthread id is: %lx, count is: %d\n",
               (unsigned long)pthread_self(), g_count);
        g_count = val + 1;
    }
    return NULL;
}

int main()
{
    pthread_t tid1;
    pthread_t tid2;
    pthread_create(&tid1, NULL, read_write_mem, NULL);
    pthread_create(&tid2, NULL, read_write_mem, NULL);
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);
    printf("count final val is: %d\n", g_count);
    return 0;
}

This program creates two threads that work on the same counter, each incrementing it 5000 times. The first thing to note:

If we wrote ++g_count directly instead of splitting the increment into

    val = g_count;
    g_count = val + 1;

the window between reading the value into a register and writing it back would be so small that the scheduler would rarely preempt a thread in between, and the lost-update error caused by preemption would be much harder to observe. Splitting the increment (and printing in the middle) widens that window on purpose.
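To see how an update gets lost, consider one possible interleaving (illustrative only; the actual schedule and values vary from run to run):

    Thread 1: val = g_count;       /* reads, say, 100 into val */
    ---- scheduler switches to Thread 2 ----
    Thread 2: val = g_count;       /* also reads 100 */
    Thread 2: g_count = val + 1;   /* g_count becomes 101 */
    ---- scheduler switches back ----
    Thread 1: g_count = val + 1;   /* g_count becomes 101 again; one increment is lost */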

Then the result is this:

[Screenshot of the program output: interleaved per-thread prints, with a final count that is not 10000.]

The final result is not 10000. To solve this problem of non-atomic access to a shared resource, we need to understand the concept of thread mutual exclusion.


What is mutual exclusion?

Literally, mutual exclusion means that two threads exclude each other. More precisely, it means that access to a critical resource inside a critical section is exclusive: while one thread holds the lock and is operating on the shared resource inside the locked code segment (the critical section), no other thread is allowed to enter and preempt that access. This is the principle behind a lock.


The mutex-related pthread functions:

#include <pthread.h>

int pthread_mutex_destroy(pthread_mutex_t *mutex);
int pthread_mutex_init(pthread_mutex_t *restrict mutex,
                       const pthread_mutexattr_t *restrict attr);
/* attr is usually left as the default by passing NULL */

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

PTHREAD_MUTEX_INITIALIZER initializes a statically allocated mutex; it is equivalent to creating the lock with pthread_mutex_init using default attributes.
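As a minimal sketch (not from the original article), a dynamically initialized mutex might be set up and torn down like this:

#include <pthread.h>

pthread_mutex_t lock;   /* a static mutex could instead use PTHREAD_MUTEX_INITIALIZER */

void setup(void)
{
    /* NULL attr means default mutex attributes; error checks omitted for brevity */
    pthread_mutex_init(&lock, NULL);
}

void teardown(void)
{
    /* destroy only after every thread is done with the mutex and it is unlocked */
    pthread_mutex_destroy(&lock);
}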

The functions for locking and unlocking:

#include <pthread.h>

int pthread_mutex_lock(pthread_mutex_t *mutex);      /* blocking */
int pthread_mutex_trylock(pthread_mutex_t *mutex);   /* non-blocking */
int pthread_mutex_unlock(pthread_mutex_t *mutex);
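For comparison, here is a small sketch (not part of the original code) of the non-blocking variant: pthread_mutex_trylock returns 0 if it acquired the lock and EBUSY if the mutex is already held.

#include <pthread.h>
#include <stdio.h>

/* Sketch only: try to enter the critical section without blocking. */
void try_update(pthread_mutex_t *m, int *shared)
{
    if (pthread_mutex_trylock(m) == 0) {
        ++*shared;                      /* we hold the lock: touch the shared data */
        pthread_mutex_unlock(m);
    } else {
        printf("mutex busy, doing something else\n");   /* EBUSY: lock not taken */
    }
}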

Now let's add locking to the previous code and see what effect it has:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

#define NLOOP 5000

static int g_count = 0;
pthread_mutex_t mutex_lock = PTHREAD_MUTEX_INITIALIZER;

void *read_write_mem(void *_val)
{
    int val = 0;
    int i = 0;
    for (; i < NLOOP; ++i) {
        pthread_mutex_lock(&mutex_lock);
        val = g_count;
        printf("pthread id is: %lx, count is: %d\n",
               (unsigned long)pthread_self(), g_count);
        g_count = val + 1;
        pthread_mutex_unlock(&mutex_lock);
    }
    return NULL;
}

int main()
{
    pthread_t tid1;
    pthread_t tid2;
    pthread_create(&tid1, NULL, read_write_mem, NULL);
    pthread_create(&tid2, NULL, read_write_mem, NULL);
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);
    printf("count final val is: %d\n", g_count);
    return 0;
}


Operation Result:

[Screenshot of the program output: the final count is 10000.]

This time the result is 10000. The reads and writes of the shared resource now happen under the lock, which demonstrates mutual exclusion: while one thread operates on the resource, it has exclusive access.

Now that we know how to use a mutex, let's look at how a mutex is implemented:

How do the two basic operations of a mutex, lock and unlock, work? Suppose the mutex variable has the value 1 when the mutex is free, so a thread that calls lock can obtain it, and the value 0 when the mutex has already been taken by some thread, so another thread that calls lock must suspend and wait. The pseudocode for lock and unlock is then as follows:

[Screenshot: pseudocode for the naive lock and unlock operations.]
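The screenshot itself is not reproduced here; a C-style sketch of the naive (and deliberately non-atomic) pseudocode it described would look roughly like this:

/* Naive pseudocode: the read-test-write sequence below is NOT atomic. */
void lock(int *mutex)
{
    while (*mutex <= 0)
        ;                      /* suspend on the mutex's wait queue, then retry */
    *mutex = 0;                /* mark the mutex as taken */
}

void unlock(int *mutex)
{
    *mutex = 1;                /* mark the mutex as free */
    /* wake up the thread(s) waiting on the mutex */
}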

The step in unlock that wakes the waiting threads can be implemented in different ways: it may wake only one waiting thread, or it may wake all threads waiting on the mutex and let them compete for it, with the losers suspending and waiting again.

Careful readers will have spotted the problem: reading, testing, and modifying the mutex variable is not an atomic operation. If two threads call lock at the same time while the mutex is 1, both can see that mutex > 0 holds; one of them then sets mutex = 0, but the other, unaware of this, also sets mutex = 0, and both threads believe they have acquired the lock.

To make the lock operation atomic, most architectures provide a swap or exchange instruction that exchanges the contents of a register with a memory location. Because it is a single instruction, atomicity is guaranteed: even on a multiprocessor platform, the bus cycles that access memory are serialized, so while one processor executes its exchange instruction, an exchange instruction on another processor has to wait for the bus. Now let's rewrite the pseudocode for lock and unlock, using x86's XCHG instruction as an example:

[Screenshot: pseudocode for lock and unlock based on the XCHG instruction.]
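Again, the screenshot is not reproduced here. An equivalent sketch in C, using GCC's __atomic_exchange_n builtin in place of hand-written XCHG assembly (my substitution, not the article's code), might look like this:

/* Sketch: a lock built on an atomic exchange, the same idea as the x86 XCHG
 * instruction. A real mutex would put the thread to sleep instead of spinning. */
void lock(int *mutex)
{
    /* Atomically store 0 into *mutex and fetch the value it held before. */
    while (__atomic_exchange_n(mutex, 0, __ATOMIC_ACQUIRE) <= 0)
        ;                      /* it was already taken: spin (or suspend) and retry */
}

void unlock(int *mutex)
{
    __atomic_store_n(mutex, 1, __ATOMIC_RELEASE);   /* hand the token back */
    /* wake up the thread(s) waiting on the mutex */
}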

The real mechanism of a lock, then, is to use an atomic exchange to guarantee that the token value 1 can be held by one and only one thread at a time, so access never becomes disordered.


In general, if the same thread calls lock twice, then on the second call, because the lock is already held, the thread suspends and waits for another thread to release it. But the lock is held by the thread itself, which, being suspended, never gets a chance to release it, so it waits forever. This is called a deadlock. Another typical deadlock scenario: thread A holds lock 1 and thread B holds lock 2; thread A calls lock to try to acquire lock 2 and suspends, waiting for thread B to release it, while thread B calls lock to try to acquire lock 1 and suspends, waiting for thread A to release that one. Both threads remain suspended forever. It is not hard to imagine that with more threads and more locks involved, the deadlock problem becomes more complex and harder to diagnose.

When writing a program, try to avoid holding multiple locks at the same time. If it really is necessary, there is a useful principle: if every thread that needs several locks acquires them in the same order (most commonly, ordered by the addresses of the mutex variables), no deadlock will occur. For example, if a program uses lock 1, lock 2 and lock 3, and the addresses of their mutex variables satisfy lock 1 < lock 2 < lock 3, then any thread that needs two or three of these locks at the same time should acquire them in the order lock 1, lock 2, lock 3. If it is hard to establish one order for all the locks, prefer pthread_mutex_trylock over pthread_mutex_lock to avoid deadlock.
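As a minimal sketch of the address-ordering rule (lock_pair is a hypothetical helper, not something from the original article):

#include <pthread.h>

/* Always take the mutex with the lower address first, so every thread
 * acquires any pair of locks in the same global order and cannot deadlock. */
void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    if (a < b) {
        pthread_mutex_lock(a);
        pthread_mutex_lock(b);
    } else {
        pthread_mutex_lock(b);
        pthread_mutex_lock(a);
    }
}

void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}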


This article is from the "egg-left" blog, please be sure to keep this source http://memory73.blog.51cto.com/10530560/1769667

