Multi-threaded deadlock generation and how to avoid deadlocks

Source: Internet
Author: User

I. Definition of deadlock

Multithreading and multiprocessing improve the utilization of system resources and the processing power of the system. However, concurrent execution also brings a new problem: deadlock. A deadlock is a standstill in which multiple threads, competing for resources, wait for each other, and none of them can make progress without outside intervention.

Below we have some examples to illustrate the deadlock phenomenon.

First, an example from everyday life: two people share a meal, but there is only one pair of chopsticks, so they take turns eating (a person needs both chopsticks to eat). At some point, person A picks up the left chopstick and person B picks up the right chopstick. Each now holds one resource while waiting for the other: A waits for B to finish eating and release the chopstick B holds, and B likewise waits for A. They are caught in a cycle, and neither can continue eating.
A similar situation exists in computer systems. Consider a system with only one printer and one input device. Process P1 holds the input device and requests the printer, but the printer is held by process P2, which in turn requests the input device held by P1 before it will release the printer. The two processes wait endlessly for each other and are locked in a deadlock.

II. Causes of deadlock

1) Competition for system resources. The number of non-preemptible resources in the system (such as tape drives or printers) may be insufficient to satisfy all running processes, so processes become stuck competing for them. Only competition for non-preemptible resources can produce deadlock; competition for preemptible resources cannot.

2) Illegal process advancement order. Deadlock can also occur when running processes request and release resources in an incorrect order. For example, if concurrent processes P1 and P2 hold resources R1 and R2 respectively, and P1 then requests R2 while P2 requests R1, both block because the resource each needs is held by the other.

Improper use of semaphores can also cause deadlock. Processes waiting for messages from each other can likewise be unable to make progress: for example, process A waits for a message from process B, while process B waits for a message from process A. Here A and B are not competing for the same resource, yet each waits on the other, producing a deadlock.

3) Necessary conditions for deadlock. A deadlock can arise only if all four of the following conditions hold; if any one of them fails, deadlock cannot occur.
    • Mutual exclusion: a process requires exclusive control of the resources allocated to it (such as a printer); that is, for a period of time a resource is held by only one process. Any other process that requests the resource must wait.
    • No preemption: resources a process has obtained cannot be forcibly taken away by other processes before it has finished using them; they can only be released voluntarily by the process that holds them.
    • Hold and wait: a process already holds at least one resource and requests a new one that is held by another process. The requesting process blocks, but keeps the resources it already holds.
    • Circular wait: there is a circular chain of processes waiting for resources, in which each process holds a resource requested by the next process in the chain. That is, there is a set of waiting processes {P0, P1, ..., Pn} in which Pi waits for a resource held by P(i+1) (i = 0, 1, ..., n-1), and Pn waits for a resource held by P0 (see Figure 2-15).

Intuitively, the circular wait condition looks the same as the definition of deadlock, but the wait cycle required by the deadlock definition is stricter: it requires that the resource Pi waits for can only be satisfied by P(i+1), while the circular wait condition has no such restriction. For example, suppose the system has two output devices: P0 holds one and Pk holds the other, where k does not belong to the set {0, 1, ..., n}.

Pn waits for an output device, which it could obtain from P0 or from Pk. So although Pn, P0, and some other processes form a waiting cycle, Pk is outside that cycle, and if Pk releases its output device the circular wait is broken (see Figure 2-16). Circular wait is therefore only a necessary condition for deadlock, not a sufficient one.


If the resource allocation graph contains a cycle, the system does not necessarily have a deadlock, because there may be more than one instance of each resource type. However, if the system has only one instance of each resource type, a cycle in the resource allocation graph becomes a sufficient and necessary condition for deadlock.
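The four conditions above can be reproduced deliberately in Java. The sketch below (a hypothetical `DeadlockDemo` class, not from the original article) forces two daemon threads into a guaranteed circular wait on two monitors, then uses the JVM's built-in `ThreadMXBean.findDeadlockedThreads()` to confirm the cycle:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockDemo {
    // Force a guaranteed circular wait: each thread holds one monitor,
    // waits until the other holds its monitor too, then blocks on the
    // other's monitor. Returns the ids of the deadlocked threads as
    // reported by the JVM, or null if none were found.
    public static long[] createDeadlock() throws InterruptedException {
        final Object lockA = new Object(), lockB = new Object();
        final CountDownLatch bothHoldFirstLock = new CountDownLatch(2);

        Runnable r1 = () -> {
            synchronized (lockA) {                      // hold and wait
                bothHoldFirstLock.countDown();
                try { bothHoldFirstLock.await(); } catch (InterruptedException ignored) {}
                synchronized (lockB) { }                // blocks forever: t2 holds lockB
            }
        };
        Runnable r2 = () -> {
            synchronized (lockB) {
                bothHoldFirstLock.countDown();
                try { bothHoldFirstLock.await(); } catch (InterruptedException ignored) {}
                synchronized (lockA) { }                // blocks forever: t1 holds lockA
            }
        };
        Thread t1 = new Thread(r1); t1.setDaemon(true); // daemons: the JVM can still exit
        Thread t2 = new Thread(r2); t2.setDaemon(true);
        t1.start(); t2.start();

        // Poll the JVM's built-in detector until it sees the cycle.
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (int i = 0; i < 100; i++) {
            long[] ids = mx.findDeadlockedThreads();
            if (ids != null) return ids;
            Thread.sleep(50);
        }
        return null;
    }
}
```

Monitors satisfy mutual exclusion and no preemption by construction; the latch guarantees hold-and-wait on both sides before either requests the second lock, which closes the circular wait.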

III. How to avoid deadlock

In some cases deadlocks can be avoided. Three techniques for avoiding them are:

    1. Lock ordering (threads acquire locks in a fixed global order)
    2. Lock timeout (a thread attempting to acquire a lock gives up after a time limit and releases the locks it holds)
    3. Deadlock detection

Lock Order

Deadlocks can easily occur when multiple threads need the same locks but acquire them in different orders.

If you can ensure that all threads always acquire locks in the same order, deadlock cannot occur. Consider the following example:

Thread 1:
  lock A
  lock B

Thread 2:
  wait for A
  lock C (when A locked)

Thread 3:
  wait for A
  wait for B
  wait for C

If a thread (such as thread 3) needs several locks, it must acquire them in the agreed order: it may only request a later lock after it has obtained all the earlier ones.

For example, threads 2 and 3 may attempt to acquire lock C only after first acquiring lock A (translator's note: acquiring lock A is a prerequisite for acquiring lock C). Because thread 1 already holds lock A, threads 2 and 3 must wait until it is released; only after successfully locking A may they attempt to lock B or C.

Acquiring locks in a fixed order is an effective deadlock prevention mechanism. However, it requires knowing in advance all the locks that may be used (translator's note: and ordering them appropriately), which is not always possible.
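A minimal Java sketch of lock ordering, using a hypothetical `Account` class and `transfer` method (illustrative names, not from the original article): no matter which direction a transfer runs, the two locks are always acquired in ascending account-id order, so no waiting cycle can form.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOrdering {
    // Hypothetical account type: each account has its own lock.
    static class Account {
        final int id;
        int balance;
        final ReentrantLock lock = new ReentrantLock();
        Account(int id, int balance) { this.id = id; this.balance = balance; }
    }

    static void transfer(Account from, Account to, int amount) {
        // Order the two locks by account id so that every thread
        // acquires them in the same global order.
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance += amount;
            } finally { second.lock.unlock(); }
        } finally { first.lock.unlock(); }
    }

    // Two threads transfer in opposite directions; with consistent
    // lock ordering both complete without deadlocking.
    public static int run() throws InterruptedException {
        Account a = new Account(1, 1000), b = new Account(2, 1000);
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10000; i++) transfer(a, b, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10000; i++) transfer(b, a, 1); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        return a.balance + b.balance; // total money is conserved
    }
}
```

Without the ordering step, `transfer(a, b, ...)` and `transfer(b, a, ...)` running concurrently would acquire the two locks in opposite orders, which is exactly the pattern that deadlocks.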

Lock time limit

Another way to avoid deadlocks is to add a timeout when trying to acquire a lock: the thread gives up the lock request if the attempt exceeds the time limit. If a thread does not successfully acquire all the locks it needs within a given time, it rolls back, releases all the locks it has acquired, and then waits a random amount of time before retrying. This random wait gives other threads a chance to acquire the same locks, and allows the application to keep running even though this attempt failed (translator's note: after the timeout the thread can do something else and later come back and repeat the locking logic).

Here is an example scenario in which two threads try to acquire the same two locks in different orders, then fall back and retry after a timeout:

Thread 1 locks A
Thread 2 locks B
Thread 1 attempts to lock B but is blocked
Thread 2 attempts to lock A but is blocked
Thread 1's lock attempt on B times out
Thread 1 backs up and releases A as well
Thread 1 waits randomly (e.g. 257 millis) before retrying
Thread 2's lock attempt on A times out
Thread 2 backs up and releases B as well
Thread 2 waits randomly before retrying

In the example above, thread 2 retries about 200 milliseconds before thread 1, so it acquires both locks first. Thread 1 then attempts to acquire lock A and waits. When thread 2 finishes, thread 1 can also acquire both locks successfully (unless thread 2 or some other thread acquires some of the locks before thread 1 gets both).

It is important to note that just because a lock attempt times out, we cannot conclude that a deadlock has occurred. It may simply be that the thread holding the lock (which causes other threads to block) takes a long time to complete its task.

In addition, if very many threads compete for the same resources at the same time, then even with a timeout and backoff mechanism, the threads may retry repeatedly yet never acquire their locks. This is unlikely with only two threads and a retry timeout between 0 and 500 milliseconds, but with 10 or 20 threads the situation is different, because the probability that two threads choose equal (or very close) retry delays is much higher.
(Translator's note: the timeout and retry mechanism tries to avoid simultaneous contention, but when there are many threads, the chance that two or more of them time out at the same or nearly the same moment is high; they then start retrying simultaneously, producing a new round of contention and new problems.)

This mechanism also has a problem: in Java you cannot set a timeout on a synchronized block. You need to create a custom lock or use the utilities in the java.util.concurrent package introduced in Java 5. Writing a custom lock class is not complex, but is beyond the scope of this article. A later installment of the Java concurrency series will cover custom locks.
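With java.util.concurrent, the timeout-and-backoff scheme can be sketched using `ReentrantLock.tryLock(timeout, unit)`. The class and method names below (other than the j.u.c API itself) are illustrative: two threads attempt the same two locks in opposite orders, and on timeout each releases everything, waits a random interval, and retries.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLocking {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();
    static final AtomicInteger completed = new AtomicInteger();

    // Try to take both locks; on timeout, release whatever was
    // acquired, back off for a random interval, and retry.
    static void acquireBoth(ReentrantLock first, ReentrantLock second) throws InterruptedException {
        while (true) {
            if (first.tryLock(10, TimeUnit.MILLISECONDS)) {
                try {
                    if (second.tryLock(10, TimeUnit.MILLISECONDS)) {
                        try {
                            completed.incrementAndGet(); // critical section
                            return;
                        } finally { second.unlock(); }
                    }
                } finally { first.unlock(); }
            }
            // Random wait gives the other thread a chance to finish.
            Thread.sleep(ThreadLocalRandom.current().nextInt(1, 10));
        }
    }

    public static int run() throws InterruptedException {
        Thread t1 = new Thread(() -> { try { acquireBoth(lockA, lockB); } catch (InterruptedException ignored) {} });
        Thread t2 = new Thread(() -> { try { acquireBoth(lockB, lockA); } catch (InterruptedException ignored) {} });
        t1.start(); t2.start();
        t1.join(); t2.join();
        return completed.get();
    }
}
```

Note that this does not eliminate contention; as the article points out, with many threads the random backoff intervals can collide and the retries can keep failing.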

Deadlock Detection

Deadlock detection is a better deadlock prevention mechanism, mainly for scenarios where lock ordering and lock timeouts are not feasible.

Whenever a thread acquires a lock, this is recorded in a data structure associated with the locks (a map, a graph, and so on). Likewise, whenever a thread requests a lock, the request is recorded in the same data structure.

When a lock request fails, the thread can traverse this lock graph to check whether a deadlock has occurred. For example, if thread A requests lock 7 but lock 7 is held by thread B, thread A can check whether thread B has requested any lock that thread A currently holds. If it has, a deadlock exists (for instance, thread A holds lock 1 and requests lock 7, while thread B holds lock 7 and requests lock 1).

Of course, deadlocks are usually more complicated than two threads holding each other's locks. Thread A may wait for thread B, thread B for thread C, thread C for thread D, and thread D for thread A. To detect this, thread A must progressively examine the locks requested along the chain: starting from the lock requested by thread B, it finds thread C, then thread D, and then discovers that thread D has requested a lock held by thread A itself. Only then does it know a deadlock has occurred.

Below is a graph of lock ownership and requests among four threads (A, B, C, and D). A data structure like this can be used to detect deadlocks.
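The wait-for graph described above can be sketched as a simple map from each thread to the thread it is currently waiting on; following the edges and checking for a revisited node detects the A → B → C → D → A cycle. The `WaitForGraph` class below is a hypothetical sketch of the idea, not a production detector:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class WaitForGraph {
    // waitsFor.get(t) = the thread that t is currently blocked on
    // (because that thread holds a lock t has requested).
    // A deadlock exists iff following these edges from some thread
    // eventually revisits a thread already on the path.
    static boolean hasCycle(Map<String, String> waitsFor, String start) {
        Set<String> seen = new HashSet<>();
        String current = start;
        while (current != null) {
            if (!seen.add(current)) return true; // revisited a thread: cycle found
            current = waitsFor.get(current);     // follow the wait-for edge
        }
        return false; // chain ended at a thread that is not waiting
    }
}
```

In a real detector the map would be updated on every lock acquisition and request, and each edge would carry the identity of the contested lock, but the cycle check itself stays this simple because each thread blocks on at most one lock at a time.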

So what do these threads do when a deadlock is detected?

One option is for the threads to release all their locks, roll back, and wait a random time before retrying. This is similar to the simple lock timeout, except the rollback happens because a deadlock was actually detected, not because a lock request timed out. There is still backoff and waiting, so if many threads compete for the same locks they may still deadlock repeatedly (editor's note: for the same reason as with timeouts, this does not fundamentally reduce contention).

A better option is to assign priorities to the threads so that only one (or a few) of them backs off, while the rest keep the locks they need and proceed as if no deadlock had occurred. If the priorities are fixed, the same threads will always lose; to avoid this, assign random priorities each time a deadlock occurs.

