Deadlock and Livelock
Deadlock
A deadlock occurs when two or more processes wait for each other indefinitely because they are competing for resources during execution. Without outside intervention none of them can proceed; the system is then said to be in a deadlock state, or simply to have a deadlock, and the processes that wait for each other forever are called deadlocked processes. Because resource use is mutually exclusive, a process that has requested a resource may never be granted what it needs and, without external help, cannot continue to run; this special phenomenon is a deadlock.
Although a deadlock can arise while processes are running, it can occur only when all four of the following conditions hold (a concrete sketch follows the list).
1) Mutual exclusion: a resource is occupied by only one process at a time. If another process requests the resource, the requester must wait until the occupying process finishes with it and releases it.
2) Hold and wait: a process already holds at least one resource and requests a new resource that is held by another process; the requesting process is blocked, but it does not release the resources it already holds.
3) No preemption: a resource that has been allocated to a process cannot be taken away from it; the process releases the resource only after it has finished using it.
4) Circular wait: when a deadlock occurs, there must be a circular chain of processes and resources, that is, a set of processes {P0, P1, P2, ..., Pn} in which P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., and Pn is waiting for a resource held by P0.
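As a concrete illustration, here is a minimal Python sketch (the thread and lock names are invented for this example) in which two threads acquire two locks in opposite orders; when their executions interleave badly, all four conditions hold at once and both threads block forever:

import threading

lock_r1 = threading.Lock()   # resource R1
lock_r2 = threading.Lock()   # resource R2

def worker_a():
    with lock_r1:        # holds R1 ...
        with lock_r2:    # ... while waiting for R2 (hold and wait)
            print("A used R1 and R2")

def worker_b():
    with lock_r2:        # holds R2 ...
        with lock_r1:    # ... while waiting for R1 (circular wait with A)
            print("B used R2 and R1")

# Locks are mutually exclusive and cannot be preempted, so if A gets R1
# and B gets R2 at the same time, neither thread can ever continue.
threading.Thread(target=worker_a).start()
threading.Thread(target=worker_b).start()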
Understanding the causes of deadlock, and in particular its four necessary conditions, makes it possible to avoid, prevent, and break deadlocks as far as possible. In system design and process scheduling, attention should therefore be paid to keeping these four conditions from holding at the same time and to choosing reasonable resource-allocation algorithms so that processes cannot occupy system resources permanently. It is also necessary to prevent a process from holding resources while it waits for others. While the system runs, it can dynamically check each resource request a process makes and decide from the result whether to grant it: if granting the request could put the system into a deadlock, the request is refused; otherwise it is granted. Resource allocation should therefore be planned carefully.
Ordered Resource Allocation Method
With this method, all resources in the system are numbered according to a fixed rule (for example, the printer is 1, the tape drive is 2, the disk is 3, and so on), and requests must be made in ascending order of these numbers. The system requires a process to follow two rules when applying for resources:
1. All resources of the same type that a process needs must be requested at one time;
2. Resources of different types must be requested in the order of their numbers. For example, if process PA uses resources in the order R1, R2 while process PB uses them in the order R2, R1, dynamic allocation could form a circular wait and cause a deadlock.
With the ordered resource allocation method, R1 is numbered 1 and R2 is numbered 2, so both processes must request in ascending order:
PA: request order: R1, R2
PB: request order: R1, R2
In this way the circular wait condition is broken and deadlock is avoided, as the sketch below illustrates.
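A minimal Python sketch of the same idea (the resource table and helper names are assumptions made for this illustration): every process sorts the locks it needs by their global number before acquiring them, so a circular wait can never form:

import threading

# Every resource gets a fixed global number.
RESOURCES = {1: threading.Lock(),   # R1, e.g. the printer
             2: threading.Lock()}   # R2, e.g. the tape drive

def acquire_in_order(numbers):
    # Always lock in ascending resource number, whatever order the caller names.
    for n in sorted(numbers):
        RESOURCES[n].acquire()

def release_all(numbers):
    for n in sorted(numbers, reverse=True):
        RESOURCES[n].release()

def pa():
    acquire_in_order([1, 2])   # PA naturally needs R1 then R2
    try:
        print("PA is using R1 and R2")
    finally:
        release_all([1, 2])

def pb():
    acquire_in_order([2, 1])   # PB wants R2 first, but still locks R1 before R2
    try:
        print("PB is using R2 and R1")
    finally:
        release_all([2, 1])

threading.Thread(target=pa).start()
threading.Thread(target=pb).start()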
Banker's Algorithm
The most representative deadlock-avoidance algorithm is the banker's algorithm, proposed by E. W. Dijkstra in 1968:
The algorithm requires every process to declare in advance the maximum amount of each resource it may need. When a process makes a request, the system checks whether its currently available resources can satisfy the request and still leave the system in a safe state, that is, a state in which there is some order in which every process can obtain its declared maximum, finish its computation, and release the resources it holds. Only then is the request granted, which guarantees that all processes in the system can complete, so deadlock is avoided.
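The heart of the algorithm is this safety check. The sketch below is a simplified Python version for a single resource type with made-up example numbers; it shows only the check, not a full allocator:

def is_safe(available, allocation, maximum):
    # True if every process can still finish in some order.
    need = [m - a for m, a in zip(maximum, allocation)]
    finished = [False] * len(allocation)
    work = available
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and need[i] <= work:
                work += allocation[i]   # process i finishes and releases
                finished[i] = True
                progress = True
    return all(finished)

def request(pid, amount, available, allocation, maximum):
    # Grant the request only if the resulting state is safe.
    if amount > maximum[pid] - allocation[pid] or amount > available:
        return False
    new_allocation = allocation.copy()
    new_allocation[pid] += amount
    return is_safe(available - amount, new_allocation, maximum)

# Example: 3 processes sharing 10 units of one resource.
allocation = [3, 2, 2]          # currently held
maximum    = [7, 4, 9]          # declared maximum demand
available  = 10 - sum(allocation)
print(request(0, 1, available, allocation, maximum))   # safe -> True
print(request(2, 3, available, allocation, maximum))   # unsafe -> False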
Livelock
Transaction 1 could use the resource, but it politely lets other transactions use it first; transaction 2 could also use the resource, but it likewise lets others go first. The two keep deferring to each other, and neither ever gets to use the resource.
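The following Python sketch (the worker names and the shared intent flags are invented for this illustration) shows the polite back-off pattern: both workers stay busy retrying, but as long as each sees that the other still wants the resource, neither makes progress, so the program loops forever:

import threading
import time

resource = threading.Lock()
wants = {"worker1": True, "worker2": True}   # each side signals its intent

def polite_worker(me, other):
    while True:
        if not wants[other]:          # only proceed once the other side
            with resource:            # no longer wants the resource
                print(me, "finally used the resource")
            wants[me] = False
            return
        # Both sides keep politely backing off: they are alive and running,
        # but neither ever acquires the resource (livelock).
        time.sleep(0.01)

threading.Thread(target=polite_worker, args=("worker1", "worker2")).start()
threading.Thread(target=polite_worker, args=("worker2", "worker1")).start()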
So-called starvation: suppose transaction T1 holds a lock on data item R and transaction T2 requests a lock on R; then T3 also requests a lock on R. When T1 releases its lock on R, the system grants T3's request first; then T4 requests a lock on R, and when T3 releases its lock the system grants T4's request ... T2 may end up waiting forever, which is starvation.
A livelocked or starved transaction still has a chance to obtain the resource eventually, whereas a deadlock can never resolve itself.
A simple way to avoid livelock is to adopt a first-come, first-served policy: when several transactions request a lock on the same data object, the locking subsystem queues them in the order in which their requests arrive, and whenever the lock on the object is released, it is granted to the first transaction in the queue.
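A minimal Python sketch of this first-come, first-served idea (the FairLock class and its internals are assumptions made for this illustration, not a standard library API): lock requests are queued and served strictly in arrival order, so no waiter can be overtaken indefinitely:

import threading
from collections import deque

class FairLock:
    # Grants the lock to waiters strictly in the order they asked for it.
    def __init__(self):
        self._mutex = threading.Lock()
        self._queue = deque()          # waiting transactions, FIFO
        self._locked = False

    def acquire(self):
        ticket = threading.Event()
        with self._mutex:
            if not self._locked and not self._queue:
                self._locked = True    # lock is free and nobody is waiting
                return
            self._queue.append(ticket) # otherwise join the end of the queue
        ticket.wait()                  # block until it is this caller's turn

    def release(self):
        with self._mutex:
            if self._queue:
                self._queue.popleft().set()   # wake the earliest waiter
            else:
                self._locked = False

# Usage: every transaction that wants to lock data object R goes through
# the same FairLock, so the earliest requester is always served first.
lock_on_r = FairLock()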