Lock Convoy
A lock convoy [1] is a performance degradation problem caused by the use of locks in a multithreaded, concurrent environment. It can arise when multiple threads of the same priority frequently contend for the same lock. Unlike a deadlock or a livelock, a lock convoy generally does not stop application logic: a system or application suffering from a lock convoy still makes forward progress. However, the frequent contention for the lock causes excessive thread context switching, which makes the system less efficient; and if there are threads of the same priority that do not take part in the contention, they receive relatively more processor time, so the system's scheduling becomes unfair.

A lock convoy describes a situation like the following. Suppose three threads A, B, and C repeatedly acquire the same lock within a single time slice, and ignore priority for the moment (assume the scheduler divides time evenly). A runs for about 1/3 of its time slice before it blocks on the lock and is switched out; B then runs and is likewise switched out after about 1/3 of a time slice; C executes next. The three threads are therefore always interrupted before consuming a full time slice (in a modern operating system, once a thread has been switched out it does not get back the remainder of that time slice), so threads switch frequently and the system's execution efficiency drops.
Besides making the scheduling granularity smaller, another problem with a lock convoy is that the scheduler's allocation of time becomes unfair. Suppose another thread X also runs at the same priority but does not participate in the lock contention. Then in each round of contention, thread X has the opportunity to use a complete time slice, while the competing threads each get only about 1/3 of a time slice per round. One can argue that this unfairness is self-inflicted by the threads that contend for the lock, but in terms of CPU time allocation it is still unfair to the threads that take part in the contention compared with those that do not. (Figure: the difference in execution time between thread X and threads A, B, and C.)
As can be seen from the description above, the conditions for a lock convoy are that the competing threads acquire the lock frequently, and that when one thread releases the lock its ownership passes directly into the hands of another waiting thread. In the operating system, threads of the same priority are dispatched in FIFO order, and threads competing for the same lock also acquire it, one after another, in FIFO order. These conditions can be met in modern operating systems, including Windows.
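The sketch below shows the convoy-prone pattern these conditions describe, using POSIX threads purely for illustration; the thread count, iteration count, and the shared counter are assumptions, not details from the original text. Several equal-priority threads acquire the same mutex many times around a very short critical section.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;            /* shared state guarded by the lock */

/* Each worker grabs the same mutex many times around a very short critical
   section. With a strict FIFO hand-off, every release passes ownership to the
   next waiter and forces a context switch, so no thread runs out its quantum. */
static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; ++i) {
        pthread_mutex_lock(&lock);
        ++shared_counter;                  /* very short critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[3];                        /* the A, B, C threads from the text */
    for (int i = 0; i < 3; ++i)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 3; ++i)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", shared_counter);
    return 0;
}
```

Whether a convoy actually forms depends on the lock's hand-off policy; the strict FIFO hand-off of ownership described above is what forces a context switch on every release.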
A reasonable way to mitigate a lock convoy is to have each thread first try to acquire the lock without blocking, and only block on the lock if the attempt fails.
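One way to express this try-then-block acquisition is sketched below with pthread_mutex_trylock. The helper name convoy_friendly_lock, the number of attempts, and the use of sched_yield between attempts are illustrative assumptions, not part of the original text.

```c
#include <pthread.h>
#include <sched.h>

/* Try to take the lock a few times without sleeping; only park the thread on
   the mutex if every attempt fails. */
static void convoy_friendly_lock(pthread_mutex_t *m)
{
    for (int attempt = 0; attempt < 4; ++attempt) {
        if (pthread_mutex_trylock(m) == 0)
            return;                        /* acquired without blocking */
        sched_yield();                     /* let the current owner finish */
    }
    pthread_mutex_lock(m);                 /* fall back to a blocking wait */
}
```

A thread that wins the trylock avoids being parked, so ownership tends to stay with whichever thread is currently running; this breaks the strict FIFO hand-off that feeds the convoy. In the convoy-prone loop sketched earlier, the plain pthread_mutex_lock call would be replaced by this helper.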
Priority Inversion

A high-priority task waits for a low-priority task to release a resource, while the low-priority task is in turn held off by a medium-priority task; this situation is called priority inversion. (Note: the high-priority task and the medium-priority task share no resources, yet their order of execution is inverted, which is what makes it priority inversion; a high-priority task that is merely blocked waiting for a low-priority task to release a resource is not, by itself, priority inversion.) The typical scenario: a low-priority task is running and holds a resource; a high-priority task preempts it and then blocks, because the resource it needs is still held by the low-priority task; the low-priority task resumes but is then preempted by a medium-priority task. The high-priority and medium-priority tasks share no resources, yet the medium-priority task now runs ahead of the high-priority one, and priority inversion has occurred.
Two classic ways to prevent priority inversion:
1. Priority inheritance: the task that holds the resource temporarily inherits the highest priority among the tasks currently blocked on it, and reverts to its original priority when it exits the critical section. In the example above, when the high-priority task waits for the low-priority task to release the resource, the low-priority task's priority is raised to that of the high-priority task and is restored to its initial value when it leaves the critical section. (A sketch of both protocols follows this list.)
2. Priority ceiling: the priority of any task that acquires (occupies) the resource is raised to the highest priority among all tasks that might access that resource. In the example above, if the low-priority task is raised to this ceiling while it holds the resource, the medium-priority task cannot be scheduled ahead of it, and priority inversion does not occur.
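Both protocols are exposed as mutex attributes in POSIX threads; the sketch below shows how they might be requested. The ceiling value of 50 is an assumed example, and the availability of PTHREAD_PRIO_INHERIT and PTHREAD_PRIO_PROTECT depends on the platform and scheduling policy.

```c
#include <pthread.h>

pthread_mutex_t inherit_mutex;   /* protected by priority inheritance */
pthread_mutex_t ceiling_mutex;   /* protected by a priority ceiling */

int init_protocol_mutexes(void)
{
    pthread_mutexattr_t attr;

    /* Priority inheritance: while a thread holds the mutex it runs at the
       priority of the highest-priority thread blocked on the mutex. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (pthread_mutex_init(&inherit_mutex, &attr) != 0)
        return -1;
    pthread_mutexattr_destroy(&attr);

    /* Priority ceiling: the owner is immediately raised to a fixed ceiling,
       chosen as the highest priority of any task that may take the lock.
       The value 50 is an assumed example, not a prescribed setting. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, 50);
    if (pthread_mutex_init(&ceiling_mutex, &attr) != 0)
        return -1;
    pthread_mutexattr_destroy(&attr);

    return 0;
}
```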
[1] Reference: http://blog.csdn.net/panaimin/article/details/5981766