Last time we talked about thread management; today we look at locks in multithreaded programming.
1. Different Locking Methods
Class Lock
The method is declared static synchronized, or a code block is wrapped in synchronized (Xxx.class). Either way, the lock is the Class object itself.
Object Lock
An instance method is declared synchronized, or a code block is wrapped in synchronized (this). The lock is the current instance.
Private Lock
A private field is declared inside the class, such as private final Object lock = new Object(), and the code that needs protection is wrapped in synchronized (lock). All three forms are sketched together below.
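For illustration, here is a minimal sketch of all three forms in one class (the Counter class and its fields are our own illustrative names, not from the original):

public class Counter {

    private static int total;
    private int value;

    // Class lock: the lock is the Counter.class object, shared by all instances.
    public static synchronized void incrementTotal() {
        total++;
    }

    // The equivalent class lock written as a block.
    public static void incrementTotalBlock() {
        synchronized (Counter.class) {
            total++;
        }
    }

    // Object lock: the lock is this particular instance.
    public synchronized void increment() {
        value++;
    }

    // Private lock: an internal monitor object that outside code cannot lock on.
    private final Object lock = new Object();

    public void incrementWithPrivateLock() {
        synchronized (lock) {
            value++;
        }
    }
}

The private lock is often preferred in library code precisely because no external caller can accidentally synchronize on it.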
Note: a class lock and an object lock do not compete with each other, so locking with one has no effect on the other. Likewise, a private lock and an object lock do not compete and do not affect each other.
2. When to Lock
Pessimistic Lock
When a section of logic is guarded by a pessimistic lock and several threads arrive concurrently, only one thread executes it; the other threads wait at the entrance until the lock is released.
Optimistic Lock
When a section of logic is guarded by an optimistic lock, concurrently arriving threads may all enter and execute. Only at the final update does each thread check whether the data was modified by another thread in the meantime (for example, by comparing a version read at the start against the current one); if not, it commits the update, otherwise it discards or retries the operation.
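One common way to realize the optimistic pattern in Java is a CAS (compare-and-set) retry loop; the class below is an illustrative sketch of ours, not code from the original:

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {

    private final AtomicInteger value = new AtomicInteger();

    public void increment() {
        while (true) {
            int current = value.get();          // read the current value (the "version")
            int updated = current + 1;          // compute the update without holding any lock
            if (value.compareAndSet(current, updated)) {
                return;                         // nobody modified it in between: commit
            }
            // another thread modified the value first: loop and retry
        }
    }
}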
Read-Write Lock
A read-write lock separates read access from write access: multiple readers can read at the same time, but a writer must be mutually exclusive with both readers and other writers, and writers are typically given priority over readers so they do not starve.
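A minimal sketch using java.util.concurrent.locks.ReentrantReadWriteLock (the Cache class is an illustrative example of ours):

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Cache {

    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int data;

    public int read() {
        rwLock.readLock().lock();
        try {
            return data;            // many readers may hold the read lock at once
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void write(int newData) {
        rwLock.writeLock().lock();
        try {
            data = newData;         // exclusive: no readers or other writers
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}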
3. Acquisition Order
Fair Lock
The thread that requested the lock first acquires it first; waiting threads are granted the lock in order.
Non-fair Lock
Regardless of which thread requested the lock first, the lock may be granted to any thread; acquisition order is not guaranteed.
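In Java, ReentrantLock makes this policy explicit through its constructor; a short illustrative sketch:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {

    // Fair: waiting threads acquire the lock roughly in arrival order.
    static final ReentrantLock FAIR = new ReentrantLock(true);

    // Non-fair (the default): a newly arriving thread may "barge" ahead of waiters.
    static final ReentrantLock UNFAIR = new ReentrantLock();

    static void doWork(ReentrantLock lock) {
        lock.lock();
        try {
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }
}

Fair locks reduce starvation but usually lower throughput, which is why the non-fair policy is the default.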
4. Behavior Between Threads
Mutual Exclusion Lock
As long as one thread holds the lock, no other thread can access the protected resource.
Blocking Lock
A thread that cannot acquire the lock enters the blocked state and waits. When the corresponding signal arrives (a wake-up or a timeout), it moves to the ready state; all ready threads then compete, and the winner enters the running state.
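A classic sketch of blocking with the built-in wait/notify mechanism (the Mailbox class is an illustrative example of ours):

public class Mailbox {

    private final Object lock = new Object();
    private String message;                    // null means "empty"

    public String take() throws InterruptedException {
        synchronized (lock) {
            while (message == null) {
                lock.wait();                   // release the monitor and block until signaled
            }
            String m = message;
            message = null;
            return m;
        }
    }

    public void put(String m) {
        synchronized (lock) {
            message = m;
            lock.notifyAll();                  // wake blocked takers; they re-compete for the lock
        }
    }
}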
Reentrant Lock
Also known as a recursive lock: after a thread acquires the lock in an outer function, inner code on the same thread (including recursive calls) that acquires the same lock is not blocked, because the thread already holds it.
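A minimal illustration: synchronized in Java is reentrant, so the nested call below does not deadlock (class and method names are ours):

public class Reentrancy {

    public synchronized void outer() {
        inner();   // same thread, same lock: allowed to proceed
    }

    public synchronized void inner() {
        // this would deadlock here if the lock were not reentrant
    }
}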
Spin Lock
A thread that fails to acquire the lock is not suspended immediately; instead it executes an empty loop (the "spin"). If the lock becomes available within a few iterations, the thread acquires it and continues; if it still cannot get the lock after spinning, it is suspended.
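A simple spin lock can be sketched with an atomic flag; this is a teaching sketch of ours, not a production lock:

import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {

    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-wait until we flip the flag from false to true; the thread
        // is never suspended, it just keeps running this (nearly) empty loop.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // spin hint to the runtime (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}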
Semaphore Lock
Conceptually, a semaphore is a non-negative integer that is incremented and decremented atomically. If a thread tries to decrement a semaphore whose value is 0, the thread blocks; when another thread "posts" (increments) the semaphore, a blocked thread is released and continues once the value is greater than 0.
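Java provides this directly as java.util.concurrent.Semaphore; a minimal sketch (the ConnectionPool class and the permit count are illustrative):

import java.util.concurrent.Semaphore;

public class ConnectionPool {

    private final Semaphore permits = new Semaphore(3);   // at most 3 concurrent users

    public void use() throws InterruptedException {
        permits.acquire();        // decrement; blocks while the count is 0
        try {
            // ... use the shared resource ...
        } finally {
            permits.release();    // "post": increment and wake a blocked thread, if any
        }
    }
}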
5. Lock State Transitions
A lock can be in one of four states: the no-lock state, biased lock, lightweight lock, and heavyweight lock. As contention grows, a lock can be upgraded from a biased lock to a lightweight lock, and then to a heavyweight lock. The upgrade is one-way: a lock can only move from lower to higher states and is never downgraded. Biased and lightweight locking are enabled by default since JDK 1.6.
Lock Expansion (Inflation)
A lightweight lock expanding into a heavyweight lock; this occurs during the lightweight lock's lock and unlock process when contention is detected.
Heavyweight Lock
synchronized is implemented through a monitor lock held inside the object. The monitor, in turn, is built on the underlying operating system's mutex, and having the operating system switch between threads requires a transition from user mode to kernel mode. That transition is expensive and relatively slow, which is why synchronized is inefficient. A lock that relies on the operating-system mutex in this way is what we call a "heavyweight lock."
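For illustration (the class is ours): javac compiles a synchronized block into monitorenter and monitorexit bytecode instructions, which you can see with javap -c.

public class MonitorDemo {

    private final Object lock = new Object();
    private int state;

    public void update() {
        synchronized (lock) {   // compiles to a monitorenter instruction
            state++;
        }                       // compiles to a monitorexit instruction
    }
}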
Lightweight Lock
"Lightweight" is relative to the traditional lock implemented with an operating-system mutex. The first thing to emphasize is that lightweight locks are not meant to replace heavyweight locks; their purpose is to reduce the performance cost of traditional heavyweight locking when there is no multi-thread contention. Lightweight locks fit the situation where threads execute a synchronized block alternately rather than simultaneously; if the same lock is contended at the same time, the lightweight lock inflates into a heavyweight lock.
Biased Lock
Biased locks were introduced to minimize the unnecessary lightweight-lock execution path when there is no multi-thread contention, because acquiring and releasing a lightweight lock requires multiple CAS atomic instructions, while a biased lock needs only a single CAS when it installs the owning thread's ID. (Because a biased lock must be revoked as soon as multi-thread contention appears, the cost of revocation has to stay below the cost of the CAS instructions saved.) As mentioned above, lightweight locks improve performance when threads execute a synchronized block alternately; biased locks improve it further when only one thread ever executes the block.
No-Lock State
The synchronization object is in the unlocked state when code enters the synchronized block: no thread has locked it yet.
Lock Elision
Lock elision means that the virtual machine's just-in-time compiler removes, at run time, locks around code that requests synchronization but where it detects that no shared-data contention can possibly occur. The main basis for the decision is escape analysis: if the compiler can prove that none of the heap data touched by a piece of code ever escapes to be accessed by another thread, that data can be treated as if it were on the stack and thread-private, and synchronization on it is naturally unnecessary, so the lock is removed.
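A classic candidate for elision (an illustrative sketch of ours): StringBuffer.append() is synchronized, but the buffer below never escapes the method, so the JIT can prove no other thread could ever lock it and drop the synchronization entirely.

public class ElisionDemo {

    public String concat(String a, String b) {
        StringBuffer sb = new StringBuffer();  // thread-local: never escapes this method
        sb.append(a);                          // append() is synchronized, but the lock can be elided
        sb.append(b);
        return sb.toString();
    }
}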
Lock Coarsening
In principle, when writing code we always recommend keeping the scope of a synchronized block as small as possible, synchronizing only over the actual extent of the shared data. This minimizes the number of operations performed while holding the lock, and if there is contention, the threads waiting for the lock can acquire it as quickly as possible.
In most cases this principle is correct. But if a series of consecutive operations repeatedly locks and unlocks the same object, or the locking even sits inside a loop body, then the frequent mutex synchronization causes unnecessary performance loss even when there is no thread contention.
If the virtual machine detects that a string of such fragmented operations all lock the same object, it coarsens (expands) the synchronization scope to cover the entire sequence of operations, so the lock only needs to be acquired once.
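An illustrative sketch of ours showing the pattern coarsening targets:

public class CoarseningDemo {

    public String build(String a, String b, String c) {
        StringBuffer sb = new StringBuffer();
        sb.append(a);   // lock/unlock inside append()
        sb.append(b);   // lock/unlock again
        sb.append(c);   // lock/unlock again -> may be coarsened into one lock around all three
        return sb.toString();
    }
}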
Summary:
Today we summarized the locks used in multithreaded programming; next time we will continue sharing what we learn about locks.