Java Concurrency Programming: synchronized and lock optimization
1. How to use
synchronized is the most common way to ensure thread safety in Java, and it serves three main purposes:
- Mutual exclusion: only one thread at a time can enter the critical section guarded by the lock
- Visibility: changes to shared variables are promptly visible to other threads
- Ordering: it effectively prevents problematic instruction reordering
Semantically, synchronized has three main uses, as sketched in the example after this list:
- On an instance method, the lock is the current object instance (this)
- On a static method, the lock is the Class object of the current class (static methods belong to the class, not to an instance)
- On a code block, the lock is the object given in parentheses
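A minimal sketch of the three forms (the class, field, and method names are illustrative, not from the original):

```java
public class SyncExamples {
    private int counter = 0;
    private static int staticCounter = 0;

    // 1. Instance method: the lock is the current instance (this)
    public synchronized void incrementInstance() {
        counter++;
    }

    // 2. Static method: the lock is the Class object (SyncExamples.class)
    public static synchronized void incrementStatic() {
        staticCounter++;
    }

    // 3. Code block: the lock is the object in parentheses
    public void incrementBlock() {
        synchronized (this) {
            counter++;
        }
    }
}
```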
2. Implementation principle
2.1. Monitor lock
The semantics of a synchronized code block are implemented on top of the monitor lock inside the object, using the monitorenter and monitorexit instructions. In fact, wait/notify also relies on the monitor object, which is why they are normally used inside a synchronized method or block. When the code is compiled to bytecode, a monitorenter instruction is inserted at the start of the synchronized block, and monitorexit instructions are inserted at the end of the block and at the exception exit. The JVM guarantees that every monitorenter has a corresponding monitorexit.
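A rough illustration of where those instructions end up (running javap -c on the compiled class shows the monitorenter/monitorexit pair plus the extra monitorexit on the exception path; the class below is only an example):

```java
public class MonitorDemo {
    private final Object lock = new Object();
    private int value;

    public void update() {
        synchronized (lock) {   // bytecode: monitorenter on the lock object
            value++;
        }                       // bytecode: monitorexit on the normal exit,
                                // plus a compiler-generated exception handler
                                // that also runs monitorexit before rethrowing
    }
}
```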
monitorenter: every object has a monitor lock, which is locked while the monitor is owned by a thread. When a thread executes the monitorenter instruction, it attempts to take ownership of the monitor, that is, to acquire the object's lock, as follows:
- If the monitor's entry count is 0, the thread enters the monitor, sets the entry count to 1, and becomes the monitor's owner;
- If the thread already owns the monitor and is simply re-entering it, the entry count is incremented by 1;
- If another thread already owns the monitor, the thread blocks until the entry count drops to 0, then tries again to acquire ownership of the monitor.
monitorexit: the thread executing monitorexit must be the owner of the monitor associated with the objectref. The instruction decrements the entry count by 1; if the count reaches 0, the thread exits the monitor and is no longer its owner, and other threads blocked on the monitor may then try to acquire ownership.
2.2. Thread states and state transitions
In the HotSpot JVM, the monitor is implemented by ObjectMonitor, whose main fields are as follows:
```cpp
ObjectMonitor() {
    _header       = NULL;
    _count        = 0;      // record count
    _waiters      = 0,
    _recursions   = 0;
    _object       = NULL;
    _owner        = NULL;   // the thread that holds the monitor
    _WaitSet      = NULL;   // threads in the wait state are added to _WaitSet
    _WaitSetLock  = 0;
    _Responsible  = NULL;
    _succ         = NULL;
    _cxq          = NULL;
    FreeNext      = NULL;
    _EntryList    = NULL;   // threads blocked waiting for the lock are added to _EntryList
    _SpinFreq     = 0;
    _SpinClock    = 0;
    OwnerIsThread = 0;
}
```
ObjectMonitor has two queues, _WaitSet and _EntryList, that hold lists of ObjectWaiter objects (every thread waiting for the lock is wrapped in an ObjectWaiter), while _owner points to the thread that currently holds the ObjectMonitor.
- When multiple threads access the same synchronized code at the same time, they first enter _EntryList and block while waiting for the lock.
- When a thread acquires the object's monitor, it enters the Owner area, sets the _owner field of the ObjectMonitor to the current thread, and increments the monitor's _count by 1.
- If the thread calls wait(), it releases the monitor it currently holds, resets _owner to null, decrements _count by 1, and joins the _WaitSet, waiting to be woken up.
- When the current thread finishes executing, it also releases the monitor and resets the fields so that other threads can compete for the monitor.
The process is as follows:
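In plain Java, the same flow looks roughly like the sketch below (the class name, the buffer capacity, and the choice of notifyAll() are illustrative assumptions, not part of the original):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class BoundedBuffer {
    private final Object lock = new Object();
    private final Queue<Integer> queue = new ArrayDeque<>();
    private static final int CAPACITY = 10;

    public void put(int item) throws InterruptedException {
        synchronized (lock) {            // the thread becomes the monitor owner
            while (queue.size() == CAPACITY) {
                lock.wait();             // releases the monitor, joins the _WaitSet
            }
            queue.add(item);
            lock.notifyAll();            // wakes threads parked in the _WaitSet
        }                                // the monitor is released on exit
    }

    public int take() throws InterruptedException {
        synchronized (lock) {
            while (queue.isEmpty()) {
                lock.wait();
            }
            int item = queue.remove();
            lock.notifyAll();
            return item;
        }
    }
}
```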
3. Lock optimization
JDK 1.6 introduced a variety of lock optimizations, such as lightweight locks, biased locks, adaptive spinning, lock coarsening, and lock elimination, all designed to handle contention between threads more efficiently and thus improve program performance.
Lightweight locks and biased locks were introduced to reduce the use of heavyweight locks. A lock can be in one of four states: unlocked, biased, lightweight, and heavyweight. A lock can be upgraded as contention increases, but it cannot be downgraded after an upgrade: a lightweight lock cannot fall back to the biased state, and a heavyweight lock cannot fall back to the lightweight state.
Unlocked → biased lock → lightweight lock → heavyweight lock
3.1. Object Header
To understand how lightweight and biased locking work, start with the object header, which is divided into two parts:
1. Mark Word: stores the object's own runtime data, such as the hash code, GC generational age, and lock information. It is 32 bits wide on a 32-bit JVM and 64 bits wide on a 64-bit JVM. For space efficiency, the Mark Word is a non-fixed data structure that packs as much information as possible into a very small space; the layout of the 32-bit Mark Word is shown below:
2. Klass pointer: stores a pointer to the object's type metadata in the method area; if the object is an array, the array length is stored as well.
3.2. Heavyweight lock
The monitor lock ultimately relies on the operating system's mutex (mutual exclusion lock), which is why it is commonly called a heavyweight lock. Because the OS has to switch from user mode to kernel mode to suspend and resume threads, the transition is costly and time-consuming, so synchronized implemented this way is relatively inefficient.
For a heavyweight lock, the lock flag bits in the Mark Word are '10' and the pointer field points to the start address of the monitor object; the monitor itself works as described above.
3.3. Lightweight lock
A lightweight lock is "lightweight" relative to the heavyweight lock built on the OS mutex; its purpose is to reduce the performance cost of traditional heavyweight locks when there is no multi-threaded contention.
The performance gain of lightweight locks rests on the observation that for the vast majority of locks, there is no contention during the entire synchronization period. In the absence of contention, a lightweight lock uses CAS operations to avoid the overhead of a mutex and thereby improves efficiency.
Locking process for a lightweight lock:
1. When a thread enters the synchronized block, the JVM first creates a space called the Lock Record in the current thread's stack frame to store a copy of the lock object's current Mark Word (officially called the Displaced Mark Word), and the Lock Record's owner pointer points to the object's Mark Word. The state of the thread stack and the object header at this point:
2. The JVM then uses a CAS operation to try to update the Mark Word in the object header to a pointer to the Lock Record. If the update succeeds, go to step 3; if it fails, go to step 4.
3. If the update succeeds, the thread owns the lock on the object, and the object's Mark Word is in the lightweight-locked state (the lock flag bits become '00'). The state of the thread stack and the object header at this point:
4. If the update fails, the JVM first checks whether the object's Mark Word already points to the current thread's stack frame:
- If it does, the current thread already owns the lock on this object and can enter the synchronized block directly.
- If it does not, the lock has already been taken by another thread, and the current thread spins a certain number of times trying to acquire the lock. If the CAS still fails after spinning, the lightweight lock is inflated to a heavyweight lock (the lock flag bits become '10'), the Mark Word stores a pointer to the heavyweight lock (monitor), and the threads waiting for the lock go into the blocked state.
Unlocking process for a lightweight lock:
1. Use a CAS operation to replace the object's current Mark Word with the Displaced Mark Word that was copied into the thread's Lock Record.
2. If the replacement succeeds, the whole synchronization process is complete.
3. If the replacement fails, another thread has tried to acquire the lock in the meantime; the lock is released and the suspended threads are woken up.
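The core idea (acquire by CAS, spin briefly, fall back when the CAS keeps failing) can be sketched in plain Java; this toy class is only an analogy for the JVM-internal mechanism, not how HotSpot implements it:

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy CAS-based spin lock; an analogy for the lightweight-lock idea only.
public class ToySpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();
    private static final int MAX_SPINS = 1000;   // arbitrary spin budget

    public void lock() {
        Thread current = Thread.currentThread();
        int spins = 0;
        // Try to install ourselves as the owner with CAS, spinning on failure.
        while (!owner.compareAndSet(null, current)) {
            if (++spins > MAX_SPINS) {
                // A real JVM would inflate to a heavyweight lock at this point;
                // here we just yield so the toy does not burn the CPU forever.
                Thread.yield();
            }
        }
    }

    public void unlock() {
        // Only the owner can release; the CAS back to null loosely mirrors
        // restoring the Displaced Mark Word.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```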
3.4. Biased lock
A lightweight lock uses CAS operations to eliminate the mutex in the absence of multi-threaded contention; a biased lock goes further and eliminates even that synchronization when there is no contention.
The performance gain of biased locks rests on the observation that for the vast majority of locks, not only is there no contention during the entire synchronization period, but the lock is always acquired repeatedly by the same thread. A biased lock is biased toward the first thread that acquires it; if the lock is never taken by another thread afterwards, the thread holding the biased lock never needs to synchronize again, which makes acquiring the lock very cheap for that thread.
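Biased locking targets exactly the workload sketched below: the same thread re-acquiring the same uncontended lock over and over. The class name and loop count are made up; the HotSpot flags -XX:+UseBiasedLocking and -XX:BiasedLockingStartupDelay=0 control the feature (it was deprecated in JDK 15):

```java
// Single-threaded workload where biased locking pays off: the same thread
// repeatedly acquires the same lock with no contention.
public class BiasedLockDemo {
    private int counter = 0;

    private synchronized void increment() {   // lock on `this`, never contended
        counter++;
    }

    public static void main(String[] args) {
        BiasedLockDemo demo = new BiasedLockDemo();
        long start = System.nanoTime();
        for (int i = 0; i < 100_000_000; i++) {
            demo.increment();                  // same thread, same lock, every time
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("elapsed: " + elapsedMs + " ms, counter = " + demo.counter);
    }
}
```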
Acquisition process of a biased lock:
1. When a thread executes the synchronized block and acquires the lock object for the first time, the JVM sets the lock state in the object's Mark Word to biased (the lock flag bits are '01' and the biased flag bit is '1'), and uses a CAS operation to record the ID of the thread acquiring the lock in the Mark Word.
2. If the CAS succeeds, then every time the thread holding the biased lock enters or exits the synchronized block it only needs to check whether the Mark Word stores the current thread's ID. If it does, the thread holds the lock without any extra CAS operations for locking and unlocking.
3. If it does not, the thread competes for the lock with a CAS operation; if the competition succeeds, the thread ID in the Mark Word is replaced with the current thread's ID.
Release (revocation) process of a biased lock:
1. When one thread holds a biased lock and another thread tries to compete for it, the CAS that replaces the thread ID fails and biased-lock revocation begins. Revocation requires waiting until the thread that originally held the biased lock reaches a global safepoint (a point at which no bytecode is being executed), then pausing that thread and checking its state.
2. If the thread that originally held the biased lock is no longer active or has already exited the synchronized block, the lock is released and the object header is set to the unlocked state (the lock flag bits are '01' and the biased flag bit is '0').
3. If the thread holding the biased lock has not yet exited the synchronized block, the lock is upgraded to a lightweight lock (the lock flag bits become '00').
3.5. Summary
The state transitions between biased, lightweight, and heavyweight locks (summarizing the acquisition and release processes described above):
A comparison of these kinds of locks:
3.6. Other optimizations
1. Adaptive spin
Spin lock: with mutex-based synchronization, both suspending and resuming a thread require switching into kernel mode, which puts a lot of pressure on concurrent performance. At the same time, in many applications the lock on shared data is held only for a very short time, and it is not worth suspending and resuming threads for such a short period. So when multiple threads are executing concurrently, a thread that requests the lock later can be made to wait a moment by spinning (a CPU busy loop that executes empty instructions) to see whether the thread holding the lock releases it quickly, so that the thread's CPU time is not given up.
Adaptive spinning: when a thread's CAS operation fails during lightweight lock acquisition, it spins while trying to avoid falling back to a heavyweight lock. If the lock is held only briefly, spin-waiting works well; if it is held for a long time, the spinning thread just wastes CPU. The simplest remedy is to cap the number of spins: if the lock has not been acquired within the limit (for example, 10 spins), the thread is suspended into the blocked state in the traditional way. JDK 1.6 introduced adaptive spinning: if spinning on a given lock object recently succeeded and the thread holding the lock is running, the JVM assumes that spinning is likely to succeed again and allows a relatively longer spin (for example, 100 iterations). Conversely, if spinning rarely succeeds for a lock, the spin phase is skipped later on to avoid wasting CPU.
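The heuristic can be sketched with a toy lock whose spin budget grows after successful spins and shrinks after failed ones; this is only an analogy for the JVM's adaptive behavior under assumed parameters, not HotSpot's actual implementation:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;

// Toy adaptive spin lock: the spin budget adapts to recent success/failure.
public class AdaptiveSpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();
    private volatile int spinBudget = 10;    // starting budget, arbitrary

    public void lock() {
        Thread current = Thread.currentThread();
        int spins = 0;
        while (!owner.compareAndSet(null, current)) {
            if (++spins >= spinBudget) {
                // Spinning "failed": shrink the budget and park briefly,
                // standing in for blocking on a heavyweight lock.
                spinBudget = Math.max(1, spinBudget / 2);
                LockSupport.parkNanos(1_000);
                spins = 0;
            }
        }
        if (spins > 0) {
            // Spinning succeeded: allow a longer spin next time.
            spinBudget = Math.min(1_000, spinBudget * 2);
        }
    }

    public void unlock() {
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```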
2. Lock elimination
Lock elimination means that the just-in-time compiler removes locks on shared data that it can prove will never be contended. If escape analysis shows that in a piece of code the data on the heap never escapes and can never be accessed by other threads, that data can be treated like data on the stack, considered thread-private, and the locking can be skipped.
```java
public String concatString(String s1, String s2, String s3) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1);
    sb.append(s2);
    sb.append(s3);
    return sb.toString();
}
```
Every StringBuffer.append() call contains a synchronized block whose lock is the sb object, but no reference to sb ever escapes the concatString() method, so no other thread can access it. There is a lock here, yet after just-in-time compilation it is safely eliminated and the code runs with the synchronization ignored.
3. Lock coarsening
Lock coarsening: when the JVM detects that a series of small, consecutive operations all lock the same object, it widens (coarsens) the synchronization scope to cover the entire sequence. Taking the concatString() method above as an example, each internal StringBuffer.append() call takes the lock; with lock coarsening, the lock only needs to be taken once, extending from before the first append() to after the last one.
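Conceptually, the coarsened result behaves roughly like the hand-written version below (illustrative only; the JIT performs this transformation on compiled code, not on the source):

```java
public String concatStringCoarsened(String s1, String s2, String s3) {
    StringBuffer sb = new StringBuffer();
    synchronized (sb) {        // one coarsened lock around the whole sequence...
        sb.append(s1);         // ...instead of the lock inside each append() call
        sb.append(s2);
        sb.append(s3);
    }
    return sb.toString();
}
```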