I. Preface
To borrow from Java Concurrency in Practice: "writing correct programs is hard, and writing correct concurrent programs is harder still." Compared with sequential execution, multithreaded thread-safety problems are subtle and surprising, because without proper synchronization the order of operations across threads is unpredictable. This article is a brief introduction to synchronization strategies for multithreaded code.
II. What is a thread-safety issue?
A thread-safety issue arises when multiple threads read and write a shared state variable at the same time with no synchronization measures, producing dirty data or other unpredictable results. The primary synchronization mechanism in Java is the synchronized keyword, which provides an exclusive, reentrant lock.
III. What is the shared-variable visibility problem?
Before talking about visibility, we need to introduce the Java memory model for shared variables under multithreading.
The Java memory model stipulates that all variables are stored in main memory; when a thread uses a variable, it copies the variable from main memory into its own working memory.
As the dual-core CPU system architecture shows, each core has its own controller and arithmetic unit: the controller contains a set of registers and an operation controller, the arithmetic unit performs arithmetic and logic operations, and each core has its own L1 cache; some dual-core architectures also have a shared L2 cache. The working memory of the Java memory model corresponds, in this implementation, to the L1 or L2 cache or the registers of each CPU.
When a thread operates on a shared variable, the process is: the thread first copies the variable from main memory into its working memory, then processes the copy in working memory, and finally writes the updated value back to main memory.
So what happens if threads A and B process a shared variable at the same time?
Both follow the three-step process above. Suppose thread A copies the shared variable into its working memory and updates it, but has not yet written it back to main memory (the result may still sit in the current CPU's register or cache); meanwhile thread B copies the shared variable into its own working memory and processes it. Thread A then flushes its result to main memory (or cache), but thread B processed a stale copy and never saw thread A's result. In other words, the value thread A wrote is not visible to thread B — this is the shared-variable visibility problem.
The root cause of this invisibility is that the three-step process is not atomic; the sections below show how proper synchronization solves the problem.
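The stale-working-copy hazard can be sketched with a simple flag. This is a hypothetical demo, not from the original text; it uses volatile (covered in detail in the volatile section) so that the writer's update is guaranteed to reach the reader:

```java
public class VisibilityDemo {
    // volatile forces writes to main memory and reads from main memory;
    // without it, the reader thread could spin on a stale cached copy.
    private static volatile boolean running = true;

    // returns true if the reader thread observed the write and terminated
    public static boolean demo() throws InterruptedException {
        running = true;
        Thread reader = new Thread(() -> {
            while (running) { /* spin until the update becomes visible */ }
        });
        reader.start();
        Thread.sleep(50);      // let the reader enter its loop
        running = false;       // publish the update
        reader.join(2000);
        return !reader.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("reader stopped: " + demo()); // true
    }
}
```

Removing volatile here may (depending on the JVM and JIT) make the reader spin forever, which is exactly the invisibility described above.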
We know that ArrayList is not thread-safe: its read and write methods have no synchronization strategy, which can lead to dirty data and unpredictable results. Below we walk through the ways to fix this, one by one.
This is the thread-unsafe version:
public class ArrayList<E> {
    public E get(int index) {
        rangeCheck(index);
        return elementData(index);
    }

    public E set(int index, E element) {
        rangeCheck(index);
        E oldValue = elementData(index);
        elementData[index] = element;
        return oldValue;
    }
}
IV. Atomicity
4.1 Introduction
Suppose thread A performs operation Ao and thread B performs operation Bo. If, from A's perspective, when thread B executes Bo either all of Bo has executed or none of it has, we say Ao and Bo are atomic with respect to each other. A counter is typically implemented by reading the current value, adding 1, and writing the variable back — a read-modify-write sequence that must be an atomic operation.
public class ThreadNotSafeCount {
    private long value;

    public long getCount() {
        return value;
    }

    public void inc() {
        ++value;
    }
}
The code above is not thread-safe because there is no guarantee that ++value is an atomic operation. The first fix is to synchronize with synchronized, as follows:
public class ThreadSafeCount {
    private long value;

    public synchronized long getCount() {
        return value;
    }

    public synchronized void inc() {
        ++value;
    }
}
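As a quick sanity check — a hypothetical harness, not from the article — two threads each incrementing the synchronized counter 10,000 times always end at 20,000, because inc() is atomic:

```java
class ThreadSafeCount {
    private long value;
    public synchronized long getCount() { return value; }
    public synchronized void inc() { ++value; }
}

public class ThreadSafeCountDemo {
    public static long run() throws InterruptedException {
        ThreadSafeCount c = new ThreadSafeCount();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.inc(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return c.getCount();   // always 20000 with synchronization
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 20000
    }
}
```

With the unsynchronized ThreadNotSafeCount, the same harness would usually print less than 20000 because increments get lost.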
Note that simply marking value as volatile would not make this safe, because the new value depends on the current value (a read-modify-write).
Using synchronized does achieve thread safety, that is, both visibility and atomicity, but synchronized is an exclusive lock: threads that fail to acquire the internal lock are blocked. Is there a lighter-weight implementation? The answer is yes.
4.2 Atomic variable classes
Atomic variable classes are lighter-weight than locks. For example, AtomicLong represents a long value and provides get and set methods whose semantics match volatile, because AtomicLong internally stores the actual long in a volatile field. It also provides atomic increment and decrement operations, so the counter can be rewritten as:
public class ThreadSafeCount {
    private AtomicLong value = new AtomicLong(0L);

    public long getCount() {
        return value.get();
    }

    public void inc() {
        value.incrementAndGet();
    }
}
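The same two-thread harness as before (again a hypothetical demo) confirms that the atomic version loses no increments, with no lock taken:

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicCountDemo {
    public static long run() throws InterruptedException {
        AtomicLong value = new AtomicLong(0L);
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) value.incrementAndGet();
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return value.get();   // 20000, no locking required
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 20000
    }
}
```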
The advantage over synchronized is that atomic classes do not cause threads to be suspended and rescheduled, because they use a non-blocking CAS algorithm.
The common atomic classes are AtomicLong, AtomicInteger, AtomicBoolean, and AtomicReference.
V. Introduction to CAS
CAS stands for compare-and-set (or compare-and-swap). A CAS takes three operands: a memory location, the expected old value, and the new value. Its meaning: replace the value at the memory location with the new value only if it currently equals the expected old value. Colloquially: "check whether the variable at this location holds the old value I gave you; if so, swap in my new value; if not, give me back what is actually there." This is an atomic instruction provided by the processor. The AtomicLong described above is implemented with it:
public final long incrementAndGet() {
    for (;;) {
        long current = get();                 // (1)
        long next = current + 1;              // (2)
        if (compareAndSet(current, next))     // (3)
            return next;
    }
}

public final boolean compareAndSet(long expect, long update) {
    return unsafe.compareAndSwapLong(this, valueOffset, expect, update);
}
Suppose the current value is 1 and threads A and B both reach (3) with next == 2 and current == 1. If thread A executes (3) first, the CAS is atomic: the expected value 1 matches, so it updates the value to 2 and returns true, and incrementAndGet returns 2. When thread B then executes (3), its local current is still 1 but the shared value is now 2, so the CAS returns false and the loop retries. If no other thread touches the variable in the meantime, thread B updates it to 3 on the next iteration and exits.
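The compareAndSet semantics in that walkthrough can be seen directly (a hypothetical snippet, not from the article):

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasDemo {
    public static void main(String[] args) {
        AtomicLong v = new AtomicLong(1L);
        boolean first = v.compareAndSet(1L, 2L);  // expected 1, found 1: swaps, true
        boolean second = v.compareAndSet(1L, 3L); // expected 1, found 2: fails, false
        System.out.println(first + " " + second + " " + v.get()); // true false 2
    }
}
```

The second call is exactly thread B's failed attempt: the expected old value no longer matches, so the caller must re-read and retry.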
The infinite CAS polling loop wastes some CPU cycles, but compared with locks it avoids thread context switching and scheduling.
VI. What is a reentrant lock?
A thread blocks when it tries to acquire a lock held by another thread. But does it also block when it re-acquires a lock it already holds? If not, we say the lock is reentrant: once a thread holds the lock, it can enter code guarded by that lock any number of times.
First look at an example of what would happen if the lock were not reentrant:
public class Hello {
    public synchronized void helloA() {
        System.out.println("hello A");
    }

    public synchronized void helloB() {
        System.out.println("hello B");
        helloA();
    }
}
In the code above, calling helloB first acquires the built-in lock, prints its output, and then calls helloA, which must acquire the same built-in lock before running. If the built-in lock were not reentrant, the call would deadlock: the thread would be holding the lock while waiting for it.
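In fact Java's intrinsic lock is reentrant, which a tiny demo (a hypothetical class, not from the article) confirms — the nested call returns instead of deadlocking:

```java
public class ReentrantDemo {
    public static synchronized String helloA() {
        return "hello A";
    }

    public static synchronized String helloB() {
        // this thread already holds the class's intrinsic lock;
        // because the lock is reentrant, calling helloA() does not block
        return helloA() + ", hello B";
    }

    public static void main(String[] args) {
        System.out.println(helloB()); // hello A, hello B
    }
}
```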
The internal lock is indeed reentrant — for example, methods guarded by the synchronized keyword. A reentrant lock works by maintaining an owner marker and a counter inside the lock. A counter of 0 means the lock is held by no thread; when a thread acquires the lock, the counter becomes 1 and the owner is recorded. Other threads see that the owner is not themselves and block on acquisition, but when the owning thread acquires the lock again, it sees that it is the owner and simply increments the counter. Each release decrements the counter; when the counter reaches 0, the owner is reset to null and a blocked thread is woken up to take the lock.
VII. The synchronized keyword
7.1 Introduction
The synchronized block is the built-in locking mechanism Java provides: every Java object can implicitly act as a lock for synchronization, called its internal lock or monitor lock. A thread automatically acquires the internal lock before entering a synchronized block, and other threads attempting to enter the block are blocked. The thread holding the internal lock releases it when it exits the block normally or by throwing an exception, at which point a blocked thread can acquire the internal lock and enter the block.
7.2 A synchronized example
An internal lock is a mutex: at most one thread can hold the lock at a time, and while one thread holds it without releasing it, all other threads can only wait.
For the ArrayList mentioned above, we can use synchronized to fix the visibility problem:
Synchronizing the methods with synchronized:
public class ArrayList<E> {
    public synchronized E get(int index) {
        rangeCheck(index);
        return elementData(index);
    }

    public synchronized E set(int index, E element) {
        rangeCheck(index);
        E oldValue = elementData(index);
        elementData[index] = element;
        return oldValue;
    }
}
If thread A holds the internal lock inside a synchronized block and thread B tries to enter, B must wait until A releases the lock. Synchronization also guarantees that the values thread A wrote while holding the lock are visible to B once B acquires the same lock: when B starts executing a synchronized block that A has finished, it sees all the variable values A wrote. Concretely, when thread A enters the synchronized block and modifies a variable, it flushes the value to main memory before exiting the block; thread B clears its local memory before entering the block and re-reads the variable from main memory, which achieves visibility. Note, however, that this guarantee only holds when both threads synchronize on the same lock.
Note that the synchronized keyword can cause thread context switching and thread scheduling.
VIII. Introduction to ReentrantReadWriteLock
Synchronization can be achieved with synchronized, but with the drawback that only one thread can access the shared variable at a time. Normally, though, multiple concurrent reads of a shared variable need no mutual exclusion, and synchronized cannot let multiple reader threads proceed simultaneously. Since in most cases reads far outnumber writes, this greatly reduces concurrency. Hence ReentrantReadWriteLock, which separates reads from writes: multiple threads may read at the same time, but at most one writer thread may exist.
The class above can now be rewritten as:
public class ArrayList<E> {
    private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    // backing storage that the locked methods delegate to
    private final List<E> list = new java.util.ArrayList<E>();

    public E get(int index) {
        Lock readLock = readWriteLock.readLock();
        readLock.lock();
        try {
            return list.get(index);
        } finally {
            readLock.unlock();
        }
    }

    public E set(int index, E element) {
        Lock writeLock = readWriteLock.writeLock();
        writeLock.lock();
        try {
            return list.set(index, element);
        } finally {
            writeLock.unlock();
        }
    }
}
The get method acquires the read lock via readWriteLock.readLock(), which multiple threads can hold at the same time; the set method acquires the write lock via readWriteLock.writeLock(), which only one thread can hold at a time, all other threads blocking until it is released. If a thread holds the read lock, a thread requesting the write lock must wait until the read lock is released; if a thread holds the write lock, all threads requesting the read lock must wait until the write lock is released. Multiple readers can therefore run at the same time, so concurrency improves compared with synchronized.
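Those rules can be probed directly with tryLock (a hypothetical demo, not from the article): while one thread holds the read lock, a second read acquisition succeeds but a write acquisition fails:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockDemo {
    // probe the lock while another thread holds the read lock:
    // returns {couldAlsoRead, couldWrite}
    public static boolean[] probe() throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        CountDownLatch holding = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);
        Thread reader = new Thread(() -> {
            rw.readLock().lock();        // first reader takes the read lock
            holding.countDown();
            try {
                release.await();
            } catch (InterruptedException ignored) {
            }
            rw.readLock().unlock();
        });
        reader.start();
        holding.await();                 // wait until the read lock is held
        boolean couldAlsoRead = rw.readLock().tryLock();  // shared: succeeds
        boolean couldWrite = rw.writeLock().tryLock();    // exclusive: fails
        if (couldAlsoRead) rw.readLock().unlock();
        if (couldWrite) rw.writeLock().unlock();
        release.countDown();
        reader.join();
        return new boolean[] { couldAlsoRead, couldWrite };
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] r = probe();
        System.out.println("second read: " + r[0] + ", write: " + r[1]); // true, false
    }
}
```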
Note that the caller must explicitly invoke the lock and unlock operations.
IX. volatile variables
For visibility, Java also provides a weak form of synchronization: the volatile keyword. It ensures that updates to a variable are visible to other threads. When a variable is declared volatile, a writing thread does not cache the value in a register or elsewhere, and a reading thread fetches the latest value from main memory rather than using its own cached copy.
While volatile provides a visibility guarantee, it cannot be used to build compound atomic operations: it does not help when a variable's new value depends on other variables or on its own current value. Its memory semantics are similar to those of synchronized, as the figure shows.
As the figure shows, when thread A writes volatile variable b and thread B then reads it, all variable values visible to A before the write (the values of both a and b in the figure) become visible to B after the read. volatile's memory semantics resemble synchronized's: writing a volatile variable is equivalent to exiting a synchronized block (which flushes variable values from local memory to main memory), and reading a volatile variable is equivalent to entering a synchronized block (local memory is cleared first and the latest values are fetched from main memory).
The following integer holder is likewise not thread-safe, because it has no synchronization:
public class ThreadNotSafeInteger {
    private int value;

    public int get() {
        return value;
    }

    public void set(int value) {
        this.value = value;
    }
}
Use the synchronized keyword for synchronization as follows:
public class ThreadSafeInteger {
    private int value;

    public synchronized int get() {
        return value;
    }

    public synchronized void set(int value) {
        this.value = value;
    }
}
This is equivalent to using volatile for synchronization, as follows:
public class ThreadSafeInteger {
    private volatile int value;

    public int get() {
        return value;
    }

    public void set(int value) {
        this.value = value;
    }
}
Here synchronized and volatile are equivalent, but that is not true in all cases. In general, volatile can replace locking only when all of the following conditions hold: writes to the variable do not depend on its current value, or only a single thread ever updates it; the value being written does not depend on other variables; and locking is not required for any other reason while the variable is being accessed.
In other words, locks guarantee both visibility and atomicity, while volatile only guarantees the visibility of variable values.
Note that the volatile keyword does not cause thread context switching or thread scheduling. In addition, volatile is also used to solve the reordering problem, which will be covered later.
10. Optimistic and pessimistic locking
10.1 Pessimistic locking
Pessimistic locking takes a conservative (pessimistic) attitude toward outside modification of the data: the data stays locked throughout the whole processing. Pessimistic locking usually relies on the locking mechanism provided by the database: a record is locked before the data record is operated on; if locking fails, the record is being modified by another transaction, so we wait or throw an exception; if locking succeeds, we fetch the record, modify it, and release the exclusive lock when the transaction commits.
An example: SELECT * FROM table WHERE ... FOR UPDATE;
Pessimistic locking is a lock-first, access-later strategy. It imposes extra locking overhead on the database and increases the chance of deadlock. Moreover, in read-only scenarios with multiple threads, no data inconsistency can occur, so locking would only add system load and reduce concurrency: while one transaction holds the record lock, other transactions reading the record can only wait.
10.2 Optimistic locking
Optimistic locking, by contrast, assumes that conflicts on the data are generally rare, so it does not take an exclusive lock when accessing a record; instead, conflicts are formally detected at update time — typically by checking the number of rows the UPDATE affected — and the caller decides what to do. Optimistic locking does not use the database's locking mechanism; it is usually implemented by adding a version field to the table or by using business state.
For details, refer to: https://www.atatech.org/articles/79240
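The version-field idea can be mimicked in memory with a compare-and-set on an immutable row. This is a hypothetical sketch; the article's setting is a database table, and the CAS here plays the role of the conditional UPDATE ... WHERE version = ? statement:

```java
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticDemo {
    // a record with a version number, like a table row with a version column
    static final class Row {
        final int version;
        final String data;
        Row(int version, String data) { this.version = version; this.data = data; }
    }

    private final AtomicReference<Row> row =
            new AtomicReference<>(new Row(0, "initial"));

    // read without locking, compute, then commit only if the row we read
    // is still current; a false return is the "0 rows affected" conflict case
    public boolean tryUpdate(String newData) {
        Row observed = row.get();
        Row updated = new Row(observed.version + 1, newData);
        return row.compareAndSet(observed, updated);
    }

    public String data() { return row.get().data; }

    public static void main(String[] args) {
        OptimisticDemo d = new OptimisticDemo();
        System.out.println(d.tryUpdate("x")); // true: no conflict
        System.out.println(d.data());         // x
    }
}
```

On a false return the caller re-reads the row and retries or reports the conflict, just as with a version-checked SQL update.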
Optimistic locking takes no lock until commit time, so it produces no locking overhead and no deadlocks.
11. Exclusive and shared locks
Depending on whether a lock can be held by a single thread or by multiple threads at once, locks divide into exclusive locks and shared locks. An exclusive lock guarantees that at any time only one thread holds read and write access; ReentrantLock is an exclusively-held mutex. A shared lock allows multiple reader threads at the same time but at most one writer thread, with reads and writes mutually exclusive; ReadWriteLock, for example, allows a resource to be read by multiple threads concurrently, or written by one thread, but not both at once.
An exclusive lock is a pessimistic strategy: it puts a mutex around every access to the resource. This limits concurrency, because read operations do not affect data consistency, yet an exclusive lock allows only one thread to read the data at a time while the others must wait for the current thread to release the lock.
A shared lock is an optimistic strategy: it relaxes the locking condition and allows multiple threads to read at the same time.
12. Fair and unfair locks
Depending on the lock-acquisition preemption mechanism, locks divide into fair and unfair locks. A fair lock grants the lock in the order in which threads requested it — the thread that has waited longest acquires the lock first, i.e. first come, first served (FIFO order). An unfair lock allows barging at acquisition time, so first come does not necessarily mean first served.
ReentrantLock provides both fair and unfair lock implementations:
// fair lock
ReentrantLock fairLock = new ReentrantLock(true);
// unfair lock
ReentrantLock unfairLock = new ReentrantLock(false);
If no argument is passed to the constructor, the default is an unfair lock.
Prefer unfair locks unless fairness is actually required, because fair locks incur extra performance overhead.
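The two constructor forms can be checked with isFair() (a hypothetical snippet; in the JDK the no-arg constructor creates an unfair lock):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock fairLock = new ReentrantLock(true);
        ReentrantLock defaultLock = new ReentrantLock(); // no-arg constructor
        System.out.println(fairLock.isFair());    // true
        System.out.println(defaultLock.isFair()); // false: default is unfair
    }
}
```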
Suppose thread A already holds the lock, and thread B requests it and is suspended. When thread A releases the lock, if thread C happens to request the lock at that very moment, then under the unfair policy the lock may go to either B or C as the scheduling policy decides — C can barge in and take it with no extra suspension needed. Under the fair policy, C would have to be suspended and B, having waited longer, would get the lock.
13. Introduction to AbstractQueuedSynchronizer
Most developers will never use AQS directly. AbstractQueuedSynchronizer maintains a queue of waiting threads plus a single state variable that holds the synchronizer's state, accessed through the protected getState, setState, and compareAndSetState methods. For ReentrantLock, state represents the number of times the owning thread has acquired the lock; for Semaphore, it is the number of permits currently available; for FutureTask, it is the task's state (for example: not yet started, running, completed, cancelled).
14. How CountDownLatch works
14.1 An example
public class Test {
    private static final int THREAD_NUM = 10;

    public static void main(String[] args) {
        // create a CountDownLatch instance managing a count of THREAD_NUM
        CountDownLatch countDownLatch = new CountDownLatch(THREAD_NUM);
        // create a fixed-size thread pool
        ExecutorService executor = Executors.newFixedThreadPool(THREAD_NUM);
        // submit the tasks to the thread pool
        for (int i = 0; i < THREAD_NUM; ++i) {
            executor.execute(new Person(countDownLatch, i + 1));
        }
        System.out.println("Start waiting for everyone to check in...");
        try {
            // wait for all the tasks to finish executing
            countDownLatch.await();
            System.out.println("Check-in complete, time to eat");
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            executor.shutdown();
        }
    }

    static class Person implements Runnable {
        private CountDownLatch countDownLatch;
        private int index;

        public Person(CountDownLatch cdl, int index) {
            this.countDownLatch = cdl;
            this.index = index;
        }

        @Override
        public void run() {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("person " + index + " checked in");
            // this task is done, so decrement the counter
            countDownLatch.countDown();
        }
    }
}
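A stripped-down version of the same pattern (a hypothetical mini-demo) shows the whole await/countDown handshake in a few lines:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchDemo {
    // returns true if await unblocked after both workers counted down
    public static boolean run() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(2);
        Runnable worker = latch::countDown;   // each worker decrements once
        new Thread(worker).start();
        new Thread(worker).start();
        // await with a timeout so a bug cannot hang the demo
        return latch.await(5, TimeUnit.SECONDS) && latch.getCount() == 0;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("all checked in: " + run()); // true
    }
}
```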
As the code shows, we create a thread pool and a CountDownLatch instance, and each task receives the CountDownLatch through its constructor. The main thread waits via await for the tasks in the pool to finish executing; each task calls countDown when it completes, decrementing the counter, and once all the tasks have finished, the main thread's await returns.
14.2 How it works
First look at the class diagram:
It shows that CountDownLatch is implemented on top of AQS.
The constructor first initializes the AQS state value:
public CountDownLatch(int count) {
    if (count < 0) throw new IllegalArgumentException("count < 0");
    this.sync = new Sync(count);
}

Sync(int count) {
    setState(count);
}
Then look at the await method:
public final void acquireSharedInterruptibly(int arg)
        throws InterruptedException {
    // if the thread has been interrupted, throw an exception
    if (Thread.interrupted())
        throw new InterruptedException();
    // check whether the current count is 0: if so, return immediately;
    // otherwise enter the queue and wait
    if (tryAcquireShared(arg) < 0)
        doAcquireSharedInterruptibly(arg);
}
protected int tryAcquireShared(int acquires) {
    return (getState() == 0) ? 1 : -1;
}
If tryAcquireShared returns -1, doAcquireSharedInterruptibly is entered:
private void doAcquireSharedInterruptibly(int arg)
        throws InterruptedException {
    // enqueue a node in shared mode
    final Node node = addWaiter(Node.SHARED);
    boolean failed = true;
    try {
        for (;;) {
            final Node p = node.predecessor();
            if (p == head) {
                int r = tryAcquireShared(arg);
                if (r >= 0) {
                    // if multiple threads called await and were queued,
                    // they are released one after another
                    setHeadAndPropagate(node, r);
                    p.next = null; // help GC
                    failed = false;
                    return;
                }
            }
            // shouldParkAfterFailedAcquire marks the predecessor node as
            // SIGNAL, then park suspends the current thread
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                throw new InterruptedException();
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}
So a thread that calls await blocks until all the worker threads have called countDown and the count reaches 0; at that point the blocked thread is activated via unpark, tryAcquireShared returns 1, and await returns.
Then look at the countDown method:
// delegates to sync
public void countDown() {
    sync.releaseShared(1);
}
public final boolean releaseShared(int arg) {
    if (tryReleaseShared(arg)) {
        doReleaseShared();
        return true;
    }
    return false;
}
First look at tryReleaseShared:
protected boolean tryReleaseShared(int releases) {
    // loop on CAS until the current thread successfully decrements
    // the count value (state) and publishes the new state
    for (;;) {
        int c = getState();
        if (c == 0)
            return false;
        int nextc = c - 1;
        if (compareAndSetState(c, nextc))
            return nextc == 0;
    }
}
This function returns false until the counter reaches 0, at which point it returns true.
When it returns true, doReleaseShared is invoked; its main job is to call unpark to activate the threads blocked in await, as follows:
private void doReleaseShared() {
    for (;;) {
        Node h = head;
        if (h != null && h != tail) {
            int ws = h.waitStatus;
            // if the node's status is SIGNAL, reset it via CAS,
            // then unpark the thread blocked in await
            if (ws == Node.SIGNAL) {
                if (!compareAndSetWaitStatus(h, Node.SIGNAL, 0))
                    continue;        // loop to recheck cases
                unparkSuccessor(h);
            }
            else if (ws == 0 &&
                     !compareAndSetWaitStatus(h, 0, Node.PROPAGATE))
                continue;            // loop on failed CAS
        }
        if (h == head)               // loop if head changed
            break;
    }
}