Thread Safety and lock optimization

Source: Internet
Author: User
Tags: CAS, instance method, mutex, semaphore, thread, class, volatile

    • Thread Safety and Lock Optimization
      • Thread Safety
        • Thread safety in the Java language
          • Immutable
          • Relative thread safety
          • Absolute thread safety
          • Thread-compatible
          • Thread-hostile
        • How to implement thread safety
          • Mutual exclusion synchronization
          • Non-blocking synchronization
          • No-synchronization schemes
      • Lock Optimization
        • Spin locks and adaptive spinning
        • Lock elimination
        • Lock coarsening
        • Lightweight locks
        • Biased locks

Thread Safety

Brian Goetz, the author of Java Concurrency in Practice, gives a fairly good definition of "thread safety": an object is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of those threads by the runtime environment, and with no additional synchronization or other coordination required on the part of the calling code.

Thread Safety in the Java Language

The thread safety discussed here presumes that shared data is accessed by multiple threads. If a piece of code shares no data with other threads at all, then from a thread-safety point of view it makes no difference to that code whether the program runs serially or in multiple threads.

Immutable

Immutable objects are always thread-safe in the Java language, especially since JDK 1.5, when the Java memory model was revised. As long as an immutable object is constructed correctly (without letting the this reference escape during construction), its externally visible state will never change. The safety brought by immutability is the simplest and purest kind.
In the Java language, if the shared data is of a primitive type, it is guaranteed to be immutable as long as it is declared with the final keyword. If the shared data is an object, we must ensure that the object's behavior has no effect on its own state, i.e. that its methods never modify the data it holds. There are many ways to achieve this; the simplest is to declare all of the object's state-carrying fields final.
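A minimal sketch of the pattern described above (the Point class and its methods are hypothetical, not from the original text): all state is final and set once in the constructor, so a correctly constructed instance can be shared between threads without synchronization.

```java
// Hypothetical immutable value class. All fields are final, there are no
// setters, and "modification" returns a new object instead of mutating
// this one -- the same pattern java.lang.String uses for substring()/concat().
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int getX() { return x; }
    int getY() { return y; }

    // Returns a new Point; the original is never changed.
    Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```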

Relative Thread safety

Relative thread safety is thread safety in the usual sense: it guarantees that each individual operation on the object is thread-safe, so no additional safeguards are needed for a single call; but for a sequence of consecutive calls in a particular order, additional synchronization may be required on the caller's side to guarantee correctness. Consider the following example:

package com.overridere.twelve;

import java.util.Vector;

public class VectorTest {

    private static Vector<Integer> vector = new Vector<Integer>();

    public static void main(String[] args) {
        while (true) {
            for (int i = 0; i < 10; i++) {
                vector.add(i);
            }

            Thread remove1 = new Thread(new Runnable() {
                @Override
                public void run() {
                    for (int i = 0; i < vector.size(); i++) {
                        vector.remove(i);
                    }
                }
            });

            Thread remove2 = new Thread(new Runnable() {
                @Override
                public void run() {
                    for (int i = 0; i < vector.size(); i++) {
                        vector.remove(i);
                    }
                }
            });

            remove1.start();
            remove2.start();
        }
    }
}

The results of the operation are as follows:

Exception in thread "Thread-400" java.lang.ArrayIndexOutOfBoundsException: 7
    at java.util.Vector.remove(Vector.java:831)
    at com.overridere.twelve.VectorTest$1.run(VectorTest.java:17)
    at java.lang.Thread.run(Thread.java:745)

Although the remove() and size() methods of Vector used here are both synchronized, the code is still unsafe in a multithreaded environment if the caller does no extra synchronization. The example above fails because, in the window between remove1 checking vector.size() in its loop condition and calling Vector.remove(), thread remove2 may have already removed the element, so index i is no longer valid, and an ArrayIndexOutOfBoundsException is thrown when thread remove1 reaches the Vector.remove() call. To make this code execute correctly, turn the two for loops into synchronized blocks, as follows:

Thread remove1 = new Thread(new Runnable() {
    @Override
    public void run() {
        synchronized (vector) {
            for (int i = 0; i < vector.size(); i++) {
                vector.remove(i);
            }
        }
    }
});

Thread remove2 = new Thread(new Runnable() {
    @Override
    public void run() {
        synchronized (vector) {
            for (int i = 0; i < vector.size(); i++) {
                vector.remove(i);
            }
        }
    }
});

In the Java language, most thread-safe classes are of this type: for example Vector, Hashtable, the collections wrapped by the Collections.synchronizedCollection() method, and so on.

Absolute Thread Safety

Absolute thread safety would mean that the code above runs correctly even without turning the loops into synchronized blocks: no additional caller-side synchronization is ever needed, no matter how the object is used.

Thread-Compatible

Thread compatibility means that the object itself is not thread-safe, but can be used safely in a concurrent environment as long as the caller applies synchronization correctly; this is what we usually mean when we say a class is "not thread-safe". Most classes in the Java API are thread-compatible, such as the collection classes ArrayList and HashMap, which correspond to the Vector and Hashtable mentioned earlier.

Thread-Hostile

Thread hostility refers to code that cannot be used concurrently in a multithreaded environment, regardless of whether the caller takes synchronization measures.
A thread-hostile example is the suspend() and resume() methods of the Thread class: if two threads hold a Thread object at the same time, one attempting to suspend it and the other attempting to resume it, and the calls happen concurrently, the target thread risks deadlock regardless of whether the calls are synchronized. If the thread suspended by suspend() is the very thread that is about to execute resume(), deadlock is certain. It is for this reason that suspend() and resume() are deprecated in the JDK. Common thread-hostile operations also include System.setIn(), System.setOut(), and System.runFinalizersOnExit().

How to Implement Thread Safety

Mutual Exclusion Synchronization

Mutual exclusion synchronization (Mutual Exclusion & Synchronization) is a common means of guaranteeing concurrency correctness. Synchronization means that when multiple threads access shared data concurrently, the shared data is used by only one thread at a time (or by a limited number of threads, when semaphores are used). Mutual exclusion is a means of achieving synchronization; the critical section (Critical Section), mutex (Mutex), and semaphore (Semaphore) are the main mutual-exclusion implementations. Mutual exclusion is the cause and synchronization is the effect; mutual exclusion is the method and synchronization is the goal.
synchronized
In Java, the most basic mutual-exclusion synchronization mechanism is the synchronized keyword. When compiled, synchronized produces the two bytecode instructions monitorenter and monitorexit before and after the synchronization block; both instructions take a reference-type parameter indicating the object to lock and unlock. If the synchronized in the Java program explicitly specifies an object parameter, that object's reference is used; if not, then depending on whether synchronized modifies an instance method or a static method, the corresponding object instance or Class object is taken as the lock object.
According to the Java Virtual Machine Specification, when executing the monitorenter instruction, the thread first tries to acquire the lock of the object. If the object is not locked, or if the current thread already owns the lock of that object, the lock counter is incremented by 1; correspondingly, when the monitorexit instruction is executed the lock counter is decremented by 1, and when the counter reaches 0 the lock is released. If acquiring the object lock fails, the current thread blocks and waits until the object lock is released by another thread.
synchronized is reentrant for the same thread, so a thread will not deadlock on a lock it already holds.
A synchronization block blocks the entry of other threads until the thread that entered it has finished executing.
Blocking and waking a thread requires the operating system's help, transitioning from user mode to kernel mode, and this state transition costs a lot of processor time; for this reason synchronized is a heavyweight lock in the Java language.
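The lock-object rules and reentrancy described above can be sketched as follows (the Counter class is a hypothetical illustration, not from the original text):

```java
// Hypothetical sketch: which object synchronized locks on, and reentrancy.
final class Counter {
    private int value = 0;

    // A synchronized instance method locks on "this" (the Counter instance).
    synchronized void increment() {
        value++;
    }

    // Reentrancy: addTwo() already holds the monitor on "this" when it calls
    // increment(), so the same thread acquires the same monitor again without
    // deadlocking; the lock counter goes 1 -> 2 -> 1 -> ... -> 0.
    synchronized void addTwo() {
        increment();
        increment();
    }

    synchronized int get() { return value; }

    // A static synchronized method locks on Counter.class instead, so it
    // does not mutually exclude the instance methods above.
    static synchronized void staticWork() { }
}
```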

ReentrantLock
In addition to synchronized, we can also use the reentrant lock (ReentrantLock) in the java.util.concurrent package for synchronization. The two are similar in basic usage, but differ in how the code is written: one is an API-level mutex (the lock() and unlock() methods used in conjunction with a try/finally statement), the other expresses the mutex at the native syntax level. Compared with synchronized, however, ReentrantLock adds some advanced features, mainly the following three: the wait can be interrupted, a fair lock can be implemented, and a lock can be bound to multiple conditions.

    • An interruptible wait means that when the thread holding the lock does not release it for a long time, a waiting thread can choose to give up waiting and handle other things instead.
    • A fair lock means that when multiple threads are waiting for the same lock, they must acquire it in the order in which they requested it, whereas an unfair lock gives no such guarantee: when the lock is released, any waiting thread may get the chance to acquire it. The lock in synchronized is unfair; ReentrantLock is also unfair by default, but a fair lock can be requested through the constructor that takes a boolean.
    • Binding multiple conditions means that one ReentrantLock object can be bound to several Condition objects at the same time.
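The API-level style can be sketched as follows (the LockedCounter class is a hypothetical illustration): lock() must be paired with unlock() in a finally block, unlike synchronized, where the JVM releases the monitor automatically; the boolean constructor argument requests a fair lock.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of ReentrantLock's lock()/unlock() + try/finally idiom.
final class LockedCounter {
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock
    private int value = 0;

    void increment() {
        lock.lock();       // blocks until the lock is acquired
        try {
            value++;
        } finally {
            lock.unlock(); // always released, even on exception
        }
    }

    int get() {
        lock.lock();
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```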
Non-blocking Synchronization

The main problem with mutual-exclusion synchronization is the performance cost of thread blocking and wake-up, so this kind of synchronization is also called blocking synchronization (Blocking Synchronization). In its approach to the problem, mutual-exclusion synchronization is a pessimistic concurrency strategy: it always assumes that unless correct synchronization is performed, problems will occur. With the development of hardware instruction sets, we have another option: an optimistic concurrency strategy based on conflict detection. In plain terms, perform the operation first; if no other thread contends for the shared data, the operation succeeds; if there is contention and a conflict arises, take some other compensating measure (the most common being to retry continually until success). Many implementations of this optimistic concurrency strategy do not need to suspend threads, so this kind of synchronization is called non-blocking synchronization (Non-blocking Synchronization).
Using an optimistic strategy requires "the development of hardware instruction sets" because we need the two steps of performing the operation and detecting conflicts to be atomic. If mutual-exclusion synchronization were used to guarantee that, the whole point would be lost, so we can only rely on the hardware: it guarantees that a behavior that semantically requires multiple operations can be completed with a single processor instruction. Commonly used instructions of this kind include:

    • Test-and-Set
    • Fetch-and-Increment
    • Swap
    • Compare-and-Swap (abbreviated CAS)
    • Load-Linked/Store-Conditional (abbreviated LL/SC)

The CAS instruction requires three operands: the memory location (which in Java can be simply understood as the memory address of a variable, denoted V), the old expected value (denoted A), and the new value (denoted B). When the CAS instruction executes, the processor updates the value at V with the new value B if and only if V holds the old expected value A; otherwise it performs no update. In either case, the old value of V is returned, and the whole process is one atomic operation.
Let us revisit an earlier unresolved problem and see how CAS can be used to avoid blocking synchronization, with the following code:

package com.overridere.twelve;

import java.util.concurrent.atomic.AtomicInteger;

/**
 * Atomic variable self-increment test
 */
public class VolatileTest {

    public static AtomicInteger race = new AtomicInteger(0);

    public static void increase() {
        race.incrementAndGet();
    }

    private static final int THREADS_COUNT = 20;

    public static void main(String[] args) {
        Thread[] threads = new Thread[THREADS_COUNT];
        for (int i = 0; i < THREADS_COUNT; i++) {
            threads[i] = new Thread(new Runnable() {
                @Override
                public void run() {
                    for (int i = 0; i < 10000; i++) {
                        increase();
                    }
                }
            });
            threads[i].start();
        }

        // wait for all accumulator threads to finish
        while (Thread.activeCount() > 1)
            Thread.yield();

        System.out.println(race);
    }
}

The result of running it: 200000
Step into the source code of the incrementAndGet() method:

/**
 * Atomically increments by one the current value.
 *
 * @return the updated value
 */
public final int incrementAndGet() {
    return unsafe.getAndAddInt(this, valueOffset, 1) + 1;
}

Continue into the source of getAndAddInt():

/**
 * Atomically adds the given value to the current value of a field
 * or array element within the given object <code>o</code>
 * at the given <code>offset</code>.
 *
 * @param o      object/array to update the field/element in
 * @param offset field/element offset
 * @param delta  the value to add
 * @return the previous value
 * @since 1.8
 */
public final int getAndAddInt(Object o, long offset, int delta) {
    int v;
    do {
        v = getIntVolatile(o, offset);
    } while (!compareAndSwapInt(o, offset, v, v + delta));
    return v;
}

Both getIntVolatile and compareAndSwapInt here are native methods.
As the code above shows, the getAndAddInt method uses a loop: if the CAS operation fails, it rereads the value and retries until the CAS operation succeeds.
Although CAS looks good, it has a logical flaw: if a variable V initially holds value A, and still holds value A when we are ready to assign to it, can we conclude that its value was never changed by another thread? If during that time its value was changed to B and then back to A, the CAS operation will mistakenly conclude it was never changed. This loophole is called the "ABA" problem of CAS operations. To solve it, the java.util.concurrent package provides a stamped atomic reference class, AtomicStampedReference, which guarantees the correctness of CAS by versioning the variable. At present, however, this class is somewhat neglected: most of the time the ABA problem does not affect the correctness of concurrent programs, and if you do need to solve the ABA problem, switching to traditional mutual-exclusion synchronization may be more efficient than using the atomic class.
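The stamped solution can be sketched as follows (the AbaDemo class and its scenario are a hypothetical illustration): every update also advances an int "stamp", so an A → B → A history is distinguishable from an untouched A. String literals are used because AtomicStampedReference compares references, and interned literals are reference-equal.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// Hypothetical demo of how AtomicStampedReference defeats the ABA problem.
final class AbaDemo {
    // Returns whether the final stamped CAS succeeds (it should not).
    static boolean demo() {
        AtomicStampedReference<String> ref =
                new AtomicStampedReference<>("A", 0); // value "A", stamp 0

        int[] stampHolder = new int[1];
        String seen = ref.get(stampHolder);           // read value + stamp
        int seenStamp = stampHolder[0];               // 0

        // Simulate another thread changing A -> B -> A, bumping the stamp.
        ref.compareAndSet("A", "B", 0, 1);
        ref.compareAndSet("B", "A", 1, 2);

        // The plain value still matches ("A"), but the stamped CAS fails
        // because the stamp moved from 0 to 2 -- the ABA change is detected.
        return ref.compareAndSet(seen, "C", seenStamp, seenStamp + 1);
    }
}
```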

No-Synchronization Schemes

Synchronization is only a means of guaranteeing correctness when shared data is contended. If a method does not involve shared data, it naturally needs no synchronization to guarantee correctness, so some code is inherently thread-safe.
Reentrant code (Reentrant Code):
This kind of code, also known as pure code, can be interrupted at any point in its execution to run another piece of code, and the original program will show no errors after control returns. All reentrant code is thread-safe, but not all thread-safe code is reentrant.
Reentrant code has these characteristics: it does not rely on data stored on the heap or on common system resources, all state it uses is passed in as parameters, and it does not call non-reentrant methods.
Thread-local storage (Thread Local Storage): if the data needed by a piece of code must be shared with other code, check whether the code sharing the data is guaranteed to execute in the same thread. If so, we can limit the visible scope of the shared data to a single thread, so that no synchronization is needed to guarantee there is no data contention between threads.

Lock Optimization

Spin Locks and Adaptive Spinning

When discussing mutual-exclusion synchronization we noted that its biggest performance impact comes from the implementation of blocking: suspending and resuming a thread must be done in kernel mode, and these operations put great pressure on the system's concurrency performance. At the same time, the locked state of shared data often lasts only a very short period, and it is not worthwhile to suspend and resume threads for such a short time. If the physical machine has more than one processor, allowing two or more threads to execute in parallel, we can let the thread requesting the lock "wait a moment" without giving up its processor time, and see whether the thread holding the lock releases it soon. To make the thread wait, we simply have it execute a busy loop (spin); this technique is called a spin lock.
Spin waiting avoids the overhead of thread switching, but it consumes processor time, so if the lock is held for a short time the spin wait works very well; conversely, if the lock is held for a long time, the spinning thread wastes processor resources for nothing. The spin wait must therefore be bounded: if the spin exceeds the limit without successfully acquiring the lock, the thread should be suspended in the traditional way. The spin count is 10 by default and can be changed with the parameter -XX:PreBlockSpin.
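The busy-loop idea can be illustrated with a user-level sketch (the JVM's own spinning happens inside the monitor implementation; the SpinLock class below is a hypothetical illustration of the technique only): instead of parking in the kernel, the thread loops on a CAS until the flag flips.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical user-level spin lock built on a single CAS flag.
final class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Busy-wait: keep trying to flip the flag from false to true.
        while (!locked.compareAndSet(false, true)) {
            // spin; Thread.onSpinWait() (JDK 9+) could hint the CPU here
        }
    }

    void unlock() {
        locked.set(false);
    }

    // Demo: two threads increment a shared counter under the spin lock.
    static int demo() {
        final SpinLock lock = new SpinLock();
        final int[] counter = new int[1];
        Runnable task = () -> {
            for (int i = 0; i < 100000; i++) {
                lock.lock();
                try {
                    counter[0]++;
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter[0];
    }
}
```

As the surrounding text warns, this only pays off when the lock is held briefly; a real implementation would fall back to suspending the thread after a bounded number of spins.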

Adaptive spinning was introduced in JDK 1.6. Adaptive means the spin time is no longer fixed but is determined by the previous spin time on the same lock and the state of the lock's owner. If, on the same lock object, a spin wait has recently succeeded in acquiring the lock, and the thread holding the lock is running, the virtual machine will assume that this spin is also likely to succeed and allow the spin wait to last relatively longer. Conversely, if spinning rarely succeeds for a given lock, the spin process may be omitted entirely when acquiring that lock in the future.

Lock Elimination

Lock elimination means that the virtual machine's just-in-time compiler, at run time, removes locks on code that requires synchronization but is detected to have no possibility of contention on shared data. Of course, much of this synchronization is not consciously written by the programmer, who may not realize that code with no shared-data contention can still carry synchronization. Consider the following code:

public String concatString(String s1, String s2, String s3) {
    return s1 + s2 + s3;
}

Before JDK 1.5, concatenation of String values was translated into consecutive append() operations on a StringBuffer object (from JDK 1.5 onward it becomes StringBuilder), so before JDK 1.5 the code above would be translated into the following:

public String concatString(String s1, String s2, String s3) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1);
    sb.append(s2);
    sb.append(s3);
    return sb.toString();
}

Each StringBuffer.append() method contains a synchronization block, and the lock is the sb object. The virtual machine, analyzing the object sb, will find that its dynamic scope is limited to the inside of the concatString() method: no reference to sb ever "escapes" outside concatString(), so no other thread can access it. Therefore, although there is a lock here, it can be safely eliminated.

Lock Coarsening

The consecutive append() calls above each lock and unlock the sb object once; it would be better to put all the consecutive append() calls into a single synchronization block, and that is lock coarsening.
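Conceptually, the transformation looks like the sketch below (the JIT performs this itself; the CoarseningDemo class is only a hypothetical before/after illustration written out by hand):

```java
// Hypothetical illustration of what lock coarsening does conceptually.
final class CoarseningDemo {
    // Before: each append() acquires and releases sb's monitor separately.
    static String fineGrained(String s1, String s2, String s3) {
        StringBuffer sb = new StringBuffer();
        sb.append(s1);   // lock, append, unlock
        sb.append(s2);   // lock, append, unlock
        sb.append(s3);   // lock, append, unlock
        return sb.toString();
    }

    // After coarsening: one lock spans all three appends. The inner
    // synchronized append() calls are reentrant acquisitions of the
    // monitor the thread already holds.
    static String coarsened(String s1, String s2, String s3) {
        StringBuffer sb = new StringBuffer();
        synchronized (sb) {
            sb.append(s1);
            sb.append(s2);
            sb.append(s3);
        }
        return sb.toString();
    }
}
```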

Lightweight Lock

To understand lightweight locks we must start from the memory layout of objects in the HotSpot virtual machine, specifically the object header. The object header of the HotSpot virtual machine is divided into two parts: the first stores the object's own runtime data, such as the hash code and GC generational age; it is called the "Mark Word" and is the key to implementing lightweight and biased locks. The other part stores a pointer to the object's type data in the method area, and if the object is an array, an additional section stores the array's length.
In a 32-bit HotSpot virtual machine, when the object is unlocked, of the Mark Word's 32 bits, 25 bits store the object's hash code, 4 bits store the object's generational age, 2 bits store the lock flag bits, and 1 bit is fixed at 0. In other states the contents stored in the object header are as follows:

Stored content                                        | Flag bits | State
Object hash code, object generational age             | 01        | Unlocked
Pointer to lock record                                | 00        | Lightweight lock
Pointer to heavyweight lock                           | 10        | Inflated (heavyweight lock)
Empty, no information needs to be recorded            | 11        | GC mark
Biased thread ID, biased timestamp, generational age  | 01        | Biasable

When code enters a synchronization block, if the synchronization object is unlocked, the virtual machine first creates a space called a Lock Record in the current thread's stack frame to store a copy of the object's current Mark Word (the Displaced Mark Word). The virtual machine then uses a CAS operation to try to update the object's Mark Word to a pointer to the lock record; if this succeeds, the thread owns the object's lock and the lock flag bits become "00", i.e. the lightweight-locked state.

If this update fails, the virtual machine first checks whether the object's Mark Word points to the current thread's stack frame. If so, the current thread already owns the lock on this object and can enter the synchronization block directly; otherwise the lock object has been preempted by another thread. Once two or more threads contend for the same lock, the lightweight lock is no longer effective and must inflate into a heavyweight lock: the lock flag bits change to "10", the Mark Word stores a pointer to the heavyweight lock (mutex), and the threads waiting for the lock enter the blocked state.

The unlocking process is also done through CAS: if the object's Mark Word still points to the thread's lock record, a CAS operation swaps the object's current Mark Word with the Displaced Mark Word copied into the thread. If the swap succeeds, the whole synchronization process is complete; if it fails, another thread has attempted to acquire the lock, and the suspended threads must be woken up as the lock is released.

Lightweight locking is based on the empirical observation that "for the great majority of locks, there is no contention during the entire synchronization period." If there is no contention, the cost of the mutex is avoided; but if there is contention, the cost of the CAS operations is incurred on top of the mutex overhead.

Biased Lock

If a lightweight lock uses CAS operations to eliminate the mutex used by synchronization in the uncontended case, a biased lock eliminates the entire synchronization in the uncontended case, not even performing the CAS operations.

A biased lock means the lock is biased toward the first thread that acquires it; if the lock is never acquired by another thread during subsequent execution, the thread holding the biased lock never needs to synchronize again.

Assuming biased locking is enabled in the current virtual machine, when the lock object is first acquired by a thread, the virtual machine sets the flag bits in the object header to "01", i.e. biased mode. At the same time it uses a CAS operation to record the ID of the acquiring thread in the object's Mark Word. If the CAS operation succeeds, then every time the thread holding the biased lock subsequently enters a synchronization block related to that lock, the virtual machine performs no synchronization operations at all. Biased mode ends as soon as another thread attempts to acquire the lock.

Depending on whether the lock object is currently locked, revoking the bias (Revoke Bias) reverts it to the unlocked state (flag bits "01") or to a lightweight lock (flag bits "00"), and subsequent synchronization proceeds as described for lightweight locks above. The state transitions are as follows:

[Figure: state transitions between biasable, biased, lightweight, and heavyweight lock states]
