Thread Safety
When multiple threads access an object, the object is thread-safe if it behaves correctly without requiring any additional synchronization or other coordination on the caller's side, regardless of how the runtime environment schedules or interleaves the execution of those threads.
Thread safety in the Java language
Thread safety is discussed in terms of data shared between multiple threads. If no data is shared with other threads, then from the point of view of thread safety it makes no difference whether the program executes serially or with multiple threads.
The shared data that various operations work on can be grouped, in order of decreasing "strength" of thread safety, into five categories: immutable, absolutely thread-safe, relatively thread-safe, thread-compatible, and thread-hostile.
Immutable
In the Java language, immutable objects are always thread-safe; neither the object's method implementations nor its callers need to take any thread-safety measures. For example, with the final keyword: as long as an immutable object is constructed correctly (the this reference does not escape, i.e. the object is not published before construction completes), its externally visible state never changes and can never be observed in an inconsistent state by multiple threads.
If the shared data is of a primitive type, declaring it final is enough to make it immutable. For an object, the object's behavior must have no effect on its state (the simplest approach is to declare every field that holds state final, so the object is immutable once its constructor finishes). For example, java.lang.String is a typical immutable object: calling substring(), replace(), concat(), and so on does not change its original value but returns a newly constructed string. Other examples include enum types and some subclasses of java.lang.Number, such as the numeric wrapper classes Double and Long.
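As a minimal sketch of the final-field approach described above (the class and its fields are hypothetical, not taken from the original text):

public final class ImmutablePoint {
    // All state is final and assigned exactly once in the constructor;
    // there are no setters and this does not escape during construction.
    private final int x;
    private final int y;

    public ImmutablePoint(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // "Modification" returns a new object instead of changing this one,
    // just like String.concat() or String.replace().
    public ImmutablePoint translate(int dx, int dy) {
        return new ImmutablePoint(x + dx, y + dy);
    }
}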
Absolute thread safety
Absolutely thread-safe means that, regardless of the runtime environment, callers never need any additional synchronization measures.
For example, java.util.Vector is a thread-safe container: its get(), add(), size(), and other methods are all synchronized, which is inefficient but safe. That does not mean, however, that callers never need additional synchronization: if one thread removes elements at the wrong moment while another thread accesses them by index, an ArrayIndexOutOfBoundsException can still occur, and both the removal and the access then have to be placed in synchronized blocks.
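A minimal sketch of that situation (illustrative only): every individual Vector method below is synchronized, yet the size check and the get() are two separate operations, so the program can still fail without caller-side synchronization.

import java.util.Vector;

public class VectorRace {
    private static final Vector<Integer> vector = new Vector<>();

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            vector.add(i);
        }
        // One thread removes elements while another reads them by index.
        Thread removeThread = new Thread(() -> {
            for (int i = 0; i < vector.size(); i++) {
                vector.remove(i);               // each call is synchronized on its own
            }
        });
        Thread printThread = new Thread(() -> {
            for (int i = 0; i < vector.size(); i++) {
                System.out.println(vector.get(i)); // may throw if the element was just removed
            }
        });
        removeThread.start();
        printThread.start();
        // Caller-side fix: wrap each loop body in synchronized (vector) { ... }
    }
}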
Relative thread safety
Relative thread safety is what we usually mean by thread safety: individual operations on the object are guaranteed to be thread-safe and need no extra safeguards from the caller, but a particular sequence of consecutive calls may still require additional synchronization on the caller's side to be correct.
Most thread-safe classes in Java fall into this category, such as Vector, Hashtable, and the collections wrapped by Collections.synchronizedCollection().
Thread-compatible
Thread-compatible means that the object itself is not thread-safe, but it can be used safely in a concurrent environment if the caller applies synchronization correctly. When a class is described as "not thread-safe", this is usually the category meant, for example ArrayList and HashMap.
Thread-hostile
Thread-hostile code cannot be used concurrently in a multithreaded environment, no matter what synchronization measures the callers take. Since the Java language is inherently multithreaded, code that is hostile to multithreading rarely appears; it is usually harmful and should be avoided.
For example, Thread's suspend() and resume() methods: if two threads hold the same Thread object at the same time, one tries to suspend it while the other tries to resume it, and the calls happen concurrently, the target thread is at risk of deadlock whether or not the calls are synchronized. If the thread suspended by suspend() is the very thread that would execute resume(), deadlock is certain. The JDK has therefore deprecated suspend() and resume(). Other examples include System.setIn(), System.setOut(), and System.runFinalizersOnExit().
Thread-safe implementation methods
Mutex synchronization
Mutex synchronization is a common means of guaranteeing correctness under concurrency. Synchronization means that when multiple threads access shared data concurrently, the data is used by only one thread at a time. Mutual exclusion is a means of achieving synchronization; critical sections, mutexes, and semaphores are the main ways of implementing it. Mutual exclusion is the cause and synchronization the result; mutual exclusion is the method and synchronization the goal.
In Java, the most basic means of mutex synchronization is the synchronized keyword. After compilation, a synchronized block is bracketed by the monitorenter and monitorexit bytecode instructions, both of which require a reference-type operand identifying the object to be locked and unlocked. If the synchronized statement explicitly specifies an object, that object's reference is used; otherwise, depending on whether synchronized modifies an instance method or a class (static) method, the corresponding object instance or Class object is used as the lock. When monitorenter executes, the thread tries to acquire the lock on that object: if the object is not locked, or the current thread already holds the lock, the lock counter is incremented by 1; monitorexit decrements it by 1, and when the counter reaches 0 the lock is released.
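A brief sketch of the three forms and the lock object each one uses (class and method names are hypothetical):

public class SyncForms {
    private static final Object LOCK = new Object();
    private int count;

    // Instance method: the lock is the object instance (this).
    public synchronized void incInstance() { count++; }

    // Static method: the lock is the Class object, SyncForms.class.
    public static synchronized void incStatic() { /* ... */ }

    // Synchronized block: the lock is the explicitly given reference;
    // monitorenter/monitorexit are emitted around this block.
    public void incBlock() {
        synchronized (LOCK) {
            count++;
        }
    }
}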
A synchronized block is reentrant for the thread that holds the lock, so a thread cannot lock itself out. A synchronized block also blocks other threads from entering until the thread inside it has finished executing. Because Java threads are mapped onto the operating system's native threads, blocking or waking a thread requires help from the operating system, which means switching from user mode to kernel mode; this transition can consume more processor time than the user code itself, so synchronized is a heavyweight operation.
Besides synchronized, the ReentrantLock in the java.util.concurrent package can be used to achieve synchronization. ReentrantLock is a mutex expressed at the API level (the lock() and unlock() methods used together with a try/finally block), whereas synchronized is a mutex expressed at the level of the language syntax.
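A minimal usage sketch of the lock()/unlock() pairing with try/finally (the counter class is illustrative):

import java.util.concurrent.locks.ReentrantLock;

public class LockCounter {
    private final ReentrantLock lock = new ReentrantLock(); // pass true for a fair lock
    private int value;

    public void increment() {
        lock.lock();            // acquire the lock before the critical section
        try {
            value++;
        } finally {
            lock.unlock();      // always release in finally, even if the body throws
        }
    }
}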
Compared with synchronized, ReentrantLock adds some advanced features: waiting that can be interrupted, fair locks, and the ability to bind a lock to multiple conditions.
Interruptible waiting: if the thread holding the lock does not release it for a long time, a waiting thread can choose to give up waiting and handle something else instead. Interruptibility is useful when dealing with synchronized blocks that take very long to execute.
Fair lock: when multiple threads are waiting for the same lock, they must acquire it in the order in which they requested it. A non-fair lock makes no such guarantee: when the lock is released, any waiting thread has a chance to acquire it. synchronized is non-fair; ReentrantLock is also non-fair by default, but a fair lock can be requested through the constructor that takes a boolean.
Binding multiple conditions: a ReentrantLock object can be bound to several Condition objects at the same time. In synchronized, the lock object's wait() combined with notify() or notifyAll() implements a single implicit condition; associating more than one condition would require an additional lock, whereas with ReentrantLock it is enough to call newCondition() more than once, as in the sketch below.
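A sketch of two conditions bound to one lock (a simplified bounded buffer; the class and field names are illustrative):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final Object[] items = new Object[16];
    private int putIndex, takeIndex, count;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // waiters for "space available"
    private final Condition notEmpty = lock.newCondition();  // waiters for "item available"

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length) {
                notFull.await();             // wait only on the "not full" condition
            }
            items[putIndex] = item;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal();               // wake a taker, not another putter
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) {
                notEmpty.await();
            }
            T item = (T) items[takeIndex];
            items[takeIndex] = null;
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal();
            return item;
        } finally {
            lock.unlock();
        }
    }
}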
In JDK 1.5, the throughput of synchronized drops sharply in a contended multithreaded environment, while ReentrantLock remains basically stable at the same level. In JDK 1.6 the performance of synchronized and ReentrantLock is roughly equal, and the virtual machine's optimizations favor the native synchronized, so synchronized should still be preferred where it suffices.
Non-blocking synchronization
The main problem with mutex synchronization is the performance cost of blocking and waking threads, so it is also called blocking synchronization. It is a pessimistic concurrency strategy: it assumes that without correct synchronization measures a problem is bound to occur, so a lock must be taken whether or not there is actual contention, which brings user-to-kernel mode transitions, maintenance of lock counters, checks for blocked threads that need to be woken, and so on.
Non-blocking synchronization is an optimistic concurrency strategy based on conflict detection: perform the operation first; if no other thread contends for the shared data, the operation succeeds; if the shared data is contended and a conflict arises, take a compensating measure (most commonly, retry until successful). Many implementations of this optimistic strategy do not need to suspend threads at all.
Optimistic concurrency strategies require support from the hardware instruction set, because the operation and the conflict detection must be atomic as a whole, which only hardware can guarantee: the hardware completes, with a single processor instruction, a behavior that semantically appears to require several operations. Such instructions include:
-> Test-and-set
-> Fetch-and-increment
-> Swap
-> Compare-and-swap (CAS)
-> Load-linked/store-conditional (LL/SC)
CAS takes three operands: the memory location V, the old expected value A, and the new value B. When CAS executes, the processor updates V with B if and only if the value at V matches A; otherwise no update is performed. In either case the old value of V is returned, and the whole process is an atomic operation. CAS operations only became available to Java programs after JDK 1.5. However, CAS does not cover all the scenarios of mutex synchronization, and it has a semantic flaw: if V held A when it was first read and still holds A when the update is applied, can we conclude that no other thread changed it in the meantime? If it was changed to B and then changed back to A, CAS will mistakenly believe it was never changed. This flaw is known as the ABA problem of CAS operations.
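A minimal sketch of the "retry until successful" compensating measure, using AtomicInteger.compareAndSet() (the class name is illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    public int increment() {
        int oldValue;
        int newValue;
        do {
            oldValue = value.get();      // read the expected value A
            newValue = oldValue + 1;     // compute the new value B
        } while (!value.compareAndSet(oldValue, newValue)); // CAS on location V; retry on conflict
        return newValue;
    }
}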
The j.u.c package provides a stamped atomic reference class, AtomicStampedReference, which can guarantee the correctness of CAS by attaching a version (stamp) to the variable's value. In most cases, however, the ABA problem does not affect the correctness of concurrent programs, and if it really has to be solved, traditional mutex synchronization may be more efficient than the atomic classes.
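A sketch of how the version stamp detects an A -> B -> A change (values are illustrative):

import java.util.concurrent.atomic.AtomicStampedReference;

public class StampedExample {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        int[] stampHolder = new int[1];
        Integer current = ref.get(stampHolder);  // read value and stamp together
        int stamp = stampHolder[0];

        // Succeeds only if both the value and the stamp are unchanged;
        // an A -> B -> A change elsewhere would have bumped the stamp.
        boolean updated = ref.compareAndSet(current, 101, stamp, stamp + 1);
        System.out.println("updated = " + updated);
    }
}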
No-synchronization schemes
Thread safety does not necessarily require synchronization; there is no causal relation between the two. Synchronization is only a means of guaranteeing the correctness of contended shared data. If a piece of code involves no shared data, it needs no synchronization and is inherently safe. Two such cases follow: reentrant code and thread-local storage.
Reentrant code
Also called pure code, reentrant code can be interrupted at any point in its execution so that another piece of code runs (including a recursive call to itself), and after control returns the original program contains no errors. Reentrancy is a more fundamental property than thread safety: all reentrant code is thread-safe, but not all thread-safe code is reentrant.
Characteristics of reentrant code: it does not rely on data stored on the heap or on shared system resources, all state it uses is passed in through parameters, and it does not call non-reentrant methods. If a method's result is predictable, that is, it returns the same result whenever it is given the same input, then it satisfies the requirements of reentrancy and is therefore thread-safe.
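A minimal sketch of reentrant (pure) code: the result depends only on the parameters and local variables, and no heap state or shared system resource is touched (the method itself is just an illustrative example).

public class PureMath {
    // Reentrant: uses only its parameters and locals and calls nothing non-reentrant,
    // so the same inputs always produce the same result regardless of interleaving.
    public static int gcd(int a, int b) {
        while (b != 0) {
            int t = a % b;
            a = b;
            b = t;
        }
        return a;
    }
}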
Thread-local storage
If the data needed by a piece of code must be shared with other code, check whether that other code is guaranteed to execute on the same thread. If it is, the visibility of the shared data can be confined to a single thread, so no synchronization is needed to prevent data contention between threads. Architectures built around a consumption queue (the producer-consumer pattern) try to consume each product within one thread whenever possible. One of the most important applications is the "one request per server thread" approach of the classic web interaction model, which allows many web server applications to use thread-local storage to solve thread-safety problems.
The java.lang.ThreadLocal class can be used to implement thread-local storage.
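A minimal usage sketch (the per-thread date formatter is a hypothetical example, not from the original text):

import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatHolder {
    // Each thread gets its own SimpleDateFormat, so the (non-thread-safe) formatter
    // is never shared between threads and needs no synchronization.
    private static final ThreadLocal<SimpleDateFormat> FORMAT = new ThreadLocal<SimpleDateFormat>() {
        @Override
        protected SimpleDateFormat initialValue() {
            return new SimpleDateFormat("yyyy-MM-dd");
        }
    };

    public static String format(Date date) {
        return FORMAT.get().format(date);
    }
}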
Each Thread object holds a ThreadLocalMap object. This map stores a set of key-value pairs with ThreadLocal.threadLocalHashCode as the key and the thread-local variable as the value. The ThreadLocal object is the entry point to the current thread's ThreadLocalMap: each ThreadLocal object contains a unique threadLocalHashCode value, which is used to find the corresponding thread-local variable among the thread's key-value pairs.
Lock optimization
Efficient concurrency is an important improvement from JDK 1.5 to JDK 1.6. The various lock optimization techniques all aim to share data between threads more efficiently and to resolve contention, thereby improving program execution efficiency.
Spin locks and adaptive spinning
The biggest performance impact of mutex synchronization comes from its implementation of blocking: suspending and resuming threads must be done in kernel mode, which is often not worth the cost.
If the physical machine has more than one processor, so that two or more threads can execute in parallel, we can tell the thread requesting the lock to "wait a moment" without giving up its processor time, to see whether the thread holding the lock releases it soon. To make the thread wait, we simply have it execute a busy loop (spin); this technique is the spin lock. Spin locks were introduced in JDK 1.4.2 but were off by default; since JDK 1.6 they are on by default. Spinning cannot replace blocking: it works very well when the wait is short, but a long spin wastes processor resources without doing any useful work and hurts performance. The spin must therefore be bounded: if the lock has still not been acquired after a certain number of iterations, the thread should be suspended in the traditional way. The default spin count is 10 and can be changed with the -XX:PreBlockSpin parameter.
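The virtual machine's spinning happens inside the JVM itself, but the idea can be sketched at the Java level with a busy wait on an atomic flag (a conceptual illustration only, not how HotSpot implements it):

import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy loop ("spin"): keep the processor instead of blocking,
        // betting that the holder will release the lock very soon.
        while (!locked.compareAndSet(false, true)) {
            // spin; a realistic version would bound the spin count and then block
        }
    }

    public void unlock() {
        locked.set(false);
    }
}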
JDK 1.6 introduced adaptive spinning, which means the spin time is no longer fixed but is determined by the previous spins on the same lock and the state of the lock's owner. On the same lock object, if spinning has just successfully acquired the lock and the thread holding it is running, the virtual machine assumes that spinning is likely to succeed again and allows the spin to last relatively longer. If spinning rarely succeeds in acquiring a particular lock, later attempts to acquire it may skip the spin entirely to avoid wasting processor resources.
Lock elimination
Lock elimination means that when the virtual machine's just-in-time compiler runs, it removes locks that the code requires on certain operations but that cannot possibly be contended for shared data. The main judgment for lock elimination comes from the data supplied by escape analysis.
If none of the heap data used by a piece of code can escape and be accessed by other threads, it can be treated as if it were data on the stack, i.e. thread-private, and then no synchronization lock is needed.
For example, string concatenation. A piece of code with no apparent synchronization:

public String concatString(String s1, String s2, String s3) {
    return s1 + s2 + s3;
}
Before JDK 1.5, javac automatically translated this string concatenation into consecutive append() operations on a StringBuffer object (each append() contains a synchronized block whose lock is the sb object, with a dynamic scope confined to the concatString() method). Since sb never escapes the method and can never be accessed by other threads, the just-in-time compiler eliminates all of the synchronization and executes the code directly. From JDK 1.5 onward, the concatenation is translated into consecutive append() operations on a StringBuilder object instead.
public String concatString(String s1, String s2, String s3) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1); // each append() contains a synchronized block; the lock is the sb object
    sb.append(s2);
    sb.append(s3);
    return sb.toString();
}
Lock coarsening
If a series of consecutive operations repeatedly locks and unlocks the same object, and especially if the locking occurs inside a loop body, then even without any thread contention the frequent mutex synchronization causes unnecessary performance loss, as with the consecutive append() operations above.
Lock coarsening: if the virtual machine detects that a string of fragmented operations all lock the same object, it extends (coarsens) the scope of the synchronization to cover the whole sequence of operations, so the lock only needs to be taken once.
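For example (a sketch of the pattern only, since the coarsening itself is performed by the just-in-time compiler), repeated append() calls like the following lock the same StringBuffer object over and over, and the virtual machine may extend the lock to cover the whole loop so it is acquired only once:

public static String repeat(String piece, int times) {
    StringBuffer sb = new StringBuffer();
    for (int i = 0; i < times; i++) {
        sb.append(piece); // each append() locks and unlocks sb;
                          // lock coarsening can hoist this to one lock around the loop
    }
    return sb.toString();
}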