ThreadLocal, volatile, synchronized, Atomic: keyword literacy for concurrent programming


Preface

When ThreadLocal, volatile, synchronized, and the Atomic classes come up, most people immediately think of sharing resources in a multi-threaded environment. But when asked to explain the characteristics, differences, application scenarios, and internal implementation of each one, the answers tend to be vague. This article explains the role, characteristics, and implementation of each of these keywords.

1. The role of the Atomic classes

For atomic operations, the java.util.concurrent.atomic package provides several commonly used classes: AtomicInteger, AtomicLong, AtomicBoolean, and AtomicReference<T>.
Their biggest feature is that, when multiple threads operate on the same resource concurrently, they use a lock-free algorithm instead of a lock, so the overhead is small and the speed is high. The atomic classes are built on atomic operation instructions, which guarantee the atomicity of each operation. What is atomicity? Take i++: it is really three steps, reading the value of i, modifying it (+1), and writing it back to i, and another thread can interleave between those steps. With an atomic class, the whole read-modify-write completes as one indivisible operation before any other thread can act on the value; this is achieved by combining lock-free retries with atomic operation instructions.
For example, in the AtomicInteger class:

    public final int incrementAndGet() {
        for (;;) {
            int current = get();
            int next = current + 1;
            if (compareAndSet(current, next))
                return next;
        }
    }

As for the lock-free algorithm, it is a strategy that replaces locking to keep a resource intact under concurrency. A lock-free implementation has three steps (a small sketch that applies them by hand follows the list):

1. Loop (for (;;) or while)
2. CAS (compareAndSet)
3. Exit (return, break)
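
The sketch below applies these three steps by hand for an operation the API does not provide directly. It is a minimal illustration of my own, not from the original article; the CappedCounter class, the cappedIncrement method, and the MAX constant are invented names.

    import java.util.concurrent.atomic.AtomicInteger;

    public class CappedCounter {
        private static final int MAX = 100;          // illustrative cap, not from the article
        private final AtomicInteger count = new AtomicInteger();

        // Applies the three lock-free steps: loop, CAS, exit.
        public int cappedIncrement() {
            for (;;) {                                // 1. loop
                int current = count.get();
                if (current >= MAX) {
                    return current;                   // 3. exit: nothing to do
                }
                int next = current + 1;
                if (count.compareAndSet(current, next)) {
                    return next;                      // 3. exit: CAS succeeded
                }
                // 2. CAS failed because another thread won the race; loop and retry
            }
        }
    }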

Usage

For example, when multiple threads manipulate a shared count variable, you can define count as an AtomicInteger, as follows:

    public class Counter {
        private final AtomicInteger count = new AtomicInteger();

        public int getCount() {
            return count.get();
        }

        public void increment() {
            count.incrementAndGet();
        }
    }

Each thread increments the counter by calling increment() (or performs some other operation on it), and every thread sees a safe, consistent value of count. A small driver that exercises this class is sketched below.
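
The following driver is my own minimal sketch (not part of the original article) that runs the Counter above from several threads; because AtomicInteger makes each increment atomic, the final total is always exactly 10 * 1000.

    public class CounterDemo {
        public static void main(String[] args) throws InterruptedException {
            Counter counter = new Counter();
            Thread[] threads = new Thread[10];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    for (int j = 0; j < 1000; j++) {
                        counter.increment();
                    }
                });
                threads[i].start();
            }
            for (Thread t : threads) {
                t.join();                              // wait for all workers to finish
            }
            System.out.println(counter.getCount());    // always prints 10000
        }
    }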

Internal Implementation

Internally, the atomic classes replace "lock + critical section" with a lock-free retry loop plus atomic operation instructions (compare-and-swap, supported by the CPU), which keeps a shared resource safe, complete, and consistent under concurrency.

2. The role of volatile

volatile can be seen as a lightweight synchronized: it guarantees the "visibility" of a variable under multi-threaded concurrency. What is visibility? When a thread modifies the variable in its working memory, the new value is immediately flushed to main memory, so all threads see a consistent value of the variable. This makes volatile very useful for certain synchronization problems, with less overhead and a lower cost of use than synchronized.
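
To make the visibility guarantee concrete, here is a minimal sketch of my own (not from the original article) using the classic stop-flag pattern: the write to the volatile field in main is guaranteed to become visible to the worker thread.

    public class StopFlagDemo {
        // Without volatile, the worker could keep reading a stale cached value and loop forever.
        private static volatile boolean running = true;

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                while (running) {
                    // busy work; re-reads 'running' on every iteration
                }
                System.out.println("worker observed running = false, exiting");
            });
            worker.start();

            Thread.sleep(100);
            running = false;       // this write becomes visible to the worker thread
            worker.join();
        }
    }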
For example: when writing a singleton, besides the static inner class approach, another very popular style is volatile + DCL (double-checked locking):

    public class Singleton {
        private static volatile Singleton instance;

        private Singleton() {}

        public static Singleton getInstance() {
            if (instance == null) {
                synchronized (Singleton.class) {
                    if (instance == null) {
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }

This singleton is shared by all threads, no matter which thread created it. The volatile modifier matters here because it forbids reordering of the steps inside instance = new Singleton(); without it, another thread could observe a non-null reference to an object that has not finished construction.

Although volatile solves the visibility part of synchronization in a multi-threaded environment, the guarantee is only partial, because it does not provide atomicity of operations. In other words, it does not help when a write to the variable depends on the variable's own current value. The simplest example is the counting operation count++, which is really count = count + 1: the new value of count depends on its old value. So a volatile variable still has concurrency problems in this kind of operation.
To illustrate: because the operation is not atomic, thread 1 may read count as 4 and be about to write 5, while thread 2 also reads 4 before that write lands. Thread 1 completes its write and count becomes 5, but thread 2 is still working from 4, so even after thread 2 completes its write count is still 5, while we expected it to end at 6. That is a concurrency problem. If the statement were instead count = num + 1, and num is properly synchronized, then count has no such problem, because its final value does not depend on its own previous value.

Usage

Because volatile does not make operations atomic, a volatile variable whose update depends on its own value still has concurrency problems, count++ being the typical case. In a concurrent environment a counter written like this gains nothing from volatile:

    public class Counter {
        private volatile int count;

        public int getCount() {
            return count;
        }

        public void increment() {
            count++;    // not atomic: read, add, write
        }
    }
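
A quick way to see the problem is the minimal driver below (my own sketch, not from the original article): it runs the volatile-only Counter above from several threads, and the printed total is usually less than the expected 10000 because increments are lost.

    public class VolatileCounterDemo {
        public static void main(String[] args) throws InterruptedException {
            Counter counter = new Counter();
            Thread[] threads = new Thread[10];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    for (int j = 0; j < 1000; j++) {
                        counter.increment();
                    }
                });
                threads[i].start();
            }
            for (Thread t : threads) {
                t.join();
            }
            System.out.println(counter.getCount());   // typically less than 10000
        }
    }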

To keep count consistent in a concurrent environment, you can add synchronized to increment():

    public class Counter {
        private volatile int count;

        public int getCount() {
            return count;
        }

        public synchronized void increment() {
            count++;
        }
    }

Internal Implementation

volatile is implemented at the level of assembly instructions: the JIT emits a lock-prefixed instruction for a volatile write, which acts as a memory barrier, flushing the value to main memory and invalidating other processors' cached copies.
You can read this article for more information: the principle of volatile implementation.

3. The role of synchronized

synchronized is known as the synchronization lock and can be thought of as a simplified version of Lock. Being simplified, it is less flexible and generally considered not to perform as well as Lock, but it is easy to use: just wrap a method, or the block of code that needs synchronization, with it, and that code becomes synchronized. Any thread that wants to enter the region must first acquire the lock; otherwise it blocks outside, waiting for the thread currently holding the lock to release it, and only then acquires the lock and enters. Because it relies on this blocking strategy its performance is not outstanding, but it has the advantage of simplicity: you only need to declare it, and the block it guards executes atomically with respect to other threads synchronizing on the same lock.

Usage
    // Option 1: synchronized method
    public synchronized void increment() {
        count++;
    }

    // Option 2: synchronized block (an alternative to the method above)
    public void increment() {
        synchronized (Counter.class) {
            count++;
        }
    }
Internal Implementation

Conceptually, synchronized behaves like a reentrant lock paired with a single condition queue (the one used by wait/notify). That is why it can be seen as a simplified version of Lock: a Lock such as ReentrantLock is also reentrant, but a single lock can be associated with multiple Condition objects, as the sketch below shows.
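
For comparison, here is a minimal sketch of my own (not from the original article) of the same guarded increment written with ReentrantLock and a Condition; the class and method names are illustrative.

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    public class LockCounter {
        private final ReentrantLock lock = new ReentrantLock();
        // Unlike synchronized (one implicit wait set), a Lock can expose several Conditions.
        private final Condition notZero = lock.newCondition();
        private int count;

        public void increment() {
            lock.lock();
            try {
                count++;
                notZero.signalAll();      // wake threads waiting for a non-zero count
            } finally {
                lock.unlock();
            }
        }

        public int awaitNonZero() throws InterruptedException {
            lock.lock();
            try {
                while (count == 0) {
                    notZero.await();      // analogous to wait() inside synchronized
                }
                return count;
            } finally {
                lock.unlock();
            }
        }
    }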

4. The role of ThreadLocal

Unlike the other three keywords, ThreadLocal is not used to solve resource sharing in a multi-threaded environment. The other three guarantee the consistency of a variable from outside the threads, so that multiple threads accessing it see a consistent value, which is what resource sharing requires.

ThreadLocal was not designed to solve resource sharing; it provides variables that are local to a thread, so that each thread manages its own copy and the data of other threads does not affect it. There is no sharing involved: if the goal were sharing, each thread would need to see the results of the other threads' operations, whereas ThreadLocal keeps each value encapsulated inside its own thread, for that thread to manage by itself.

Usage

A ThreadLocal is usually declared as private static, as officially recommended; why static matters is related to memory leaks and is discussed later.
It exposes three main methods: set, get, and remove.

    public class ThreadLocalDemo {

        private static ThreadLocal<String> threadLocal = new ThreadLocal<String>() {
            @Override
            protected String initialValue() {
                return "Hello";
            }
        };

        static class MyRunnable implements Runnable {
            private int num;

            public MyRunnable(int num) {
                this.num = num;
            }

            @Override
            public void run() {
                threadLocal.set(String.valueOf(num));
                System.out.println("threadLocalValue:" + threadLocal.get());
            }
        }

        public static void main(String[] args) {
            new Thread(new MyRunnable(1)).start();
            new Thread(new MyRunnable(2)).start();
            new Thread(new MyRunnable(3)).start();
        }
    }

The results are as follows (the order may vary); each thread's ThreadLocal value is managed inside that thread, and the threads do not affect each other:

threadLocalValue:1
threadLocalValue:2
threadLocalValue:3

For the get method: if no value has been set for the ThreadLocal, it returns null by default. If you want an initial value, override the initialValue() method (as in the example above); get() then returns that initial value whenever nothing has been set. Since Java 8 the same thing can be written more compactly, as shown below.
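
A note not in the original article: since Java 8 the anonymous subclass can be replaced with ThreadLocal.withInitial, which takes a Supplier for the initial value.

    // Java 8+: equivalent to overriding initialValue() in an anonymous subclass.
    private static final ThreadLocal<String> threadLocal =
            ThreadLocal.withInitial(() -> "Hello");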

Note: after a thread has finished using a ThreadLocal value, call the remove method manually to clear the value held for that thread; together with declaring the ThreadLocal as static, this helps prevent memory leaks.
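
A common pattern for this (my own sketch, not from the original article; the RequestContext and handleRequest names are illustrative) is to pair each use of the value with remove() in a finally block, which matters especially in thread pools where threads are reused:

    public class RequestContext {
        private static final ThreadLocal<String> currentUser = new ThreadLocal<>();

        public static void handleRequest(String user, Runnable work) {
            currentUser.set(user);
            try {
                work.run();               // code inside can read currentUser.get()
            } finally {
                currentUser.remove();     // always clear, even if the work throws
            }
        }
    }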

Internal Implementation

Inside ThreadLocal there is a static nested class, ThreadLocalMap. Each thread that uses a ThreadLocal holds and maintains its own ThreadLocalMap object. The map's job is to map each ThreadLocal instance to the value stored for the current thread, and its entry keys hold the ThreadLocal through a weak reference (WeakReference).
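
As a rough model of how the pieces fit together, here is a simplified sketch of my own (it is not the actual JDK source): each Thread holds a map whose entries weakly reference the ThreadLocal key but strongly reference the stored value.

    import java.lang.ref.WeakReference;

    // Simplified model of the structure described above, not the real ThreadLocalMap.
    class SimplifiedThreadLocalMap {
        static class Entry extends WeakReference<ThreadLocal<?>> {
            Object value;                       // strong reference to the stored value

            Entry(ThreadLocal<?> key, Object value) {
                super(key);                     // weak reference to the ThreadLocal key
                this.value = value;
            }
        }

        Entry[] table = new Entry[16];          // the thread-private entry table
    }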

Memory leak Issues

When ThreadLocal comes up, memory leaks are often mentioned: when a thread no longer needs the value stored in a ThreadLocal, it should remove it manually. Why? It helps to look at how ThreadLocal is connected to Thread.
(The reference diagram, taken from the network, is not reproduced here.)

In that diagram, the dashed line represents a weak reference. As it shows, a Thread maintains a ThreadLocalMap object, and the keys of that map are weak references to the ThreadLocal objects that supplied the values. This leads to the following situation:
If the ThreadLocal is not declared static and no strong reference to it remains, then, because the lifetime of the thread is unpredictable, a GC cycle can reclaim the ThreadLocal object while the thread lives on. The corresponding key in the map then becomes null, but the value is still strongly referenced through the thread's map entry. The result is an entry with a null key whose value cannot be reclaimed as long as the thread is alive, which is a memory leak.

So, to avoid the leak: declare the ThreadLocal as static, and, in addition, once a thread no longer uses the value, call remove() on the ThreadLocal manually so that the entry can be reclaimed normally by GC. The ThreadLocalMap itself is only reclaimed after the current thread is destroyed.

Summary

volatile provides visibility but not atomicity of operations; synchronized costs slightly more than volatile but guarantees the atomicity of operations on a variable and keeps the variable consistent. A common best practice is to use them together.

1. synchronized exists to solve the problem of multi-threaded resource sharing. The synchronization mechanism trades time for space: access is serialized while the object is shared. There is a single copy of the variable, and all threads access it in turn.

2. The Atomic classes achieve the same goal through atomic operation instructions plus lock-free retries, giving non-blocking concurrency.

3. volatile covers part of the resource-sharing problem: as long as a write does not depend on the variable's own current value, a change to the variable is visible to every thread.

4. ThreadLocal, by contrast, does not exist to solve resource sharing; it provides thread-local variables and removes the need to pass a value around as a parameter. ThreadLocal trades space for time: access is parallel and each thread has its own exclusive copy of the variable, so threads do not affect each other.

