The previous two posts briefly illustrated some of the problems with synchronization. On top of the underlying synchronized mechanism there are two complementary techniques worth sharing: the volatile keyword and ThreadLocal. Used sensibly in the right scenarios, they can noticeably improve concurrent performance. Below I try to write up my own understanding of this material; there are bound to be gaps in it, and I will correct the article as my understanding improves.
Once you understand the mechanics of synchronized and intrinsic object locks, you can write correctly synchronized concurrent code more reliably, but you will run into another big problem: performance. Developers have come up with several good optimizations for the potentially large cost of synchronized. The most common are minimizing lock-holding time and lock splitting. Shortening the time a hotly contended lock is held can greatly improve throughput, so do not casually declare synchronized on a whole method; apply it only to the block of code that actually needs protection. The other technique, lock splitting, shrinks the granularity of contention: hundreds of threads competing for the lock of a single object perform far worse than a few or a few dozen threads each competing for locks on separate objects. The classic application is the implementation of ConcurrentHashMap, which maintains an array of segment locks (16 by default, adjustable via a constructor parameter) so that thread contention is confined to each segment. For a map storing tens of thousands of entries the benefit is self-evident: the competing threads are dispersed into many small contentions instead of all piling up at one doorway, waiting.
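The lock-splitting idea can be sketched with a hypothetical striped counter (the class and method names here are my own illustration, not ConcurrentHashMap's actual internals):

```java
// Sketch of lock striping: N locks guard N independent segments, so threads
// that hash to different segments never contend with each other.
public class StripedCounter {
    private static final int SEGMENTS = 16;           // same default as ConcurrentHashMap
    private final Object[] locks = new Object[SEGMENTS];
    private final long[] counts = new long[SEGMENTS];

    public StripedCounter() {
        for (int i = 0; i < SEGMENTS; i++) {
            locks[i] = new Object();
        }
    }

    // Hash the key to a segment; only that one segment's lock is taken.
    public void increment(Object key) {
        int seg = (key.hashCode() & 0x7fffffff) % SEGMENTS;
        synchronized (locks[seg]) {
            counts[seg]++;
        }
    }

    // Sum across segments, taking each segment lock in turn.
    public long total() {
        long sum = 0;
        for (int i = 0; i < SEGMENTS; i++) {
            synchronized (locks[i]) {
                sum += counts[i];
            }
        }
        return sum;
    }
}
```

With a single lock, every increment from every thread would serialize; with 16 segment locks, up to 16 threads can increment different segments at the same time.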
Still, the performance cost of synchronized and similar mechanisms pushes developers to study the locking machinery and look for lower-cost ways to ensure correct concurrent behavior. volatile is often called a "lightweight synchronized": it simplifies synchronization code and costs less than contended locking that the JVM cannot optimize away, but abusing it will not preserve the program's correctness. A lock has two properties: mutual exclusion and visibility. Mutual exclusion guarantees that only one thread at a time holds the object lock while operating on shared data, which makes the operation atomic; visibility guarantees that the next thread to acquire the lock sees the modifications made before the lock was released. volatile provides only the visibility guarantee, without a lock, and provides no atomicity at all. It works by preventing the value of a volatile variable from being held privately in a processor register or cache: writes are flushed out to main memory, and reads always go back to main memory, which is what makes the latest value visible to other threads. From this implementation you can already see where volatile fits: many concurrent reads, with very few (ideally one-time) writes, plus some further restrictions described below.
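The "one-time write, many reads" pattern above is the canonical safe use of volatile: a shutdown flag. This is a minimal sketch (the class name is my own):

```java
// Minimal sketch of volatile's visibility guarantee: one thread writes the
// flag once; worker threads that poll it are guaranteed to see the update.
// Without volatile, a reader's loop might never observe the write.
public class ShutdownFlag {
    private volatile boolean running = true;   // visibility only, no mutual exclusion

    public void shutdown() {       // the single, one-time write
        running = false;
    }

    public boolean isRunning() {   // many cheap reads, always the latest value
        return running;
    }
}
```

A worker thread would simply loop with `while (flag.isRunning()) { ... }`; no lock is needed because the write does not depend on the variable's current value.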
Because volatile cannot make "read-modify-write" operations atomic (the java.util.concurrent.atomic package does cover those operations, mainly via CAS, the compare-and-swap mechanism; I may try to write about it later), operations such as ++, --, +=, and -= on a variable are not correct even when it is declared volatile. Around this principle we can roughly state the condition under which volatile may replace synchronized: the write to the variable must not depend on its current value. So besides the operators just mentioned, code like the following also breaks the rule:
Java code
private volatile boolean flag;

// check-then-act: another thread may flip flag between the test and the write
if (!flag) {
    flag = true;
}
Such a check-then-act sequence also violates volatile's conditions of use and is likely to cause bugs. So the simplest safe scenario for volatile is a one-time write followed by a large number of reads that never change the variable again (if only one thread ever touched it, there would be no concurrency to worry about at all). The strength of the keyword lies in multithreaded reading: reads cost about as little as reading an ordinary variable in single-threaded code, yet they are guaranteed to see the latest value. Exploiting this, we can combine volatile with synchronized into a cheap read path and a safe write path, much like a low-overhead read-write lock:
Java code
/**
 * User: yanxuxin
 * Date: Dec 12, 2009
 * Time: 8:28:29 PM
 */
public class AnotherSyncSample {

    private volatile int counter;

    // Cheap read path: volatile guarantees the latest value without a lock.
    public int getCounter() {
        return counter;
    }

    // Write path must be synchronized: counter++ is a read-modify-write
    // operation, which volatile alone cannot make atomic.
    public synchronized void add() {
        counter++;
    }
}
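As mentioned earlier, the java.util.concurrent.atomic package handles read-modify-write operations via CAS. An equivalent counter can therefore drop the lock entirely; this sketch (my own naming) shows the lock-free alternative to the synchronized add() above:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Lock-free alternative: AtomicInteger implements increment as a CAS
// (compare-and-swap) retry loop instead of taking a monitor lock.
public class AtomicSample {
    private final AtomicInteger counter = new AtomicInteger();

    public int getCounter() {
        return counter.get();          // volatile-style read, always current
    }

    public void add() {
        counter.incrementAndGet();     // atomic read-modify-write via CAS
    }
}
```

Under heavy contention the two approaches trade differently: the synchronized version blocks waiting threads, while the CAS version retries without blocking, which usually wins for a simple counter.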