The claim that atomic operations in Java are thread-safe comes up often. Atomic operations, by definition, cannot be interrupted and are therefore considered thread-safe. In fact, though, some atomic operations are not necessarily thread-safe.
The problem arises because programmers try to minimize their use of the synchronized keyword. Synchronization can hurt performance, although the cost varies from JVM to JVM, and in modern JVMs synchronization performance keeps improving. Still, there is a performance penalty for using synchronization, and programmers will always try to make their code more efficient, so the issue persists.
In Java, assignment to a value that is 32 bits or smaller is atomic. On a 32-bit hardware platform, the primitive types other than double and long are usually represented with 32 bits, while double and long usually use 64-bit representations. In addition, object references are implemented as native pointers and are usually 32 bits wide. Operations on these 32-bit types are atomic.
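Writes to a non-volatile long or double, by contrast, carry no such atomicity guarantee from the language. Below is a minimal sketch (the class name TornLongDemo and the iteration count are illustrative, not from the original text) that probes for a "torn" 64-bit write: the writer only ever stores 0L or -1L, so any other observed value means the reader saw half of a write. On most modern 64-bit JVMs nothing is ever reported, which is exactly the point: the behavior is permitted but not required.

public class TornLongDemo
{
    static long value; // deliberately not volatile

    public static void main(String[] args) throws InterruptedException
    {
        Thread writer = new Thread(() ->
        {
            for (long i = 0; i < 100_000_000L; i++)
            {
                value = (i % 2 == 0) ? 0L : -1L; // all-zero or all-one bit patterns
            }
        });
        writer.start();

        while (writer.isAlive())
        {
            long v = value;
            if (v != 0L && v != -1L)
            {
                System.out.println("Torn read observed: " + Long.toHexString(v));
            }
        }
        writer.join();
        System.out.println("Done.");
    }
}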
The fact that these primitive types typically use 32-bit or 64-bit representations leads to another small myth: that the size of a primitive type is guaranteed by the language. This is wrong. The Java language guarantees the representable range of a primitive type, not its storage size inside the JVM. An int therefore always has the same representable range, yet it might be implemented with 32 bits on one JVM and with 64 bits on another. To repeat: what is guaranteed on every platform is the representable range, and operations on values of 32 bits or smaller are atomic.
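A trivial sketch of what the language does pin down (the class name is illustrative): the range constants are the same on every JVM, whatever storage a particular implementation uses internally.

public class PrimitiveRanges
{
    public static void main(String[] args)
    {
        // These bounds are fixed by the language specification on every platform.
        System.out.println("int:  " + Integer.MIN_VALUE + " .. " + Integer.MAX_VALUE);
        System.out.println("long: " + Long.MIN_VALUE + " .. " + Long.MAX_VALUE);
    }
}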
So under what circumstances is an atomic operation not thread-safe? The key point is that it may well be thread-safe, but that is not guaranteed! Java allows a thread to keep a copy of a variable in its own working memory. Letting threads work on local private copies, instead of going to main memory on every access, is meant to improve performance. Consider the following class:
class RealTimeClock
{
    private int clkID;

    public int clockID()
    {
        return clkID;
    }

    public void setClockID(int id)
    {
        clkID = id;
    }
    //...
}
Now consider an instance of RealTimeClock and two threads, T1 and T2, calling setClockID() and clockID() at about the same time, with the following sequence of events:
T1 calls setClockID(5)
T1 writes 5 into its own private working memory
T2 calls setClockID(10)
T2 writes 10 into its own private working memory
T1 calls clockID(), which returns 5
The 5 is returned from T1's private working memory
The call to clockID() should return 10, because that is the value set by T2, but 5 is returned because the reads and writes went to private working memory rather than to main memory. The assignments are certainly atomic, yet thread safety is not assured, because the JVM is allowed to behave this way and its behavior is not guaranteed.
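As a rough sketch of that interleaving (the class name ClockDemo and the sleep delays are illustrative, and whether a stale 5 is actually observed depends on the JVM and on timing), the scenario could be exercised like this:

public class ClockDemo
{
    public static void main(String[] args) throws Exception
    {
        RealTimeClock clock = new RealTimeClock();

        Thread t1 = new Thread(() ->
        {
            clock.setClockID(5);           // T1 writes 5, possibly only to its working copy
            sleep(100);                    // give T2 time to write 10
            System.out.println("T1 reads " + clock.clockID()); // may still print 5
        });

        Thread t2 = new Thread(() ->
        {
            sleep(50);
            clock.setClockID(10);          // T2 writes 10, possibly only to its working copy
        });

        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }

    private static void sleep(long millis)
    {
        try
        {
            Thread.sleep(millis);
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
        }
    }
}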
The two threads each hold their own private copy, which need not match main memory. To prevent this behavior, a thread's private copy must be reconciled with main memory, and that happens under the following two conditions:
1. The variable is declared volatile
2. The variable is accessed inside a synchronized method or synchronized block
If the variable is declared volatile, it is reconciled with main memory on every access. This reconciliation is guaranteed by the Java language and is atomic, even for a 64-bit value. (Note that many JVMs have not implemented the volatile keyword correctly; you can find more information at www.javasoft.com.) In addition, if a variable is accessed inside a synchronized method or synchronized block, it is synchronized with main memory when the lock is acquired at the entry to the method or block and again when the lock is released on exit.
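A sketch of the first fix (the class name VolatileRealTimeClock is illustrative): marking the field volatile forces every read and write to go to main memory, so clockID() observes the most recent setClockID().

class VolatileRealTimeClock
{
    private volatile int clkID; // every access is reconciled with main memory

    public int clockID()
    {
        return clkID;
    }

    public void setClockID(int id)
    {
        clkID = id;
    }
}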
Either approach guarantees that clockID() returns 10, the correct value. Which one performs better depends on how often the variable is accessed. If you update many variables, volatile may be slower than synchronization: a volatile variable is reconciled with main memory on every single access, whereas with synchronization the variables are reconciled with main memory only when the lock is acquired and when it is released. On the other hand, synchronization makes the code less concurrent.
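For comparison, a sketch of the synchronized variant (the class name is illustrative): the field is reconciled with main memory at lock acquisition and release, and the two methods can no longer run at the same time on the same instance.

class SynchronizedRealTimeClock
{
    private int clkID;

    public synchronized int clockID()
    {
        return clkID;
    }

    public synchronized void setClockID(int id)
    {
        clkID = id;
    }
}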
If you update many variables and do not want to pay the cost of reconciling with main memory on every access, or if you want to rule out concurrent access for other reasons, consider using synchronization.