Why is the cost of volatile lower than the cost of synchronization?
The cost of synchronization is mainly determined by its scope: if you can narrow the scope of synchronization, you can significantly improve program performance.
The scope of volatile is a single variable, so its synchronization cost is very low.
What is the principle behind volatile?
The semantics of volatile, in effect, tell the processor: do not cache this variable in working memory; read and write it directly in main memory. (Working memory is described in detail in the Java Memory Model.)
Therefore, when multiple cores or threads access the variable, they operate on main memory directly, which is essentially sharing the variable.
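The main-memory visibility described above can be sketched with a minimal example (class and field names are my own, not from the original): a writer thread clears a volatile flag, and a reader thread busy-waiting on that flag is guaranteed to see the update and terminate.

```java
// Sketch: volatile makes one thread's write visible to another.
public class VolatileVisibility {
    // Without volatile, the reader thread might cache the old value
    // and never observe running = false.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait until the writer clears the flag
            }
            System.out.println("reader saw running = false");
        });
        reader.start();

        Thread.sleep(100);  // let the reader enter its loop
        running = false;    // volatile write: published to main memory
        reader.join();      // terminates because the read sees the update
    }
}
```

With a plain (non-volatile) boolean, the JIT compiler is allowed to hoist the read out of the loop, and the reader may spin forever; the volatile keyword forbids that optimization.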
What are the advantages of volatile?
1. Higher program throughput
2. Less code needed to implement multithreading
3. Better program scalability
4. Easier to understand, with a low learning cost
What's the disadvantage of volatile?
1. Easy to get wrong
2. Harder to design correctly
The dirty-data problem with volatile
Volatile only guarantees the visibility of a variable; it does not guarantee atomicity.
An example of a race condition with volatile:
public class TestRaceCondition {
    private volatile int i = 0;

    public void increase() {
        i++;
    }

    public int getValue() {
        return i;
    }
}
When multiple threads execute the increase method, is i guaranteed to increase linearly?
The answer is no.
Reason:
The increase method performs i++, which is equivalent to i = i + 1, a read-modify-write sequence.
A thread may read the latest value of i from main memory, but before it adds 1 and writes the result back, other threads may have read the same value and already written their own results.
When the current thread finally writes back, it overwrites those earlier updates (a lost update).
As a result, calling increase 100 times may produce a result less than 100.
In practice, this problem usually only shows up under heavy load and high concurrency.
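One common fix for this lost-update problem (a sketch, not from the original; the class name SafeCounter is my own) is to replace the volatile int with java.util.concurrent.atomic.AtomicInteger, whose incrementAndGet performs the read-modify-write as a single atomic operation:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {
    // AtomicInteger makes the increment atomic, so no update is lost.
    private final AtomicInteger i = new AtomicInteger(0);

    public void increase() {
        i.incrementAndGet(); // atomic read-modify-write
    }

    public int getValue() {
        return i.get();
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        Thread[] threads = new Thread[4];
        // 4 threads x 25 increments = 100 calls to increase()
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int n = 0; n < 25; n++) {
                    counter.increase();
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) {
            th.join();
        }
        System.out.println(counter.getValue()); // always 100
    }
}
```

Alternatively, declaring increase as synchronized would also work, but at the cost of a heavier lock; the atomic class typically uses a CPU compare-and-swap instruction instead.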