From the JMM Memory Model: A Ramble to the Happens-Before Principle


A note up front: the code below was typed without an IDE, so please forgive the formatting, as long as it reads clearly.
JMM Memory Model:
What is JMM?

JMM (the Java Memory Model) is a model that defines the abstract relationship between threads and main memory in the JVM: shared variables between threads are stored in main memory, and each thread has its own working memory holding copies of the variables it uses.

What is the happens-before principle?

Before we get to the happens-before principle, we have to talk about the problem the JMM creates. As mentioned above, each thread has its own working memory. Let's look at a code example:

public class Test {
    static int a = 0;

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 100000; i++) {
            new Thread(() -> a++).start();
        }
        Thread.sleep(3000); // crude wait so the threads finish before we print
        System.out.println(a);
    }
}
OK, as you can see, this starts 100,000 threads that each increment a. The result came out as 99130, not the expected 100000. Why? As you all know, a thread is the smallest unit of CPU scheduling, so under multithreading the CPU switches between threads unpredictably (even if you set priorities, they are only weights; they cannot guarantee a strict order of execution). And our a++ is not an atomic operation (it is actually 4 steps): when a thread gets a CPU time slice, the CPU first reads a from main memory into working memory (assume a = 1 at this point) as a temporary variable, then increments that temporary variable, then writes the result back to main memory.

But suppose a thread has created its temporary variable and not yet done the increment when its CPU time slice suddenly switches to another thread, and that other thread completes its own increment, so a = 2 in main memory. The CPU then switches back to the previous thread. Because each thread has a program counter recording which line of code it was on, the first thread simply continues with its +1 operation. But the copy of a in the first thread's working memory is still 1, so after the +1 it is still only 2, and flushing that back leaves main memory at a = 2. These two threads each ran a++ once, but a in main memory only increased by one.
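As a rough illustration of those 4 steps: disassembling the class with javap -c shows a++ on a static field compiling to four bytecode instructions (a sketch; the constant-pool index #2 is illustrative):

    getstatic  #2  // step 1: push the current value of a onto the operand stack
    iconst_1       // step 2: push the constant 1
    iadd           // step 3: add them
    putstatic  #2  // step 4: write the sum back to a
    // a context switch anywhere between getstatic and putstatic can lose an update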
So how do we avoid this situation? This is where the happens-before principle comes in.

Happens-before principle:
1: Program order: within a single thread, the program must appear to run in the order in which it was written; instructions must not be visibly reordered. What consequences can instruction reordering cause?
public class Test {
    int a = 0;
    boolean b = false;

    public void write() {
        a = 1;
        b = true;
    }

    public void read() {
        if (b) {
            a = a + 1;
        }
    }
}
After instruction reordering, it may become:
public class Test {
    int a = 0;
    boolean b = false;

    public void write() {
        b = true; // reordered: the write to b now comes first
        a = 1;
    }

    public void read() {
        if (b) {
            a = a + 1;
        }
    }
}
If there are two threads at this point:

Test t = new Test();
new Thread(() -> t.write()).start();
new Thread(() -> t.read()).start();
Assume the write() thread is guaranteed to execute before the read() thread. With the reordering above, read() may enter the if block at the moment b = true but before a = 1 has executed, leaving a with a final value of 1. But the original intent of our code was for a = 1 to execute first: if read() sees b == true, then a must already be 1, the increment makes it 2, and a final value of 1 should be unreachable. The instruction reordering has therefore broken the intent of our code.

So under what circumstances are instructions not reordered? There are two cases. One is when adjacent lines of code have a data dependency, for example:
int a = 1;
int b = a+1;
No reordering can happen here, because b depends on a.
The other is the famous volatile keyword, which uses memory barriers to ensure that instructions are not reordered across it. And finally, to extend the point: long and double are 64-bit data types, and reading them into working memory is not guaranteed to be atomic; a JVM may read 32 bits at a time, in two reads. But if you add the volatile keyword, the memory barrier guarantees the 64 bits are read in one go.
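Tying this back to the write/read example above, here is a minimal sketch of the volatile fix (same class layout as before, with only the declaration of b changed):

public class Test {
    int a = 0;
    volatile boolean b = false; // memory barriers stop a = 1 being reordered past the write to b

    public void write() {
        a = 1;    // guaranteed to happen-before the volatile write below
        b = true; // volatile write: a = 1 is flushed to main memory with it
    }

    public void read() {
        if (b) {       // volatile read: seeing true means a = 1 is visible too
            a = a + 1; // so a becomes 2, matching the code's intent
        }
    }
}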

2: Lock rule: the unlock of a lock must happen-before any subsequent locking of that same lock. In other words, if I have not released the lock, you cannot acquire it, which guarantees serial execution (see the synchronized sketch after this list).
3: For shared data, a modification made by one thread must be visible to any thread that subsequently operates on it.
4: Transitivity: given three operations A, B, and C, if A happens-before B and B happens-before C, then A happens-before C.
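As a minimal sketch of rules 2 and 3 in action (the SyncCounter name is mine, for illustration): with synchronized, the unlock released when one thread leaves the method happens-before the lock acquired by the next thread entering it, so every thread sees the previous increments:

public class SyncCounter {
    private int a = 0;

    // rule 2: one thread's unlock on exit happens-before the next thread's lock on entry,
    // so the increments are serialized; rule 3: each thread sees the previous thread's write
    public synchronized void increment() {
        a++;
    }

    public synchronized int get() {
        return a;
    }
}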

OK, besides the memory barrier, volatile actually has one more role, which is point 3: guaranteeing visibility. How does it guarantee it? In effect, it bypasses the working-memory copy: every time the CPU reads the variable it must go to main memory, so once one thread modifies the data, the change is visible to all threads. Unfortunately, this visibility does not guarantee thread safety, because thread safety requires two guarantees: visibility and atomicity.
Suppose there are a thread 1, a thread 2, and a shared variable a = 1. Thread 1 reads a into its working memory to do an a++ operation; before it has written the result back, thread 2 also takes a into its working memory and does its own ++. Then no matter which one writes back first, a is only incremented once. Therefore, volatile can only guarantee thread safety for plain assignment operations, such as: boolean bool = true;
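As a minimal sketch of the kind of plain assignment that volatile alone does make safe (the Worker class and its names are mine, for illustration), consider a stop flag:

public class Worker implements Runnable {
    // a plain write like running = false is atomic by itself;
    // volatile adds the visibility so run() sees it promptly
    private volatile boolean running = true;

    public void stop() {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            // do work until another thread calls stop()
        }
    }
}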

Finally, I recommend that you look at the source code of CAS (compare-and-swap, as used by classes such as AtomicInteger): using volatile plus an optimistic lock, it achieves thread safety without needing synchronized.
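As a minimal sketch of that volatile-plus-optimistic-lock pattern, here is a counter written against AtomicInteger's public API (AtomicInteger's own getAndIncrement uses a similar retry loop internally):

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    // the value inside AtomicInteger is a volatile int, which gives visibility;
    // compareAndSet supplies the atomicity that volatile alone lacks
    private final AtomicInteger a = new AtomicInteger(0);

    public void increment() {
        int current;
        do {
            current = a.get();                            // volatile read of the current value
        } while (!a.compareAndSet(current, current + 1)); // retry if another thread got in first
    }

    public int get() {
        return a.get();
    }
}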
