The volatile keyword in the Java concurrency mechanism


In multi-threaded concurrent programming, synchronized and volatile are both important keywords. In short, synchronized synchronizes methods and code blocks, while volatile ensures the visibility of shared variables.

The definition and implementation principle of volatile:

The Java language allows threads to access shared variables. To ensure that shared variables can be updated accurately and consistently, a thread would normally have to acquire an exclusive lock before touching the variable. The Java language also provides the volatile keyword, which we can think of as a lightweight lock.
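To make the "lightweight lock" comparison concrete, here is a minimal sketch (the class names are hypothetical): for plain reads and writes of a single field, a pair of synchronized accessors and a volatile field give the same visibility guarantee, but the volatile version never blocks.

// Hypothetical illustration: both holders give the same visibility guarantee
// for simple reads and writes of a single int field.
class SynchronizedHolder {
    private int value;
    public synchronized int get()       { return value; }  // exclusive lock on every access
    public synchronized void set(int v) { value = v; }
}

class VolatileHolder {
    private volatile int value;                             // "lightweight lock": no blocking
    public int get()       { return value; }
    public void set(int v) { value = v; }
}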

The underlying implementation of volatile relies on the processor: when we write to a shared variable declared volatile, the generated machine code contains an instruction with a lock prefix, and on a multi-core processor this lock-prefixed instruction causes two things to happen.

1. The data in the current processor's cache line is written back to system memory.

2. This write-back invalidates the data cached at that memory address by the other processors.

public class VolatileDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread1 t1 = new Thread1();
        t1.start();
        Thread1 t2 = new Thread1();
        t2.plzStop();          // flag is set to false before t2 even starts
        t2.start();
        Thread1 t3 = new Thread1();
        t3.start();
    }
}

class Thread1 extends Thread {
    private volatile boolean flag = true;

    @Override
    public void run() {
        int i = 0;
        while (flag) {         // without volatile, a stale value of flag could keep this loop running
            i++;
            try {
                System.out.println(i);
            } catch (Exception e) {
                e.printStackTrace();
            }
            if (i > 9) {
                flag = false;
            }
        }
    }

    public void plzStop() throws InterruptedException {
        flag = false;
    }
}
Only two of the three threads above should actually run their loops, since t2 has plzStop() called before it starts. Without the volatile modifier on flag, however, that is not guaranteed in a multi-threaded situation: the processor does not know when another thread has changed the value of flag, so it may not respond in time. Adding the volatile keyword, as in the code above, guarantees the visibility of the change. Let's talk about the principle of the volatile implementation.

To improve processing speed, the processor does not communicate directly with memory; it first reads the data from system memory into its internal cache and operates on that, without knowing when the data will be written back to memory. If a write is made to a volatile variable, the JVM sends a lock-prefixed instruction to the processor, which causes the cache line containing the variable to be written back to system memory. However, even after the write-back, if the values cached by the other processors are still the old ones, subsequent operations are still a problem. Therefore, on multiprocessor systems, a cache coherency protocol is implemented to keep each processor's cache consistent: when a processor discovers that the memory address corresponding to one of its cache lines has been modified, it marks that cache line as invalid, and when it next needs that data it re-reads it from system memory into its cache.
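If you want to see this lock-prefixed instruction for yourself, a minimal sketch follows (the class name is made up for illustration). It assumes the hsdis disassembler plugin is installed so that -XX:+PrintAssembly can produce output; on x86 the JIT-compiled volatile store is typically followed by an instruction such as lock addl $0x0,(%rsp), although the exact output depends on the JDK version and platform.

// Run with: java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly VolatileWriteBarrier
// (requires the hsdis disassembler plugin; class name is hypothetical)
public class VolatileWriteBarrier {
    private volatile long value;

    public static void main(String[] args) {
        VolatileWriteBarrier demo = new VolatileWriteBarrier();
        // Hot loop so the method is JIT-compiled and appears in the assembly dump.
        for (long i = 0; i < 1_000_000; i++) {
            demo.value = i;   // volatile write: where the lock-prefixed barrier is emitted
        }
        System.out.println(demo.value);
    }
}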

When multiple threads work with the flag variable, each thread copies the variable into its own cache and reads it from there. When one thread modifies the value, the flag in main memory changes, but if the other threads' caches do not respond to that change in time, the data goes out of sync.
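The effect described above can be reproduced with a small sketch (class name hypothetical; whether the reader thread actually hangs depends on the JVM and JIT, so treat this as an illustration rather than a guaranteed result):

public class StaleFlagDemo {
    // Deliberately NOT volatile, to illustrate the visibility problem described above.
    static boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            long spins = 0;
            while (running) {          // may keep reading a stale cached value
                spins++;
            }
            System.out.println("reader stopped after " + spins + " spins");
        });
        reader.start();
        Thread.sleep(1000);
        running = false;               // without volatile, the reader may never observe this write
        reader.join(5000);
        System.out.println("reader still alive? " + reader.isAlive());
        // Declaring 'running' as volatile guarantees the reader observes the update.
    }
}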

The following is a detailed explanation of the principles of volatile implementation.

1. The lock prefix instruction causes the processor's cache line to be written back to memory. The lock prefix instruction locks the cache and asserts the LOCK# signal on the bus during the lock operation. On modern processors it usually locks only the specific cache line involved, which is called a "cache lock": the cache line is written back to memory and the cache coherency mechanism ensures the atomicity of the modification. This effectively synchronizes our write to the cache, and compared with synchronized it is much more lightweight.

2. One processor's write-back of its cache to memory invalidates the corresponding caches of the other processors. When the write-back happens, each processor uses bus snooping to keep its internal cache consistent with system memory and with the caches of the other processors. So when our cache changes the state of flag, the copies of flag in cache two and cache three are forcibly invalidated, and the next time those processors access the same memory address they are forced to reload the flag variable from memory.

Optimizing the use of volatile

Appending bytes to improve performance

This approach looks magical, but if you understand the processor architecture the mystery disappears. For processors such as the Intel Core i7 and Core, the cache line is 64 bytes wide and partially filled cache lines are not supported. That means that if the head and tail nodes of a queue together take up less than 64 bytes, the processor reads them into the same cache line, and under multiple processors each processor ends up caching the same head and tail nodes. Padding each node out to 64 bytes by appending bytes keeps the head and the tail from being loaded into one cache line, so that locking one does not also lock the other and hurt efficiency. A sketch of this padding technique appears at the end of this section.

However, when using this technique, note that it does not apply if the processor's cache line is not 64 bytes wide; on a processor with 32-byte cache lines, padding to 32 bytes is enough. And if the shared variable is not read and written frequently, there is no need to pad at all.
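As a rough sketch of the append-bytes idea (modeled on the padded references used inside LinkedTransferQueue; the exact number of padding fields needed depends on the JVM's object header size and pointer compression, so the count here is an assumption):

import java.util.concurrent.atomic.AtomicReference;

// Padding fields push the object size past 64 bytes so two hot references,
// such as a queue's head and tail, do not share one cache line.
class PaddedAtomicReference<T> extends AtomicReference<T> {
    // 15 unused object references ~= 60 extra bytes of padding around the value field
    Object p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, pa, pb, pc, pd, pe;

    PaddedAtomicReference(T initialValue) {
        super(initialValue);
    }
}

class PaddedQueueEnds<T> {
    private final PaddedAtomicReference<T> head = new PaddedAtomicReference<>(null);
    private final PaddedAtomicReference<T> tail = new PaddedAtomicReference<>(null);
    // head and tail now live in different cache lines, so updating one
    // does not invalidate the cache line holding the other.
}

Note that some JVMs may eliminate unused fields during optimization; since JDK 8 the @sun.misc.Contended annotation (enabled for user code with -XX:-RestrictContended) provides a supported way to achieve the same cache-line isolation.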
