Application of volatile in Java concurrency programming

Source: Internet
Author: User
Tags visibility volatile

Both synchronized and volatile play an important role in multi-threaded concurrent programming. volatile can be thought of as a lightweight synchronized: it guarantees the visibility of shared variables in multi-processor development, meaning that when one thread modifies a shared variable, another thread can read the modified value. Used properly, a volatile variable costs less to use and execute than synchronized, because it does not cause thread context scheduling and switching. This section presents the implementation principle of volatile in detail.

I. Definition and implementation principles of volatile

The third edition of the Java Language Specification defines volatile as follows: the Java programming language allows threads to access shared variables, and to ensure that shared variables are updated accurately and consistently, a thread should normally acquire an exclusive lock before using them. The volatile keyword that Java provides is, in some cases, more convenient than a lock: if a field is declared volatile, the Java memory model ensures that all threads see a consistent value for this variable.
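The visibility guarantee can be illustrated with a small sketch (the class and field names below are illustrative, not from the original article). Without volatile, the JIT compiler may hoist the read of the flag out of the loop, so the worker thread could spin forever; with volatile, the worker is guaranteed to observe the update.

```java
// Minimal sketch of volatile's visibility guarantee.
public class VisibilityDemo {
    // volatile ensures the worker thread sees the write made by main
    private static volatile boolean running = true;

    static boolean demo() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait: the volatile field is re-read on every iteration
            }
        });
        worker.start();
        Thread.sleep(100);  // let the worker enter its loop
        running = false;    // volatile write: made visible to the worker
        worker.join(2000);  // without volatile, this join could time out
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker terminated: " + demo());
    }
}
```

If `running` were a plain boolean, the worker might cache the value and never terminate; declaring it volatile forces every read to see the latest write.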

Before we examine how volatile is implemented, let's look at the CPU terms related to its implementation principle. See the table below:

Term (English) — Description
Memory barrier (memory barriers) — A set of processor instructions used to enforce ordering restrictions on memory operations.
Cache line (cache line) — The smallest unit of storage that can be allocated in the cache. When the processor fills a cache line, it loads the entire line, which may require multiple main-memory read cycles.
Atomic operation (atomic operations) — An operation, or a series of operations, that cannot be interrupted.
Cache line fill (cache line fill) — When the processor recognizes that an operand read from memory is cacheable, it reads the entire cache line into the appropriate cache.
Cache hit (cache hit) — If the memory location targeted by a cache line fill is still cached the next time the processor accesses that address, the processor reads the operand from the cache instead of from memory.
Write hit (write hit) — When the processor writes an operand back to a cacheable memory area, it first checks whether the address is in a cache line; if a valid cache line exists, the processor writes the operand back to the cache instead of to memory. This is called a write hit.
Write miss (write miss) — A write to a memory area whose address is not currently held in a valid cache line.

How does volatile guarantee visibility? Let's look at the assembly instructions generated by the JIT compiler on an x86 processor to see what the CPU does when writing to a volatile variable.

The Java code is as follows:

instance = new Singleton(); // instance is a volatile variable

After conversion to assembly code, it looks like this:

0x01a3de1d: movb $0x0,0x1104800(%esi);
0x01a3de24: lock addl $0x0,(%esp);

When a shared variable modified by volatile is written, the second line of assembly code is added. Consulting the IA-32 Architecture Software Developer's Manual, the lock-prefixed instruction causes two things to happen on a multi-core processor:

1. Data in the current processor's cache line is written back to system memory.

2. This write-back operation invalidates the data cached for that memory address in other CPUs.

To improve processing speed, the processor does not communicate with memory directly; instead it first reads system memory into its internal caches (L1, L2, or others) before operating on it, and it is not known when the result will be written back to memory. If a write is performed on a variable declared volatile, the JVM sends a lock-prefixed instruction to the processor, which writes the cache line containing the variable back to system memory.

However, even after the write-back, the caches of other CPUs may still hold stale data for that address. Therefore, with multiple processors, a cache coherence protocol is implemented to keep the caches of all processors consistent: each processor sniffs the data propagated on the bus to check whether its cached value is out of date. When a processor finds that the memory address corresponding to one of its cache lines has been modified, it sets that cache line to an invalid state; when it later needs to operate on that data, it re-reads it from system memory into its cache.
The two implementation principles of volatile are therefore:

1. The lock-prefixed instruction causes the processor's cache line to be written back to memory.

2. One processor's cache write-back to memory causes the corresponding cache lines in other processors to be invalidated.
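The `instance = new Singleton()` write shown earlier is the volatile write at the heart of the double-checked locking idiom, which is a likely context for the assembly above (the class name Singleton is illustrative). A sketch:

```java
// Sketch of double-checked locking, where a volatile write of the
// reference prevents other threads from seeing a partially
// constructed object.
public class Singleton {
    // volatile forbids reordering the constructor with the reference write
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under lock
                    instance = new Singleton();  // volatile write
                }
            }
        }
        return instance;
    }
}
```

Without volatile, the JIT or CPU could publish the reference before the constructor finishes, so another thread passing the first null check might use an incompletely initialized object.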

