Fixing the Java Memory Model, Part 2 -- Brian Goetz


Reposted from Java concurrency guru Brian Goetz: http://www.ibm.com/developerworks/cn/java/j-jtp03304/ (Chinese address)

http://www.ibm.com/developerworks/java/library/j-jtp03304/index.html (English address)

What will change in the JMM under JSR 133?

JSR 133, which has been active for nearly three years, recently released a public recommendation on how to fix the Java Memory Model (JMM). In Part 1 of this series, columnist Brian Goetz introduced several serious flaws in the original JMM, flaws that gave surprisingly difficult semantics to concepts that were originally thought to be simple. This month, he describes how the semantics of volatile and final change under the new JMM, changes that bring their semantics in line with most developers' intuition. Some of these changes were already present in JDK 1.4; others arrive with JDK 1.5. Please share your thoughts with the author and other readers in the discussion forum.

Writing concurrent code is difficult, and the language should not add to that difficulty. Although the Java platform has included support for threads from the outset, including a cross-platform memory model intended to give properly synchronized programs the "write once, run anywhere" guarantee, the original memory model had some holes in it. Although many Java platforms provided stronger guarantees than the JMM required, those holes made it impossible to easily write concurrent Java programs that would run correctly on any platform. So in May 2001, JSR 133 was formed to repair the Java memory model. Last month I discussed some of those holes; this month we'll talk about how they are being plugged.

Visibility, after the fix

One of the key concepts needed to understand the JMM is visibility: when thread A executes someVariable = 3, how do you know when other threads can see the value 3 written by thread A? There are several reasons why another thread might not immediately see the value 3 for someVariable: the compiler may have reordered instructions for more efficient execution, someVariable may be cached in a register, its value may have been written to the writing processor's cache but not yet flushed to main memory, or a stale (invalid) value may still sit in the reading processor's cache. The memory model determines when a thread can reliably "see" writes to a variable made by another thread. In particular, the memory model defines semantics for volatile, synchronized, and final that make guarantees of visibility of memory operations across threads.

When a thread exits a synchronized block, releasing the associated monitor, the JMM requires that the local processor cache be flushed to main memory. (In fact, the memory model does not talk about caches; it talks about an abstraction, local memory, that covers caches, registers, and other hardware and compiler optimizations.) Similarly, as part of acquiring the monitor when entering a synchronized block, the local cache is invalidated, so that subsequent reads go directly to main memory rather than to the local cache. This process guarantees that when a variable is written by one thread inside a synchronized block protected by a given monitor, and read by another thread inside a synchronized block protected by the same monitor, the write to the variable will be visible to the reading thread. If there is no synchronization, the JMM makes no such guarantee, which is why synchronization (or its younger sibling, volatile) must be used whenever multiple threads access the same variable.
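The guarantee described above can be sketched in a few lines. This is an illustrative example (the class and field names are not from the article): thread A writes a shared field inside a synchronized block, thread B reads it inside a block on the same monitor, and the release/acquire pair is what makes the write visible.

```java
class SyncVisibility {
    private final Object lock = new Object();
    private int sharedValue;  // guarded by lock

    // Called by thread A: releasing the monitor at the end of the
    // block flushes this write out of local memory.
    void write(int v) {
        synchronized (lock) {
            sharedValue = v;
        }
    }

    // Called by thread B: acquiring the same monitor invalidates
    // stale locally cached values, so this read sees the most
    // recent write made under that monitor.
    int read() {
        synchronized (lock) {
            return sharedValue;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncVisibility s = new SyncVisibility();
        Thread a = new Thread(() -> s.write(3));
        a.start();
        a.join();
        System.out.println(s.read());  // prints 3
    }
}
```

Note that both threads must synchronize on the same monitor; locking two different objects establishes no visibility relationship between them.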

New guarantees for volatile

The original semantics of volatile guaranteed only that reads and writes of volatile fields would go directly to main memory, rather than to registers or the local processor cache, and that a thread's actions on a volatile variable would be performed in the order the thread requested them. In other words, the old memory model guaranteed only the visibility of the variable being read or written, and made no guarantee about the visibility of writes to other variables. Although this was easy to implement, it turned out to be less useful than originally thought.

Although reads and writes of volatile variables could not be reordered with reads and writes of other volatile variables, they could still be reordered with reads and writes of nonvolatile variables. In Part 1, we described how the code in Listing 1 was not sufficient (under the old memory model) to guarantee that thread B would see the correct value of configOptions and of all the variables reachable indirectly through configOptions (such as the elements of the Map), because the initialization of configOptions could have been reordered with the write to the volatile initialized variable.

Listing 1. Using a volatile variable as a "guard"

    Map configOptions;
    char[] configText;
    volatile boolean initialized = false;

    // In thread A
    configOptions = new HashMap();
    configText = readConfigFile(fileName);
    processConfigOptions(configText, configOptions);
    initialized = true;

    // In thread B
    while (!initialized)
        sleep();
    // use configOptions

Unfortunately, this is a common use case for volatile: using a volatile field as a "guard" to indicate that a set of shared variables has been initialized. The JSR 133 Expert Group decided that it made sense for volatile reads and writes not to be reorderable with other memory operations, precisely to support this and similar use cases. Under the new memory model, if thread A writes the volatile variable V and thread B then reads V, any variable values that were visible to A at the time it wrote V are now guaranteed to be visible to B. The result is a more useful semantics for volatile, at the cost of a somewhat greater performance impact when accessing volatile fields.

What happens before what?

Actions such as reads and writes of variables are ordered within a thread according to the so-called "program order": the order in which the semantics of the program say they should happen. (The compiler actually has some freedom to play with program order within a thread, as long as as-if-serial semantics are preserved.) Actions in different threads are not necessarily ordered with respect to each other at all: if you start two threads and they never synchronize on a common monitor or touch a common volatile variable, it is entirely impossible to predict the order in which actions in one thread happen (or appear to a third thread to happen) relative to actions in the other.

In addition, ordering guarantees are created when a thread is started, when one thread joins with another, when a thread acquires or releases a monitor (enters or exits a synchronized block), or when a thread accesses a volatile variable. The JMM describes the ordering guarantees that a program gets when it uses synchronization or volatile variables to coordinate activities across multiple threads. Informally, the new JMM defines an ordering called happens-before, a partial order over all actions in the program, as follows:

    • Each action in a thread happens-before every action in that thread that comes later in program order (that is, actions within a single thread are executed in code order)
    • An unlock of a monitor happens-before every subsequent lock of that same monitor
    • A write to a volatile field happens-before every subsequent read of that same volatile
    • A call to Thread.start() on a thread happens-before every action in the started thread
    • All actions in a thread happen-before any other thread successfully returns from a Thread.join() on that thread
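The Thread.start() and Thread.join() rules can be seen in a short sketch (the class and field names here are illustrative, not from the article). The field is neither volatile nor accessed under a lock; visibility comes entirely from the two happens-before edges:

```java
class StartJoinOrdering {
    // Plain field: visibility here comes only from the
    // Thread.start() and Thread.join() happens-before edges.
    static int data = 0;

    public static void main(String[] args) throws InterruptedException {
        data = 1;                  // happens-before t.start()
        Thread t = new Thread(() -> {
            // Guaranteed to see data == 1, written before start()
            data = 2;              // happens-before join() returning
        });
        t.start();
        t.join();
        System.out.println(data);  // guaranteed to print 2
    }
}
```

Remove either the start() or the join() edge, and the reads above become data races with no guaranteed outcome.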

The third of these rules, governing reads and writes of volatile variables, is the new one, and it fixes the problem with the example in Listing 1. Because the write to the volatile initialized happens after the initialization of configOptions, the use of configOptions happens after the read of initialized, and the read of initialized happens after the write to initialized, we can conclude that thread A's initialization of configOptions happens-before thread B's use of configOptions. Therefore, configOptions and the variables reachable through it will be visible to thread B.

Figure 1. Using synchronization to guarantee the visibility of memory writes across threads

Data races

When a variable is read by multiple threads and written by at least one thread, and the reads and writes are not ordered by a happens-before relationship, the program is said to have a data race, and is therefore not a "properly synchronized" program.
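A minimal illustration of a data race (the class and names are illustrative): a plain int field is incremented by two threads with no synchronization and no volatile, so nothing orders the conflicting accesses and updates can be lost.

```java
class RacyCounter {
    // No volatile, no synchronization: concurrent increments
    // of this field are a data race.
    private static int count = 0;

    static void incrementMany(int n) {
        for (int i = 0; i < n; i++) {
            count++;  // read-modify-write, not atomic
        }
    }

    static int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> incrementMany(100_000));
        Thread t2 = new Thread(() -> incrementMany(100_000));
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Often prints less than 200000: increments are lost, and the
        // JMM makes no guarantee about which writes each thread sees.
        System.out.println(get());
    }
}
```

Guarding count with a common monitor (or, in JDK 1.5, using java.util.concurrent.atomic.AtomicInteger) establishes the missing happens-before edges and restores the expected total.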

Does this fix the double-checked locking problem?

One of the proposed fixes for double-checked locking was to make the field holding the lazily initialized instance a volatile field. (See Resources for a description of the double-checked locking problem and of why the proposed algorithmic fixes did not work.) Under the old memory model, this did not make double-checked locking thread-safe, because writes to the volatile field could still be reordered with writes to other, nonvolatile fields, such as the fields of the newly constructed object, so the volatile instance reference could still refer to an object that was not yet fully constructed.

Under the new memory model, this "fix" does make the double-checked locking idiom thread-safe. But that still doesn't mean you should use it! The whole point of double-checked locking was that it was supposed to be a performance optimization, designed to eliminate synchronization on the common code path, largely because synchronization was relatively expensive in early JDKs. Not only has uncontended synchronization become much cheaper since then, but the new semantics of volatile make it considerably more expensive on some platforms than the old semantics. (In effect, each read or write of a volatile field is like "half" a synchronization: a volatile read has the same memory semantics as a monitor acquire, and a volatile write has the same semantics as a monitor release.) So if the goal of double-checked locking was to offer better performance than the more straightforward synchronized approach, the "fixed" version doesn't help much either.
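For reference, the volatile-based "fix" described above looks roughly like this under the new JMM (a sketch; Something is a placeholder class, and the local-variable caching of the volatile read is a common refinement, not something the article specifies). The volatile write cannot be reordered with the writes that initialize the object, which is exactly what the old model failed to guarantee:

```java
public class DclHolder {
    // volatile is what makes this safe under the new JMM: the write
    // to instance is guaranteed to happen after the writes that
    // construct the Something object, from every reader's perspective.
    private static volatile Something instance;

    public static Something getInstance() {
        Something result = instance;   // one volatile read on the fast path
        if (result == null) {
            synchronized (DclHolder.class) {
                result = instance;     // re-check under the lock
                if (result == null) {
                    instance = result = new Something();
                }
            }
        }
        return result;
    }
}

class Something {
    final int value = 42;
}
```

Even though this is now correct, the fast path still pays for a volatile read, which is why the holder-class idiom below remains the better choice.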

Instead of double-checked locking, use the Initialize-On-Demand Holder Class idiom, which provides lazy initialization, is thread-safe, and is faster and less confusing than double-checked locking:

Listing 2. Initialize-on-demand Holder Class Idiom

    private static class LazySomethingHolder {
        public static Something something = new Something();
    }
    ...
    public static Something getInstance() {
        return LazySomethingHolder.something;
    }

This idiom derives its thread safety from the fact that operations that are part of class initialization (such as static initializers) are guaranteed to be visible to all threads that use the class, and derives its lazy initialization from the fact that the inner class is not loaded until some thread references one of its fields or methods.

Initialization safety

The new JMM also seeks to provide a new guarantee of initialization safety: as long as an object is properly constructed (meaning that a reference to the object is not published before its constructor has completed), all threads will see the values for its final fields that were set in its constructor, regardless of whether or not synchronization is used to pass the reference between threads. Further, any variables reachable through a final field of a properly constructed object, such as the fields of an object referenced by a final field, are also guaranteed to be visible to other threads. This means that if a final field holds, say, a reference to a LinkedList, not only is the correct value of the reference visible to other threads, but the contents of that LinkedList at construction time are visible to other threads without synchronization. The result is a significant strengthening of the meaning of final: final fields can be safely accessed without synchronization, and compilers can assume that final fields will not change and can therefore optimize away multiple fetches.

Final means final

Part 1 described a mechanism by which the value of a final field could appear to change under the old memory model: in the absence of synchronization, another thread could first see the default value of a final field and only later see its correct value.

Under the new memory model, there is something like a happens-before relationship between the writes to final fields in a constructor and the initial load, in another thread, of a shared reference to that object. When the constructor completes, all of the writes to final fields (and to variables reachable indirectly through those final fields) become "frozen," and any thread that obtains a reference to the object after the freeze is guaranteed to see the frozen values for all frozen fields. Writes that initialize final fields will not be reordered with operations that follow the freeze associated with the constructor.
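A small illustration of the freeze guarantee (the class is illustrative, not from the article): because both fields are final and the constructor does not let `this` escape, any thread that obtains a reference to a Point after construction is guaranteed to see the correct coordinates, with no synchronization at all.

```java
final class Point {
    // Final fields: their values (and anything reachable through
    // them) are frozen when the constructor completes, so every
    // thread that later sees this Point sees x and y correctly set.
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
        // Note: 'this' must not escape here -- no storing the
        // half-built object in a shared collection, no starting
        // threads from the constructor.
    }

    int x() { return x; }
    int y() { return y; }
}
```

Had x and y been plain (non-final) fields, a thread receiving the reference through a data race could legally observe their default value 0 before observing the constructed values.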

Conclusion

JSR 133 significantly strengthens the semantics of volatile, so that a volatile flag can be used reliably to signal that the state of the program has been changed by another thread. As a result of making volatile more "heavyweight," the performance cost of using volatile comes closer, in some cases, to the performance cost of synchronization, but the cost remains fairly low on most platforms. JSR 133 also significantly strengthens the semantics of final. If a reference to an object is not allowed to escape during construction, then once the constructor completes and a thread publishes a reference to the object, the object's final fields are guaranteed to be visible, correct, and constant to all other threads without the use of synchronization.

These changes greatly strengthen the utility of immutable objects in concurrent programs; immutable objects finally become inherently thread-safe (as they were always meant to be), even if a data race is used to pass references to the immutable object between threads.

One caveat to initialization safety is that the reference to the object must not "escape" its constructor: the constructor must not publish, directly or indirectly, a reference to the object being constructed. This includes not publishing references to nonstatic inner classes, and generally avoiding starting threads from within constructors. For a more detailed description of safe construction, see Resources.

