Java Concurrent Programming 12: The Java Memory Model

Tags: semaphore, volatile

Suppose a thread assigns a value to a variable: variable = 3;

The memory model must answer the question: "Under what conditions will a thread that reads variable see the value 3?"

This may seem obvious, but in the absence of synchronization there are many factors that can prevent a thread from seeing the result produced by another thread, either immediately or ever.

For example:

1. The compiler may generate instructions in a different order than the source code, and may keep variables in registers instead of memory;

2. The processor may execute instructions out of order or in parallel;

3. Caches may change the order in which written values are committed to main memory;

4. Values held in a processor-local cache may not be visible to other processors.

These factors can prevent a thread from seeing the most recent value of a variable and can make memory actions in other threads appear to execute out of order.

The Java Language Specification requires the JVM to maintain within-thread as-if-serial semantics: all of the above optimizations are permitted as long as the program produces the same result as it would if it were executed in a strictly serial environment.

This is a good thing, because much of the performance improvement in computing in recent years has come from these reordering techniques.

In a single-threaded environment we never notice these low-level techniques; their only visible effect is to make the program run faster.

In a multithreaded environment, however, maintaining full serialization would carry a significant performance cost. The threads in a concurrent application spend most of their time doing their own work, so ordering constraints between threads would only slow the application down without adding any benefit. Only when threads share data must their actions be coordinated, and the JVM relies on synchronization actions to determine when such coordination is required.

The Java memory model specifies the minimal guarantees the JVM must make about when writes to variables become visible to other threads. Its design reflects a tradeoff between predictability and ease of development on one hand, and the ability to implement high-performance JVMs across a wide range of mainstream processor architectures on the other.

The memory model of the platform

In a shared-memory multiprocessor architecture, each processor has its own cache that is periodically reconciled with main memory.

Different processor architectures provide different levels of cache coherence; some provide only minimal guarantees and allow different processors to see different values for the same memory location at virtually any time.

Ensuring that every processor knows what every other processor is doing at all times would be very expensive. Most of the time this information is unnecessary, so processors relax their memory-coherence guarantees in exchange for better performance.

The memory model defined by an architecture tells a program what guarantees it can expect from the memory system, and defines special instructions (called memory barriers or fences) that can be used to obtain additional ordering guarantees when data is shared. So that Java developers do not have to worry about the differences between memory models on different architectures, Java provides its own memory model, and the JVM bridges the gap between the Java memory model and the underlying platform's memory model by inserting memory barriers at the appropriate places.

Suppose there were a single, global order in which all of the program's operations execute, regardless of which processor they run on, and that every read of a variable returned the value most recently written to it in that order by any processor.

This idealized model is called sequential consistency. Developers often incorrectly assume sequential consistency, but no modern multiprocessor offers it, and neither does the JVM.

In shared-memory multiprocessors and the compilers that target them, surprising things can happen when data is shared across threads, unless memory barriers are used to prevent them. In a Java program, however, you do not need to specify where memory barriers go; you only need to use synchronization correctly to identify when shared state is being accessed.

Reordering

/**
 * @author 83921
 * Without proper synchronization, it is hard to reason about even the simplest concurrent program,
 * such as the example below.
 * It is easy to imagine the output (1,0), (0,1), or (1,1): T1 could run to completion before T2
 * starts, T2 could finish before T1 starts, or their actions could be interleaved.
 * But the output (0,0) is also possible. Because there is no data-flow dependency between the actions
 * within each thread, those actions may be reordered; and even if they execute in program order, the
 * timing of cache flushes to main memory may make T1's assignments appear to T2 in the reverse order.
 * From T2's perspective, the execution order could appear to be [x = b, b = 1, y = a, a = 1].
 * Even for this trivial example it is difficult to enumerate all possible results; memory-level
 * reordering makes program behavior unpredictable.
 * Using synchronization correctly is much easier: synchronization restricts how the compiler, the
 * runtime, and the hardware may reorder memory operations, so that the visibility guarantees provided
 * by the JVM are never broken by reordering.
 */
public class Demo {

    static int x = 0, y = 0;
    static int a = 0, b = 0;

    public static void main(String[] args) throws InterruptedException {

        Thread t1 = new Thread(new Runnable() {
            public void run() {
                a = 1;
                x = b;
            }
        });

        Thread t2 = new Thread(new Runnable() {
            public void run() {
                b = 1;
                y = a;
            }
        });

        t1.start();
        t2.start();

        t1.join();
        t2.join();

        System.out.println(x + "--" + y);
    }
}

The Java Memory Model

The Java memory model is defined in terms of actions, including reads and writes of variables, locking and unlocking of monitors, and starting and joining of threads.

The JMM defines a partial ordering over all actions in a program, called happens-before. To guarantee that the thread executing action B can see the result of action A (whether or not A and B occur in the same thread), there must be a happens-before relationship between A and B. If this relationship is absent, the JVM is free to reorder them as it pleases.

When a variable is read by multiple threads and written by at least one of them, and the reads and writes are not ordered by happens-before, a data race exists. A correctly synchronized program has no data races and exhibits sequential consistency, meaning that all actions in the program appear to happen in a fixed, global order.

Happens-before rules:

1. Program order rule: each action in a thread happens-before every action in that thread that comes later in the program order.

2. Monitor lock rule: an unlock of a monitor lock happens-before every subsequent lock of that same monitor lock.

3. Volatile variable rule: a write to a volatile variable happens-before every subsequent read of that same variable. (A sketch illustrating this rule follows the list.)

4. Thread start rule: a call to Thread.start on a thread happens-before every action in the started thread.

5. Thread termination rule: every action in a thread happens-before any other thread detects that the thread has terminated, whether by returning successfully from Thread.join or by Thread.isAlive returning false.

6. Interruption rule: a thread calling interrupt on another thread happens-before the interrupted thread detects the interrupt, either by having InterruptedException thrown or by calling isInterrupted or interrupted.

7. Finalizer rule: the completion of an object's constructor happens-before the start of its finalizer.

8. Transitivity: if A happens-before B and B happens-before C, then A happens-before C.
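As a minimal sketch of the volatile variable rule combined with the program order rule (the class and field names here are illustrative, not from the original text): the write to number happens-before the write to the volatile flag ready, which happens-before the read of ready in the reader thread, so once the reader sees ready == true it is also guaranteed to see number == 42.

public class VolatileFlagExample {

    private static volatile boolean ready = false;   // volatile guard flag
    private static int number = 0;                   // ordinary, non-volatile data

    public static void main(String[] args) {
        Thread reader = new Thread(new Runnable() {
            public void run() {
                while (!ready) {
                    Thread.yield();                  // spin until the volatile flag is set
                }
                // Volatile rule + program order rule + transitivity guarantee visibility of 42 here.
                System.out.println(number);
            }
        });
        reader.start();

        number = 42;                                 // ordinary write, before the volatile write
        ready = true;                                // volatile write publishes the data
    }
}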

Although happens-before is only a partial ordering, synchronization actions, such as lock acquisition and release and reads and writes of volatile variables, are totally ordered.

This is why happens-before relationships can be described in terms of a "subsequent" lock acquisition or a "subsequent" read of a volatile variable.

When two threads synchronize on the same lock, a happens-before relationship is established between them.

All of the actions within thread A are ordered by the program order rule, as are the actions within thread B. Because A releases lock M and B subsequently acquires M, all of A's actions before releasing the lock are ordered before all of B's actions after acquiring it. If the two threads synchronize on different locks, nothing can be inferred about the ordering of their actions, because there is no happens-before relationship between them.
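A minimal sketch of this situation, assuming a simple counter guarded by one lock (the names are illustrative, not from the original text):

public class SyncCounter {

    private final Object lock = new Object();   // the shared monitor lock "M"
    private int value;                          // guarded by 'lock'

    // Thread A: everything done before releasing the lock...
    public void increment() {
        synchronized (lock) {
            value++;
        }
    }

    // ...is visible to thread B after it acquires the same lock.
    public int get() {
        synchronized (lock) {
            return value;
        }
    }
}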

Piggybacking on synchronization

The happens-before program order rule can be combined with one of the other ordering rules (usually the monitor lock rule or the volatile variable rule) to order accesses to a variable that is not itself guarded by a lock.

The AbstractQueuedSynchronizer (AQS) used inside FutureTask illustrates how to apply this "piggybacking" technique.

AQS maintains an integer that represents the state of the synchronizer, and FutureTask uses this integer to hold the state of the task. FutureTask also maintains additional variables, such as the result of the computation.

When one thread calls set to save the result and another thread calls get to retrieve it, the two had better be ordered by happens-before. This could be achieved by declaring the reference to the result as volatile, but it is easier to obtain the same effect by piggybacking on the existing synchronization.

import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

/**
 * @author 83921
 * FutureTask is designed so that a successful call to tryReleaseShared always happens before
 * tryAcquireShared. tryReleaseShared writes a volatile variable, and tryAcquireShared reads it.
 * The innerSet and innerGet methods are called when the result is saved and fetched.
 * Because innerSet writes result before calling releaseShared (which in turn calls tryReleaseShared),
 * and innerGet reads result after calling acquireShared (which in turn calls tryAcquireShared),
 * the write of result in innerSet is guaranteed to happen-before the read of result in innerGet.
 * (This is a simplified excerpt; the AQS tryAcquireShared/tryReleaseShared overrides of the real
 * FutureTask are omitted here.)
 */
public class FutureTask<V> {

    private final class Sync extends AbstractQueuedSynchronizer {

        private static final int RUNNING = 1;
        private static final int RAN = 2;
        private static final int CANCELLED = 4;

        private V result;
        private Exception exception;

        // Helper so this excerpt is self-contained: has the task already completed or been cancelled?
        private boolean ranOrCancelled(int state) {
            return (state & (RAN | CANCELLED)) != 0;
        }

        void innerSet(V v) {
            while (true) {
                int s = getState();
                if (ranOrCancelled(s)) {
                    return;
                }
                if (compareAndSetState(s, RAN)) {
                    break;
                }
            }
            result = v;
            releaseShared(0);
            done();
        }

        V innerGet() throws InterruptedException, ExecutionException {
            acquireSharedInterruptibly(0);
            if (getState() == CANCELLED) {
                throw new CancellationException();
            }
            if (exception != null) {
                throw new ExecutionException(exception);
            }
            return result;
        }
    }

    // Completion hook called when the task finishes; a no-op by default.
    protected void done() { }
}

Other happens-before orderings guaranteed by the class library include:

1. Placing an item into a thread-safe collection happens-before another thread retrieves that item from the collection.

2. Counting down a CountDownLatch happens-before a thread returns from await on that latch. (A sketch follows this list.)

3. Releasing a permit to a Semaphore happens-before acquiring a permit from that same semaphore.

4. Actions taken by the task represented by a Future happen-before another thread returns from Future.get.

5. Submitting a Runnable or Callable to an Executor happens-before the task begins execution.

6. A thread arriving at a CyclicBarrier or Exchanger happens-before the other threads are released from that barrier or exchange point. If the CyclicBarrier uses a barrier action, arriving at the barrier happens-before the barrier action, which in turn happens-before the threads are released from the barrier.
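As a minimal sketch of the CountDownLatch guarantee in item 2 (the class and field names are illustrative, not from the original text): the write to data in the worker thread happens-before countDown, which happens-before the main thread's return from await, so the main thread is guaranteed to see the value even though data is not volatile.

import java.util.concurrent.CountDownLatch;

public class LatchVisibility {

    private static int data = 0;                                       // ordinary, non-volatile field
    private static final CountDownLatch latch = new CountDownLatch(1);

    public static void main(String[] args) throws InterruptedException {
        new Thread(new Runnable() {
            public void run() {
                data = 42;                // this write happens-before countDown()...
                latch.countDown();
            }
        }).start();

        latch.await();                    // ...which happens-before returning from await()
        System.out.println(data);         // guaranteed to print 42
    }
}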

Unsafe publication

When a happens-before relationship is missing, reordering problems can occur; this explains why publishing an object without adequate synchronization can allow another thread to see a partially constructed object.

Initializing a new object involves writing several variables, namely the fields of the new object. Publishing a reference also involves writing a variable, namely the reference to the new object.

If you do not ensure that publishing the shared reference happens-before another thread loads that reference, then the write of the reference to the new object can be reordered (from the perspective of the consuming thread) with the writes of the object's fields.

In that case, another thread could see an up-to-date value for the object reference but stale (default) values for some or all of the object's state, that is, a partially constructed object.

/**
 * @author 83921
 * The problem here is not only the race condition (which might be tolerable if every Resource instance
 * were interchangeable). Even setting that aside, this publication is unsafe, because another thread
 * could see a reference to a partially constructed Resource.
 *
 * Suppose T1 is the first thread to call getInstance: it sees resource as null, instantiates a new
 * Resource, and sets resource to point to it. When T2 later calls getInstance, it may see a non-null
 * resource and simply use the already constructed instance. However, there is no happens-before
 * relationship between T1's write of resource and T2's read of resource.
 *
 * When the new Resource is allocated, its constructor changes each field of the new instance from its
 * default value to its initial value. Because the two threads do not use synchronization, T2 may see
 * T1's actions in a different order than T1 performed them: T2 may observe the write to resource
 * occurring before the writes to the Resource's fields, and thus see a partially constructed Resource
 * in an invalid state.
 */
public class Resource {

    private static Resource resource;

    public static Resource getInstance() {
        if (resource == null) {
            resource = new Resource();   // unsafe publication
        }
        return resource;
    }
}

With the exception of immutable objects, it is generally not safe to use an object that was initialized by another thread unless the publication of the object happens-before the consuming thread uses it.

Safe publication: in the example above, getInstance must be made synchronized; using synchronization resolves the issue.
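A minimal sketch of that fix, reusing the Resource class from the example above with getInstance declared synchronized:

public class Resource {

    private static Resource resource;

    // Synchronizing the accessor establishes a happens-before relationship between the thread that
    // constructs the Resource and every thread that subsequently reads the reference.
    public static synchronized Resource getInstance() {
        if (resource == null) {
            resource = new Resource();
        }
        return resource;
    }
}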

The JVM treats static fields initialized in static initializers (or in static initializer blocks) specially and provides additional thread-safety guarantees.

Static initializers are run by the JVM during the class's initialization phase, after the class is loaded but before it is used by any thread.

Because the JVM acquires a lock during class initialization, and every thread acquires that lock at least once to ensure the class has been initialized, memory writes made during static initialization are automatically visible to all threads.

Therefore, statically initialized objects require no explicit synchronization, either during construction or when they are referenced.

However, this applies only to the state as of the time of construction: if the object is mutable, synchronization between reading and writing threads is still required to make subsequent modifications visible and to avoid data corruption.
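As a minimal sketch of this caveat (the class and field names are illustrative, not from the original text): the entries placed in the map during static initialization are visible to all threads without further synchronization, but any later modification is ordinary mutable state and still needs its own synchronization, provided here by synchronized accessors.

import java.util.HashMap;
import java.util.Map;

public class StaticallyInitializedState {

    // Writes performed here, during static initialization, are visible to every thread.
    private static final Map<String, String> settings = new HashMap<String, String>();
    static {
        settings.put("mode", "default");
    }

    // Subsequent modifications still require synchronization.
    public static synchronized void update(String key, String value) {
        settings.put(key, value);
    }

    public static synchronized String read(String key) {
        return settings.get(key);
    }
}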

Combining this property of static initializers with the JVM's lazy class loading yields a lazy initialization technique.

/**
 * @author 83921
 * Lazy initialization holder class idiom: a dedicated class is used to initialize the Resource.
 * The JVM defers initializing the ResourceHolder class until it is actually used, and because the
 * Resource is created in a static initializer, no additional synchronization is needed.
 * The first call to getResource by any thread causes ResourceHolder to be loaded and initialized,
 * at which point the static initializer creates the Resource.
 */
public class ResourceFactory {

    private static class ResourceHolder {
        public static Resource resource = new Resource();
    }

    public static Resource getResource() {
        return ResourceHolder.resource;
    }
}

Notes based on "Java Concurrency in Practice".
