Java Memory Model and Threads (repost)

Source: Internet
Author: User
Tags: visibility

Java Memory Model and Threads

Reference

http://baike.baidu.com/view/8657411.htm

http://developer.51cto.com/art/201309/410971_all.htm

http://www.cnblogs.com/skywang12345/p/3447546.html

A modern CPU's computing power is enormous: its speed is several orders of magnitude higher than that of memory and the other storage and communication subsystems.

Even after data has been loaded into memory, the processor spends most of its time waiting for data to come back from disk I/O, network traffic, and database access.

Why a Java memory model is required

The purpose is simple: make full use of the computer's computing, storage, and communication capacity so that it does as much useful work as possible, ideally keeping CPU utilization above 90%.

Starting from the physical computer:
"Do more work" means letting the computer do several things at once, i.e. executing multiple computational tasks concurrently.
Why is concurrency possible at all?
The CPU is several orders of magnitude faster than memory, and disk and network are slower still.
A large part of the processor's time is therefore spent fetching data from memory, disk, or the network rather than computing.
That waiting time can be used to handle other tasks, which is why modern computers and operating systems are multi-tasking systems.

The contradiction: the CPU processes data more than three orders of magnitude faster than it can access main memory.

The remedy: introduce caches. Small, fast caches are placed close to the CPU; frequently used data is copied from memory into the cache so the CPU can keep computing with as little waiting as possible, and the results are written back to memory when the operation completes.

A cache-based architecture bridges the speed gap between the CPU and memory.

The general abstract architecture of a modern multiprocessor looks like this:

Data exchanged between processors passes through a shared main-memory area.
This is one model of communication between concurrent entities: shared memory.
There is also a message-passing model (semaphores?).

A new problem: if a variable in the shared memory area (a shared variable) is read and written concurrently by multiple processors, the data can become inconsistent:
a processor may fetch a stale value, and when values are written back it is unclear which processor's cached copy should win.

In addition, the CPU performs out-of-order execution optimizations on the instruction stream, so the order in which statements appear in the program is not necessarily the order in which they execute.

Remedy: cache coherence protocols (each hardware platform has its own implementation: MSI, MESI, MOSI, the Dragon protocol, etc.)

Since the JVM is a virtual computer, it too must be able to process tasks concurrently.

Hence the JMM: the Java memory model exists to hide the memory-access differences of the various hardware platforms and operating systems, so that Java programs get efficient, correct, and consistent concurrent behaviour on every platform.
By analogy with the multi-processor (multi-core) memory model, the abstract diagram of the JMM is as follows:

Analogy:

Multi-core machine            JMM                                    Backed by
A processor                   A thread                               OS-level thread (lightweight process)
A cache                       Working memory                         JVM stacks, registers, caches
Cache coherence protocol      Multithreading synchronization rules   OS-level control
Main memory                   Main memory                            Java heap, physical memory

The Java memory model defines the rules that govern how variables are copied between each thread's working memory and main memory and how those copies interact.
Variables here means instance fields, static fields, and the elements that make up arrays (not local variables or method parameters, which are always thread-private).

The JMM divides the memory used by multiple threads into shared main memory and thread-private working memory, and stipulates the following (a minimal visibility sketch follows):
All variables are stored in main memory; variables are created and destroyed there.
A thread must copy a variable from main memory into its working memory before using it, and must not read or write the variable in main memory directly.
A thread can only see the copies in its own working memory; data exchange between threads must go through main memory — the shared-memory style of communication.
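
A minimal sketch (the class and field names are illustrative, not from the article) of what thread-private working memory can mean in practice: a plain field written by one thread may never be observed by another thread that keeps spinning on its own copy.

    public class VisibilityDemo {
        private static boolean stop = false;   // plain field: no visibility guarantee

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                while (!stop) {
                    // busy loop; the JIT may keep 'stop' in a register,
                    // so the update below might never be seen here
                }
                System.out.println("worker observed stop = true");
            });
            worker.start();

            Thread.sleep(1000);
            stop = true;                        // reaches main memory eventually,
            worker.join();                      // but the worker may still loop forever
        }
    }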

The 8 operations on variables in the JMM and their rules

The interaction between working memory and main memory is defined by 8 kinds of operations (JSR-133):

Operation   Acts on          Effect
lock        main memory      marks a variable as exclusively owned by one thread
unlock      main memory      releases a variable that a thread has locked
read        main memory      transfers the value of a variable to the thread's working memory
load        working memory   puts the value obtained by read into the working-memory copy of the variable
use         working memory   passes the working-memory value to the execution engine
assign      working memory   writes a value received from the execution engine into the working-memory copy
store       working memory   transfers the working-memory value to main memory
write       main memory      puts the value obtained by store into the main-memory variable


The interaction rules attached to these 8 operations (worth memorizing; an annotated sketch follows the list):

    • read & load and store & write must appear in pairs: neither member of a pair may occur alone, although other operations may be interleaved between them.
    • A thread may not discard its most recent assign: a variable that has been changed in working memory must eventually be stored and written back to main memory.
    • A new variable can only be created in main memory; before use or store is applied to a variable, a corresponding load or assign must already have been performed on it.
    • A variable may be locked by only one thread at a time, but the same thread may lock it repeatedly; it must then perform the same number of unlock operations before the variable is fully released.
    • Performing lock on a variable clears its copy from working memory, so a load or assign must re-initialize the value before the execution engine uses it.
    • A thread may not unlock a variable that has not been locked.
    • Before performing unlock on a variable, it must be synchronized back to main memory, i.e. store and write must execute first.
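
A conceptual sketch only: the JVM never exposes these operations to Java code, so the comments below merely annotate which abstract JMM operation each step of executing "race = race + 1" corresponds to (the class and field names are illustrative).

    public class OperationSketch {
        static int race = 0;

        static void increment() {
            // read   : main memory transfers the value of race toward this thread
            // load   : the value is placed into the working-memory copy of race
            // use    : the copy is handed to the execution engine for the + 1
            // assign : the engine's result is written into the working-memory copy
            // store  : the copy's value is transferred toward main memory
            // write  : main memory updates race with that value
            race = race + 1;
        }

        public static void main(String[] args) {
            increment();
            System.out.println(race);   // prints 1
        }
    }
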
The special volatile variable

A variable can be declared volatile, for example: public static volatile int race = 0;

So what is the semantics of volatile?

    • The most lightweight synchronization mechanism the JVM provides
    • Guarantees that the variable is visible to all threads: every thread immediately sees the latest write
    • Forbids instruction-reordering optimizations on accesses to this variable (preserving within-thread as-if-serial semantics)

In terms of the operations above: for a volatile variable, the JVM must refresh the value from main memory (read, load) before every use, and every assign must immediately be followed by store and write so that the new value reaches main memory at once. This guarantees that other threads can see the current thread's changes. The JVM also inserts memory barrier (fence) instructions so that assignments to the variable occur in program order, i.e. they are not reordered by optimization.

The use and assign of ordinary variables carry no such "before every use" and "immediately after" constraints, which is why stale values can be observed and behaviour can be confusing.

Volatile is usually cheaper than synchronized. The following scenarios are good candidates for volatile (a sketch follows the list); everything else should be protected with synchronized or similar:

    • The variable is written by only a single thread and read by other threads
    • Writes to the variable do not depend on its current value
    • The variable does not participate in invariants together with other state variables
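
A sketch of the recommended scenario (the names are illustrative): a single writer flips a flag whose new value does not depend on its old value, and readers only need to see the change promptly.

    public class ShutdownFlag {
        private volatile boolean shutdownRequested = false;

        public void requestShutdown() {      // called by one controlling thread
            shutdownRequested = true;        // the assign is immediately stored/written back
        }

        public void workLoop() {             // called by worker threads
            while (!shutdownRequested) {     // every use re-reads from main memory
                doWork();
            }
        }

        private void doWork() { /* ... */ }
    }
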
Three features of the Java memory model

The operations and rules above are really the JMM's way of providing three basic guarantees for concurrent processing:

Atomicity

What is atomicity? An operation, or a series of operations, either executes completely or does not execute at all.
Atomicity guarantees that when you read a variable you get either its initial value or a value completely written by some thread, never a mixture produced by two or more threads writing at the same time (that is, all the bits of the value you read were written by a single thread).
Reads and writes of the basic data types are atomic in the JVM (leaving aside the non-atomic treatment of long and double).
A wider scope of atomicity is obtained by wrapping the operations with the synchronized keyword.
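
A sketch of why this matters (class name and thread counts are arbitrary): volatile gives visibility but not atomicity, so race++ — a read-modify-write — still loses updates; wrapping the increment in synchronized restores atomicity.

    public class AtomicityDemo {
        private static volatile int race = 0;

        private static synchronized void safeIncrement() { race++; }

        public static void main(String[] args) throws InterruptedException {
            Thread[] threads = new Thread[20];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    for (int j = 0; j < 10_000; j++) {
                        race++;              // not atomic: usually prints less than 200000
                        // safeIncrement();  // atomic alternative
                    }
                });
                threads[i].start();
            }
            for (Thread t : threads) {
                t.join();
            }
            System.out.println(race);
        }
    }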

Memory visibility

Visibility means that when one thread modifies the value of a shared variable, other threads can immediately see the change: after an assign, the new value is promptly stored and written to main memory, and before a use the value is re-read and loaded from main memory.
Three ways to obtain it: volatile, synchronized (before unlock the variable must be stored and written back), and final.

Ordering

Within a single thread, all operations appear ordered: within-thread as-if-serial semantics.

Observed from one thread, the operations of another thread are unordered (they cannot be perceived directly): the causes are instruction reordering and the delay in synchronizing working memory with main memory.

Both volatile and synchronized provide ordering guarantees (a variable can be locked by only one thread at a time, so synchronized blocks guarding the same lock can only be entered serially). A small reordering sketch follows.
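
A sketch of why cross-thread ordering matters (names are illustrative): without volatile on 'ready', the writer's two assignments may be reordered or become visible out of order, so the reader could in principle observe ready == true while data is still 0; declaring 'ready' volatile forbids that.

    public class OrderingDemo {
        private static int data = 0;
        private static volatile boolean ready = false;   // remove volatile to lose the guarantee

        public static void main(String[] args) {
            new Thread(() -> {
                data = 42;       // happens-before the volatile write below
                ready = true;    // volatile write: may not be reordered before 'data = 42'
            }).start();

            new Thread(() -> {
                while (!ready) { /* spin until the volatile write becomes visible */ }
                System.out.println(data);   // guaranteed to print 42
            }).start();
        }
    }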

The happens-before principle

Besides the three keywords volatile, synchronized, and final that provide the three guarantees above, the JVM also supplies a set of ordering guarantees by default; without them we would have to scatter those keywords everywhere.

These default rules are called the happens-before principle, and they are the primary basis for judging whether there is a data race and whether code is thread-safe.

What happens-before means: the results of operation A (modifying a variable, sending a message, invoking a method) can be observed by operation B. It has very little to do with wall-clock time.

Happens-before is about one operation's effects being knowable to another; when the observed order is inconsistent with program order, it is likely that the JVM has reordered the instructions as an optimization.

The main happens-before rules (a sketch using two of them follows the list):

    • Program order rule: within a single thread, operations that come earlier in program (control-flow) order happen-before operations that come later
    • Monitor lock rule: an unlock of a lock happens-before every subsequent lock of the same lock
    • Volatile variable rule: a write to a volatile variable happens-before every subsequent read of that variable
    • Thread start rule: the start() call on a Thread object happens-before every action of the started thread
    • Thread termination rule: every action of a thread happens-before another thread's detection that it has terminated, e.g. before Thread.join() returns or Thread.isAlive() returns false
    • Thread interruption rule: the call to Thread.interrupt() happens-before the interrupted thread's detection of the interrupt, e.g. via Thread.interrupted()
    • Object finalization rule: the completion of an object's construction (the end of its constructor) happens-before the start of its finalize() method
    • Transitivity: if A happens-before B and B happens-before C, then A happens-before C
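
A sketch of the thread start and termination rules (names are illustrative): writes made before start() are visible inside the child thread, and the child's writes are visible to the parent after join() returns, with no volatile or locks needed.

    public class HappensBeforeDemo {
        private static int input;
        private static int result;

        public static void main(String[] args) throws InterruptedException {
            input = 10;                          // happens-before the child's actions (start rule)

            Thread child = new Thread(() -> result = input * 2);
            child.start();
            child.join();                        // the child's actions happen-before join() returning

            System.out.println(result);          // guaranteed to print 20
        }
    }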

Wall-clock order and happens-before order have little to do with each other; concurrency safety must be judged by the happens-before principle, not by timing.

How Java threads are implemented

A thread is a scheduling and execution unit more lightweight than a process.

In Java the core thread operations are declared native, because they are platform-dependent and most efficiently implemented by the underlying OS.

Thread implementations differ in who is responsible for scheduling and switching threads:
Kernel-level threading, where the OS kernel schedules lightweight processes (kernel threads): a 1:1 mapping of lightweight process to kernel thread
User-level threading, where threads are managed entirely in user space: 1:N, one kernel entity to many user threads
Hybrid user threads plus lightweight processes: M:N, many-to-many

The Windows and Linux versions of the Sun JDK use the platform's OS-level threads, i.e. the one-to-one lightweight-process model.

Concurrency models compared: a single process with many threads (the JVM), many single-threaded processes (PHP), and many processes each with many threads.

Thread scheduling

Scheduling is the process by which the system assigns processor time to threads.

Cooperative scheduling:

The thread itself controls how long it runs; when it finishes its task it must actively notify the system to switch to another thread, so execution time is unpredictable and the approach is unstable.

Preemptive scheduling:

The OS allocates execution time to threads and performs the switches, so execution time is relatively controllable. Java threads are scheduled this way; thread priorities are only hints that are mapped onto the OS's thread priorities.

Thread state transitions

Java defines six thread states: NEW, RUNNABLE, WAITING, TIMED_WAITING, BLOCKED, TERMINATED (a small sketch follows).
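
A sketch that observes some of the six Thread.State values (the class name is illustrative); the exact interleaving is timing-dependent, so treat the printed states as typical rather than guaranteed.

    public class ThreadStateDemo {
        public static void main(String[] args) throws InterruptedException {
            Object lock = new Object();
            Thread t = new Thread(() -> {
                synchronized (lock) { /* blocks until main releases the lock */ }
            });

            System.out.println(t.getState());     // NEW

            synchronized (lock) {
                t.start();
                Thread.sleep(100);
                System.out.println(t.getState()); // likely BLOCKED (waiting for the monitor)
            }

            t.join();
            System.out.println(t.getState());     // TERMINATED
        }
    }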

http://my.oschina.net/mingdongcheng/blog/139263

http://my.oschina.net/jingxing05/blog/275334
