Come on, take a look: the Java memory model (JMM)

Source: Internet
Author: User
Tags: abstract definition, visibility, volatile

There are many articles about the Java memory model on the web, and it is also covered in books such as "Understanding the Java Virtual Machine" and "The Art of Java Concurrent Programming". Yet many people still do not understand it after reading, and some say they come away even more confused. This article gives an overall introduction to the Java memory model with a simple goal: after reading it, you should know exactly what the Java memory model is, why it exists, and what problems it solves.

Why do we need a memory model

Before introducing the Java memory model, let's first look at the computer memory model, and then see what the Java memory model builds on top of it. To explain the computer memory model, we need to go back a little in history and see why memory models exist at all.

The memory model is an old concept, closely tied to computer hardware. So let me first explain what it has to do with the hardware.

CPU and cache consistency

As we all know, when a computer runs a program, every instruction is executed in the CPU, and executing instructions inevitably involves reading and writing data. That data lives in main memory, i.e. the computer's physical RAM.

At first all was well, but as CPU technology advanced, CPUs executed faster and faster. Memory technology did not improve nearly as much, so the gap between the speed of reading and writing main memory and the speed of CPU execution kept growing, which meant the CPU wasted a lot of time waiting on every memory operation.

It is like a start-up: at first the founders and employees work together happily, but as the founders' abilities and ambitions grow, a gap opens up between them and the staff, and ordinary employees can no longer keep up with the CEO. Every command from the boss takes a long time to execute at the grassroots level, because the staff lack the understanding and ability to carry it out. This silently drags down the efficiency of the whole company.

But we cannot stop developing CPU technology just because memory reads and writes are slow, nor can we let memory become the bottleneck of the whole computer.

So one good solution is to add a cache between the CPU and main memory. Everyone knows the concept of a cache: it keeps a copy of the data. Its characteristics are high speed, small capacity, and high cost.

Then, the process of executing the program becomes:

When the program runs, the data needed by an operation is copied from main memory into the CPU cache; the CPU then reads and writes that data directly in its cache, and when the operation finishes, the cached data is flushed back to main memory.

Back at the company, middle managers are introduced, reporting directly to the CEO. The CEO just tells the managers what to do and then gets on with his own work; the managers coordinate the work of the front-line staff, because they know their own people and their responsibilities well. So for most decisions, announcements, and so on, the CEO only needs to communicate with the managers.

And as CPUs grew more capable, a single layer of cache gradually became insufficient, and multi-level caches emerged.

By access order and closeness to the CPU, the cache is divided into a level-one cache (L1) and a level-two cache (L2); some high-end CPUs also have a level-three cache (L3). All the data held at each level is a subset of the data held at the next level.

The technical difficulty and manufacturing cost of these three caches decrease from L1 to L3, so their capacities increase in the same order.

Then, with the multi-level cache, the execution of the program becomes:

When the CPU wants to read a piece of data, it first looks in the L1 cache; if it is not found, it looks in the L2 cache; if it is still not found, it looks in the L3 cache, and finally in main memory.
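The lookup order above can be sketched as a simple cascade. This is only an illustrative model; the class, maps, and addresses are invented for the example, since real caches are hardware, not Java collections:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the L1 -> L2 -> L3 -> main memory lookup order.
// Real caches are hardware; the maps and addresses here are invented.
public class CacheLookup {
    static Map<String, String> l1 = new HashMap<>();
    static Map<String, String> l2 = new HashMap<>();
    static Map<String, String> l3 = new HashMap<>();
    static Map<String, String> mainMemory = new HashMap<>();

    static String read(String addr) {
        if (l1.containsKey(addr)) return l1.get(addr); // fastest level, checked first
        if (l2.containsKey(addr)) return l2.get(addr);
        if (l3.containsKey(addr)) return l3.get(addr);
        String value = mainMemory.get(addr);           // slowest, the last resort
        l1.put(addr, value);                           // keep a copy for the next read
        return value;
    }

    public static void main(String[] args) {
        mainMemory.put("0x10", "42");
        System.out.println(read("0x10")); // misses every cache level, hits main memory
        System.out.println(read("0x10")); // now served from L1
    }
}
```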

As the company grows bigger, the boss has more and more to manage, so the management structure is reformed into senior, middle, and junior managers, each level managing the one below it.

A single-core CPU contains one set of L1, L2, and L3 caches. If the CPU contains multiple cores, i.e. a multi-core CPU, each core gets its own L1 (and often L2) cache, while the cores share the L3 (or L2) cache.

Companies come in many forms: some have a single big boss who decides everything alone, while others have co-general managers, partners, and similar arrangements.

A single-core CPU is like a company with only one boss: all orders come from him, so one management team is enough.

A multi-core CPU is like a company co-founded by several partners: each partner needs his own team of senior managers under his direct leadership, while the partners share the company's front-line employees.

Some companies keep growing and spin off subsidiaries. Each subsidiary is like a separate CPU, sharing no resources with the others and not affecting them.

(Figure: cache structure of a single-CPU, dual-core chip.)


As computers grew more capable, they began to support multithreading, and that is where the problems start. Let us analyze single-threaded and multi-threaded execution on single-core and multi-core CPUs separately.

Single thread. The core's cache is accessed by only one thread, so the cache is exclusive to it and there are no access conflicts.

Single-core CPU, multiple threads. Multiple threads in a process access the process's shared data, and when the CPU loads a block of memory into the cache, different threads accessing the same physical address map to the same cache location, so the cache does not become invalid across thread switches. And since only one thread executes at any moment, there are no cache access conflicts.

Multi-core CPU, multiple threads. Each core has at least an L1 cache. When multiple threads access a piece of shared memory in the process and run on different cores, each core keeps its own cached copy of that shared memory. Because the cores run in parallel, multiple threads may write their own caches at the same time, and the copies in the different caches can diverge.

So adding caches between the CPU and main memory introduces a potential cache consistency problem in multi-threaded scenarios: on a multi-core CPU, the copies of the same data held in each core's cache may become inconsistent.

If the company's orders are issued serially, there is no problem.

If the company's orders are issued in parallel but all come from the same CEO, there is still no problem, because his orders flow through a single management chain.

But if the company's orders are issued in parallel by multiple partners, problems arise. Each partner gives orders only to his own managers, yet the front-line employees those managers manage may be shared.

For example, partner 1 wants to dismiss employee A, and partner 2 wants to promote him; once promoted, dismissing him would require a resolution of all the partners. The two partners each issue their order to their own managers. After partner 1's order goes out, manager 1 dismisses the employee, so he knows employee A is gone. Manager 2 has not heard the news yet, still believes employee A is in service, and happily accepts partner 2's order to promote him.

Processor optimization and instruction reordering

As mentioned above, adding caches between the CPU and main memory creates a cache consistency problem in multi-threaded scenarios. But there is another hardware problem that matters just as much: to keep the execution units inside the processor as busy as possible, the processor may execute the input code out of order. This is processor optimization.

Besides the out-of-order optimization found in many popular processors, many programming language compilers perform similar optimizations; for example, the just-in-time (JIT) compiler of the Java Virtual Machine also reorders instructions.

You can imagine that if we let the processor optimize freely and the compiler reorder instructions at will, all sorts of problems could arise.

In the staffing analogy: if the HR department were allowed to arbitrarily split up or reorder the orders it receives, the impact on the employees and the company would be enormous.

Problems with concurrent programming

You may have only vaguely heard of the hardware concepts above and not know what they have to do with software. But you probably know something about concurrency problems, such as atomicity, visibility, and ordering.

In fact, atomicity, visibility, and ordering are abstractions people have defined; underneath these abstractions lie exactly the cache consistency, processor optimization, and instruction reordering problems mentioned earlier.

Here is a brief review of these three properties, without going deeper; interested readers can study them further on their own. In concurrent programming, to keep data safe, the following three properties must hold:

Atomicity means that within one operation the CPU may not pause in the middle and then be rescheduled: the operation either does not execute at all, or runs to completion without interruption.

Visibility means that when multiple threads access the same variable and one thread modifies its value, the other threads can immediately see the modified value.

Ordering means that the program executes in the order the code was written.
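To make the atomicity problem concrete, here is a deterministic, hand-interleaved sketch of why `count++` is not atomic. The "thread A"/"thread B" reads are simulated in a single thread purely for illustration; the class and method names are invented for this example:

```java
public class LostUpdate {
    static int count;

    // Deterministic simulation of a lost update: count++ is really three
    // steps (read, add one, write back), so when two threads interleave,
    // one increment can be lost. The "threads" here are simulated.
    static int simulate() {
        count = 0;
        int threadARead = count;   // "thread A" reads 0
        int threadBRead = count;   // "thread B" also reads 0 before A writes back
        count = threadARead + 1;   // A writes back 1
        count = threadBRead + 1;   // B overwrites with 1: A's increment is lost
        return count;
    }

    public static void main(String[] args) {
        System.out.println(simulate()); // prints 1, not the expected 2
    }
}
```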

Notice that the cache consistency problem is really a visibility problem, processor optimization can cause atomicity problems, and instruction reordering causes ordering problems. So from here on, instead of the hardware-level concepts, we will use the familiar terms atomicity, visibility, and ordering.

What is a memory model

As mentioned earlier, cache consistency problems and processor-level instruction reordering are products of ever-advancing hardware. So, is there a mechanism that solves these problems well?

The simplest and most direct approach would be to abandon processor optimization techniques and CPU caches altogether and let the CPU interact directly with main memory. That would indeed avoid the concurrency problems of multithreading, but it would throw away far too much performance.

Therefore, to guarantee atomicity, visibility, and ordering in concurrent programming, an important concept was introduced: the memory model.

To guarantee the correctness of shared memory (visibility, ordering, atomicity), the memory model defines a specification for the behavior of multi-threaded reads and writes in a shared memory system. These rules standardize memory reads and writes so that instruction execution remains correct. The memory model is related to the processor, the caches, concurrency, and the compiler. It solves the memory-access problems caused by multi-level CPU caches, processor optimization, instruction reordering, and so on, thereby guaranteeing consistency, atomicity, and ordering in concurrent scenarios.

A memory model solves concurrency problems mainly in two ways: limiting processor optimizations and using memory barriers. This article does not expand on the underlying principles; interested readers can study them on their own.

What is the Java memory model

The computer memory model described above is an important specification for solving concurrency problems in multi-threaded scenarios. How it is implemented, however, can differ between programming languages.

We know that Java programs run on the Java Virtual Machine. The Java memory model (JMM) is a memory model specification that shields the differences in memory access across hardware and operating systems, so that a Java program sees consistent memory-access behavior on every platform.

When people refer to the Java memory model, they generally mean the new memory model introduced in JDK 5, described mainly in JSR-133: Java Memory Model and Thread Specification. Interested readers can consult the PDF (http://www.cs.umd.edu/~pugh/java/memoryModel/jsr133.pdf).

The Java memory model specifies that all variables are stored in main memory, and that each thread has its own working memory, which holds a copy of the main-memory variables used by that thread. All of a thread's operations on a variable must happen in its working memory; it cannot read or write main memory directly. Different threads cannot access each other's working memory; passing a variable's value between threads requires synchronizing the data between their working memory and main memory.

The JMM governs this synchronization between working memory and main memory: it specifies how the data synchronization is done and when it happens.


The main memory and working memory here can loosely be compared to the main memory and cache of the computer memory model. Note, however, that they are not the same level of memory partitioning as the Java heap, stack, and method area in the JVM's runtime data areas, and they cannot be mapped to them directly. According to "Understanding the Java Virtual Machine", if one insists on a rough correspondence, then in terms of variables, main memory mostly corresponds to the object instance data in the Java heap, while working memory corresponds to parts of the virtual machine stack.

So, to summarize: the JMM is a specification that addresses the problems caused by inconsistent local-memory copies when multiple threads communicate through shared memory, by the compiler reordering code, and by the processor executing code out of order.

Implementation of the Java memory model

Readers familiar with Java multithreading know that Java provides a series of concurrency-related keywords and libraries, such as volatile, synchronized, final, and the concurrent package. These are in fact the keywords the Java memory model exposes to programmers after encapsulating the underlying implementation.

When writing multi-threaded code, we can simply use keywords such as synchronized to control concurrency, without ever worrying about underlying compiler optimizations, cache consistency, and the like. So besides defining a specification, the Java memory model provides a set of primitives that encapsulate the underlying implementation for developers to use directly.

This article will not introduce the usage of every keyword; there is plenty of material on the web for that, and readers can study it themselves. The key point, as mentioned earlier, is that concurrent programming must solve atomicity, visibility, and ordering, so let us look at what Java uses to guarantee each of them.

Atomicity

To guarantee atomicity, Java provides two high-level bytecode instructions, monitorenter and monitorexit. As introduced in the article on the implementation of synchronized, the Java keyword corresponding to these two bytecodes is synchronized.

Therefore, synchronized can be used in Java to ensure that operations inside a method or a code block are atomic.
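As a minimal sketch of this, the following counter (class and method names invented for the example) uses synchronized so that two threads incrementing concurrently always produce the exact total:

```java
public class SyncCounter {
    private int count = 0;

    // synchronized compiles down to monitorenter/monitorexit around the body,
    // so the read-add-write of count++ executes as one indivisible unit.
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    // Runs two threads that each increment the counter perThread times.
    static int runDemo(int perThread) {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> { for (int i = 0; i < perThread; i++) c.increment(); };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return c.get();
    }

    public static void main(String[] args) {
        System.out.println(runDemo(10_000)); // always 20000 with synchronized
    }
}
```

Without synchronized on increment(), the total can come up short because of the lost-update interleaving shown earlier.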

Visibility

The Java memory model implements visibility by using main memory as the transfer medium: a new value is synchronized back to main memory after a variable is modified, and the variable's value is refreshed from main memory before it is read.

Java's volatile keyword provides exactly this feature: a volatile variable is synchronized to main memory immediately after it is modified, and it is refreshed from main memory every time it is read. Therefore, volatile can be used to guarantee the visibility of variables in multi-threaded operation.
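A minimal sketch of volatile visibility, with illustrative names: the main thread clears a volatile flag, and the spinning reader thread is guaranteed to observe the update and exit its loop:

```java
public class VolatileFlag {
    // Without volatile, the reader thread might loop forever on a stale
    // cached value of running; volatile forces every read to see the
    // latest write that went through main memory.
    static volatile boolean running = true;

    static void demo() {
        Thread reader = new Thread(() -> {
            while (running) { /* spin until the writer's update is visible */ }
        });
        reader.start();
        running = false; // write by the main thread, visible to the reader
        try { reader.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        demo();
        System.out.println("reader observed the update and exited");
    }
}
```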

Besides volatile, the Java keywords synchronized and final can also achieve visibility, just through different mechanisms, which are not expanded on here.

Ordering

In Java, both synchronized and volatile can guarantee the ordering of operations between threads, but they work differently:

The volatile keyword forbids instruction reordering, while the synchronized keyword guarantees that only one thread operates at any given time.
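A classic example that combines both: the double-checked locking singleton, where volatile prevents the reordering that could publish a half-constructed object, and synchronized ensures only one thread initializes. The class name is illustrative:

```java
public class Singleton {
    // volatile forbids the reordering in which the reference is published
    // before the constructor finishes; without it, another thread could
    // observe a half-constructed object through the unsynchronized first check.
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock taken
            synchronized (Singleton.class) {    // only one thread may initialize
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton(); // allocate, construct, then publish
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // same instance: true
    }
}
```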

That was a quick introduction to the keywords used in Java concurrent programming to solve atomicity, visibility, and ordering. You may have noticed that synchronized seems omnipotent, since it satisfies all three properties at once; this is in fact why so many people overuse it.

However, synchronized hurts performance. Although the compiler provides many lock-optimization techniques, overusing it is still not recommended.

Summary

After reading this article, you should know what the Java memory model is, what it is for, and what Java does to implement it. I hope readers will go on to study the related Java keywords in depth and write a few examples to get hands-on experience.

Original link: http://www.hollischuang.com/archives/2550


