JVM Basics (7): Java Memory Model and Threads
Author | Jing fangge
Address | https://zhuanlan.zhihu.com/p/31582064
Statement | This article was originally written by Jing fangge and is published here with the original author's permission. Do not reprint it without the author's consent.
Preface
Through the previous chapters, we have learned how the various runtime memory areas are allocated. This article builds on that foundation.
This article also answers the following questions:
What optimizations does the hardware apply when executing operations?
What is the Java memory model?
How does working memory interact with main memory?
How are Java threads implemented?
How do Java threads switch between states?
Hardware Efficiency
1. Processor cache
Because main memory is slow relative to the processor, reading and writing it directly during computation would leave the CPU idle. A cache is therefore inserted between the processor and main memory: during computation, reads and writes go to the cache, and after the operation completes the results are synchronized back to main memory.
2. Out-of-order execution optimization
The processor may execute instructions out of order, then reassemble the results so that the outcome is the same as sequential execution (guaranteed only within a single thread).
Java Memory Model
Memory model: an abstraction of the rules by which reads and writes to a particular memory or cache proceed under a given operating protocol
Java Memory Model goals: define access rules for various variables
Variables include instance fields, static fields, and elements that constitute an array object.
Not included: local variables and method parameters (these are thread-private and never shared)
Main memory: the values of all variables live in main memory (corresponding mainly to the object instance data portion of the Java heap)
Working memory: the memory area a thread can operate on directly (corresponding mainly to parts of the VM stack)
Data transmission between threads: must pass through the main memory
Working memory is preferentially stored in registers and the processor cache, because that is where the program mainly runs.
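As a minimal sketch of the rule that threads exchange data only through main memory (the class and field names below are illustrative, not from the article): a writer thread publishes a value, and a volatile flag forces both the flush to main memory and the re-read from it.

```java
public class Sharing {
    // volatile: writes are flushed to main memory, reads are refreshed from it
    static volatile boolean ready = false;
    static int payload = 0;

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            payload = 42;  // plain write, ordered before the volatile write below
            ready = true;  // volatile write: publish through main memory
        });
        writer.start();
        while (!ready) {          // volatile read: re-read from main memory
            Thread.onSpinWait();  // hint that we are busy-waiting (Java 9+)
        }
        System.out.println(payload); // guaranteed to print 42
    }
}
```

Without `volatile` on `ready`, the reader could in principle spin forever on a stale working-memory copy.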
Memory Interaction
The virtual machine defines eight operations, each of which is required to be atomic:
lock: acts on main memory
unlock: acts on main memory
read: acts on main memory
write: acts on main memory
load: acts on working memory
use: acts on working memory
assign: acts on working memory
store: acts on working memory
It can be understood like this (suppose the program needs to assign the value 2 to variable a): first, lock is applied to variable a in main memory; read transfers a's value out of main memory, and load puts that value into the working-memory copy of a (read and load must execute in that order); use hands the working-memory value to the execution engine, which computes the new value; assign puts the result 2 back into the working-memory copy of a; store transfers that value toward main memory, and write puts it into the main-memory variable a (store and write must execute in that order); finally, unlock releases a so that other threads can access it.
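The walkthrough above can be annotated onto ordinary Java code. The eight operations are performed by the JVM, not called explicitly, so the comments below (and the class name, which is mine) only indicate where each one conceptually happens.

```java
public class AssignDemo {
    static int a = 0;

    public static void main(String[] args) {
        synchronized (AssignDemo.class) { // lock: the main-memory variable is locked
            // read + load: a's value is transferred into working memory
            int tmp = a;  // use: the working-memory copy feeds the execution engine
            tmp = 2;      // the engine computes the new value
            a = tmp;      // assign: the result goes into the working-memory copy
            // store + write: the value is transferred back to main memory
        }                 // unlock: other threads may now access 'a'
        System.out.println(a); // prints 2
    }
}
```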
Operating Principles:
A new variable can only be born in main memory; working memory may not use a variable that has not been initialized there (via load or assign).
A variable may be locked by only one thread at a time; the same thread may lock it repeatedly, but must then perform the same number of unlock operations.
Performing lock on a variable clears its copy in working memory, so it must be loaded again before use.
Before performing unlock on a variable, you must synchronize it back to main memory (that is, store and write must have executed).
Volatile features
Visibility: once modified, the new value is visible to all threads (ordinary variables propagate changes only through main memory, so other threads learn of them later)
Scenarios where volatile alone is thread-safe: the result of the operation does not depend on the variable's current value (or only a single thread modifies it), and the variable does not participate in invariants together with other state variables
Disabling instruction reordering
Instruction reordering: the CPU is allowed to dispatch multiple instructions, in an order different from the program order, to the circuit units that process them;
Memory barrier: volatile inserts memory-barrier instructions; during reordering, later instructions cannot be moved to a position before the barrier (in effect, use and assign on the variable execute in the order written in the code);
In particular, when a flag variable in the code controls execution order, "machine-level" optimization may change that order; volatile avoids the problems such reordering would cause;
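The flag scenario just described is the classic use of volatile. A sketch (class name mine): the worker loops on a stop flag; because the flag is volatile, the worker re-reads it from main memory on every iteration and is guaranteed to see the write from the main thread.

```java
public class StopFlag {
    // volatile: each read of 'stopped' is refreshed from main memory
    static volatile boolean stopped = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long count = 0;
            while (!stopped) { // volatile read every iteration; cannot be hoisted
                count++;
            }
            System.out.println("stopped");
        });
        worker.start();
        Thread.sleep(100); // let the worker spin for a moment
        stopped = true;    // volatile write: immediately visible to the worker
        worker.join();     // with volatile, the loop is guaranteed to terminate
    }
}
```

Without `volatile`, the JIT compiler may hoist the flag read out of the loop, and the worker could spin forever.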
Atomicity, visibility, and orderliness
1. Atomicity
Atomic operations: read, load, use, assign, store, write
Synchronized: the lock operation corresponds to the monitorenter instruction and unlock to monitorexit, the two instructions a synchronized block compiles to; the operations between them appear atomic when observed from outside, which satisfies coarser-grained synchronization needs.
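A sketch of synchronized providing atomicity for a compound operation (class name mine): `count++` is a read-modify-write sequence, but inside the synchronized block, whose boundaries compile to monitorenter/monitorexit, it behaves atomically with respect to other threads.

```java
public class Counter {
    private int count = 0;

    void increment() {
        synchronized (this) { // compiles to monitorenter (the lock operation)
            count++;          // read-modify-write, now indivisible to other threads
        }                     // compiles to monitorexit (the unlock operation)
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.count); // always 20000: no increments are lost
    }
}
```

With the synchronized block removed, interleaved read-modify-write sequences would typically lose updates and print less than 20000.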
2. Visibility
After a thread finishes an operation in working memory, it synchronizes the data back to main memory; before another thread reads the data, it refreshes the variable's value from main memory;
Volatile: guarantees that a modified value is synchronized to main memory immediately, and that the value is refreshed from main memory before each use;
Synchronized: before performing unlock on a variable, it must be synchronized back to main memory (store and write);
Final: visible to all threads once initialization completes, provided the this reference does not "escape" during construction.
3. Orderliness
Observed from within a thread, all operations are ordered and appear serial ("as-if-serial" semantics);
Observed from one thread, the operations of another thread appear out of order, because of the "instruction reordering" phenomenon and the delay in synchronizing from working memory to main memory;
Volatile: forbids instruction reordering;
Synchronized: only one thread may hold a variable's lock at a time, so synchronized blocks are entered serially.
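The textbook illustration of why ordering matters is double-checked locking (a common pattern, shown here as a sketch): without `volatile` on the field, instruction reordering inside `new Singleton()` could publish a reference to a not-yet-constructed object.

```java
public class Singleton {
    // volatile forbids reordering the object construction with the
    // publication of the reference
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, no lock taken
            synchronized (Singleton.class) {     // monitorenter
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();  // volatile write: safe publication
                }
            }                                    // monitorexit
        }
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // same instance: true
    }
}
```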
4. The happens-before principle
Definition: if operation A happens-before operation B, then B can observe the effects of A;
Why it matters: happens-before is the main basis for judging whether operations on shared data are thread-safe.
Built-in rules:
Program order rule: within a thread, operations happen-before later operations in the control-flow order of the program code
Monitor lock rule: an unlock operation happens-before a subsequent lock operation on the same lock
Volatile variable rule: a write to a volatile variable happens-before a subsequent read of that variable
Thread start rule: a call to Thread.start() happens-before every action of the started thread
Thread termination rule: every action of a thread happens-before another thread detects its termination, whether by join() returning or by isAlive() returning false
Thread interruption rule: a call to interrupt() happens-before the interrupted thread detects the interruption
Object finalization rule: the completion of an object's initialization happens-before the start of its finalize() method
Transitivity: if A happens-before B and B happens-before C, then A happens-before C
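The thread-start and thread-termination rules above can be seen in a small sketch (class name mine): no volatile or lock is needed, because start() and join() themselves establish the happens-before edges that make the write visible.

```java
public class HappensBefore {
    static int result = 0; // a plain field: no volatile, no synchronized

    public static void main(String[] args) throws InterruptedException {
        // start() happens-before every action in the new thread,
        // and every action of the thread happens-before join() returning.
        Thread t = new Thread(() -> result = 42);
        t.start(); // thread start rule
        t.join();  // thread termination rule
        System.out.println(result); // guaranteed to print 42
    }
}
```

By transitivity, the write of 42 happens-before the read in println, so the main thread cannot observe a stale 0.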
Thread Implementation Method
1. Kernel thread implementation
2. User thread implementation
In the broad sense, lightweight processes (LWPs) also count as user threads;
In the narrow sense, a user thread is implemented entirely in user space, without kernel support:
Advantage: fast operations and low overhead, since no switch into kernel mode is needed
Disadvantage: thread creation, switching, and scheduling must all be handled by the user program itself, which is difficult
3. Mixed implementation
Many UNIX-family operating systems use this hybrid model of user threads plus lightweight processes.
4. Java thread implementation
How JVM threads are mapped to operating-system threads depends on the threading model the operating system supports.
Scheduling method: thread scheduling is either cooperative (a thread runs until it yields control itself) or preemptive (the system allocates execution time); Java uses preemptive scheduling.
Status Conversion
NEW: created but not yet started
RUNNABLE: may be executing, or waiting for the operating system to allocate CPU time
WAITING: waiting indefinitely, to be explicitly woken by another thread
TIMED_WAITING: woken automatically by the system after a period of time, such as Thread.sleep(1000)
BLOCKED: waiting to acquire an exclusive lock
TERMINATED: the thread has finished execution
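The state transitions can be observed directly through Thread.getState() (class name mine; the 50 ms pause is a timing assumption to let the worker reach its sleep before we sample the state):

```java
public class ThreadStates {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(200); // puts the thread into TIMED_WAITING
            } catch (InterruptedException ignored) {
            }
        });
        System.out.println(t.getState()); // NEW: created, not started
        t.start();
        Thread.sleep(50);                  // give it time to enter sleep
        System.out.println(t.getState()); // TIMED_WAITING: inside sleep(200)
        t.join();
        System.out.println(t.getState()); // TERMINATED: run() has finished
    }
}
```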
Summary
Through this chapter, we have deepened our understanding of the memory model and thread safety. Memory is divided into two categories: working memory and main memory. Each thread has its own working memory; after a computation finishes there, the result is synchronized to main memory, so communication between threads must go through main memory. Memory interaction is carried out through the lock → read → load → use → assign → store → write → unlock operations. As for the volatile keyword: it inserts memory-barrier instructions to block the JVM's reordering optimization, and it forces the latest value to be refreshed from main memory before the variable is used. Finally, we looked at how threads are implemented and at the transitions among the six Java thread states: New, Runnable, Blocked, Waiting, TimedWaiting, and Terminated.
Note: This series extracts content from "Understanding the Java Virtual Machine", condensing the book's key points and adding my own understanding. I am a beginner; please point out any mistakes in the article.
Previous
JVM Basics (5): The Virtual Machine Class Loading Mechanism