1. The Java Memory Model The Java Virtual Machine specification attempts to define a Java Memory Model (JMM) that masks the memory-access differences between various hardware and operating systems, so that Java programs achieve consistent memory-access behavior across all platforms.
The main goal of the Java Memory Model is to define the access rules for each variable in the program, that is, the low-level details of how the virtual machine stores variables into memory and reads variables back out of memory.
① Main memory and working memory The Java Memory Model specifies that all variables are stored in main memory (the name is the same as the main memory discussed when introducing physical hardware, and the two can be loosely compared, but this main memory is only a part of the virtual machine's memory). Each thread also has its own working memory (which can be compared to the processor cache mentioned earlier). A thread's working memory holds copies of the main-memory variables used by that thread, and all of a thread's operations on a variable (read, assign, etc.) must be performed in working memory; a thread cannot read or write variables in main memory directly.
② Memory interaction operations The Java Memory Model defines the following 8 operations, and a virtual machine implementation must ensure that each of them is atomic and indivisible (for variables of type double and long, the load, store, read, and write operations are allowed to be non-atomic on some platforms).
- lock: acts on a main-memory variable; it marks the variable as exclusively owned by one thread.
- unlock: acts on a main-memory variable; it releases a variable that is in the locked state so that the variable can be locked by another thread.
- read: acts on a main-memory variable; it transfers the variable's value from main memory into the thread's working memory, for use by the subsequent load operation.
- load: acts on a working-memory variable; it puts the value obtained by the read operation from main memory into the variable copy in working memory.
- use: acts on a working-memory variable; it passes the value of the variable in working memory to the execution engine. This operation is performed whenever the virtual machine encounters a bytecode instruction that needs the variable's value.
- assign: acts on a working-memory variable; it assigns a value received from the execution engine to the variable in working memory. This operation is performed whenever the virtual machine encounters a bytecode instruction that assigns a value to the variable.
- store: acts on a working-memory variable; it transfers the variable's value from working memory to main memory, for use by the subsequent write operation.
- write: acts on a main-memory variable; it puts the value obtained by the store operation from working memory into the variable in main memory.
Operating rules:
- Neither of the read/load pair nor the store/write pair may appear alone.
- A thread is not allowed to discard its most recent assign operation, that is, the variable must be synchronized back to main memory after it has changed in working memory.
- A thread is not allowed to synchronize data from the working memory of the thread back to main memory for no reason (no assign operation has occurred).
- A new variable can only be "born" in main memory; working memory may not directly use a variable that has not been initialized (by load or assign). In other words, an assign or load operation must be performed on a variable before a use or store operation on it.
- A variable can be locked by only one thread at a time, but the same thread may repeat the lock operation multiple times; after locking multiple times, the variable is unlocked only after the same number of unlock operations have been performed.
- Performing a lock operation on a variable clears the value of that variable in working memory; before the execution engine uses the variable, a load or assign operation must be re-executed to initialize the variable's value.
- If a variable has not been locked by a lock operation beforehand, it is not allowed to perform an unlock operation on it, nor is it allowed to unlock a variable that is locked by another thread.
- Before performing an unlock operation on a variable, the variable must first be synchronized back to main memory (by executing the store and write operations).
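As a conceptual sketch, a simple increment of a shared variable can be thought of as passing through these abstract operations (the comments map each step; this is an illustration of the JMM's model, not an actual API, and the class name is made up for the example):

```java
public class InteractionSketch {
    static int i = 0; // conceptually lives in main memory

    static int increment() {
        // read + load: the value of i is transferred from main memory
        // into this thread's working-memory copy
        int copy = i;
        // use: the copy is handed to the execution engine, which computes copy + 1
        // assign: the result is written back to the working-memory copy
        copy = copy + 1;
        // store + write: the copy is transferred back and written into main memory
        i = copy;
        return i;
    }

    public static void main(String[] args) {
        System.out.println(increment()); // single-threaded, so this prints 1
    }
}
```

Because these six steps are not performed as one atomic unit, two threads interleaving them on the same variable can lose updates, which is exactly what the lock/unlock rules above exist to prevent.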
③ Special rules for volatile variables When a variable is declared volatile, it has two characteristics:
- The first is to ensure that this variable is visible to all threads, where "visibility" means that when a thread modifies the value of the variable, the new value is immediately known to other threads.
Since volatile can only guarantee visibility, in operation scenarios that do not satisfy both of the following rules we still have to use locking (synchronized, or the atomic classes in java.util.concurrent) to guarantee atomicity:
  - The result of the operation does not depend on the current value of the variable, or it can be ensured that only a single thread modifies the variable's value.
  - The variable does not need to participate in invariant constraints together with other state variables.
- The second semantic of volatile variables is to prohibit instruction-reordering optimizations. Ordinary variables only guarantee that correct results are obtained at the points that depend on those results within the executing method; they do not guarantee that the order of variable assignments matches the order of execution in the program code.
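Because volatile only guarantees visibility, a compound action such as `count++` (a read, an increment, and a write) can still interleave between threads; declaring the counter volatile would not fix the race. A minimal sketch of restoring atomicity with java.util.concurrent's AtomicInteger (the class and method names of the demo itself are made up for the example):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    // A volatile int would NOT make count++ atomic;
    // AtomicInteger.incrementAndGet() is a single atomic read-modify-write.
    static final AtomicInteger count = new AtomicInteger(0);

    static int run(int threads, int incrementsPerThread) throws InterruptedException {
        count.set(0);
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            ts[t] = new Thread(() -> {
                for (int k = 0; k < incrementsPerThread; k++) {
                    count.incrementAndGet(); // atomic increment
                }
            });
            ts[t].start();
        }
        for (Thread th : ts) {
            th.join(); // wait for all increments to complete
        }
        return count.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // with a plain volatile int and count++, the result would often fall below 20000
        System.out.println(run(2, 10000));
    }
}
```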
④ Atomicity, visibility, and ordering
- Atomicity (atomicity): Atomic variable operations that are directly guaranteed by the Java memory model include read, load, assign, use, store, and write.
- Visibility (Visibility): visibility means that when a thread modifies the value of a shared variable, other threads can immediately know the change.
volatile guarantees the visibility of variables in multi-threaded operations, which is not guaranteed by ordinary variables. In addition to volatile, Java has two keywords to achieve visibility, namely synchronized and final.
- Ordering: the natural ordering in Java programs can be summed up in one sentence: observed from within a thread, all operations are ordered; observed from one thread looking at another, all operations are unordered. The first half refers to "within-thread as-if-serial semantics"; the second half refers to the "instruction reordering" phenomenon and the "working memory / main memory synchronization delay" phenomenon.
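The reordering prohibition of volatile is what makes the common "write the data, then set a flag" pattern safe: a reader that sees the volatile flag set is guaranteed to also see the earlier write to the plain field. A minimal sketch (the class and field names are made up for the example):

```java
public class VolatilePublishDemo {
    static int data = 0;                    // plain, non-volatile field
    static volatile boolean ready = false;  // volatile flag: the write to data cannot be reordered past it

    static int run() throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;    // this write happens-before the volatile write below
            ready = true; // volatile write: publishes data to other threads
        });
        writer.start();
        while (!ready) {
            Thread.onSpinWait(); // busy-wait until the volatile flag becomes visible
        }
        // because of the volatile variable rule, data == 42 is guaranteed here
        return data;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 42
    }
}
```

If `ready` were a plain field, the reader could spin forever or observe `ready == true` while still reading a stale `data`.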
⑤ The happens-before principle The following are the "natural" happens-before relationships under the Java Memory Model; they exist without the assistance of any synchronizer and can be relied on directly in code. If the relationship between two operations is not in this list and cannot be deduced from these rules, their ordering is not guaranteed, and the virtual machine may reorder them arbitrarily.
- Program Order Rule: within a thread, operations written earlier in program order happen-before operations written later (more precisely, this is control-flow order, since branches and loops must be taken into account).
- Monitor Lock Rule: an unlock operation happens-before a subsequent lock operation on the same lock. "The same lock" must be emphasized here, and "subsequent" refers to order in time.
- Volatile Variable Rule: a write to a volatile variable happens-before a subsequent read of that variable, where "subsequent" again refers to order in time.
- Thread Start Rule: the start() method of a Thread object happens-before every action of the started thread.
- Thread Termination Rule: all operations in a thread happen-before the detection of that thread's termination; we can detect that a thread has finished executing via the return of Thread.join() or the return value of Thread.isAlive().
- Thread Interruption Rule: a call to a thread's interrupt() method happens-before the interrupted thread's code detects the interrupt (which can be detected via Thread.interrupted()).
- Object Finalization Rule (Finalizer Rule): the completion of an object's initialization (the end of its constructor's execution) happens-before the start of its finalize() method.
- Transitivity: if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C.
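The thread termination rule can be relied on directly: after Thread.join() returns, all writes made by the joined thread are visible to the caller, even for plain (non-volatile) fields. A minimal sketch (names made up for the example):

```java
public class JoinVisibilityDemo {
    static int result = 0; // plain field: no volatile, no synchronized

    static int run() throws InterruptedException {
        Thread worker = new Thread(() -> {
            result = 42; // every write in the worker...
        });
        worker.start();
        worker.join(); // ...happens-before join() returning, so it is visible here
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 42
    }
}
```

Without the join() (for example, if the main thread merely slept and hoped the worker had run), no rule in the list would order the two accesses, and the read of `result` would be a data race.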
2. Java thread implementation There are 3 ways to implement threads: using kernel threads, using user threads, and a hybrid of user threads and lightweight processes.
① Implementation using kernel threads
Because of kernel-thread support, each lightweight process becomes an independent scheduling unit; even if one lightweight process is blocked in a system call, it does not prevent the rest of the process from continuing to work. However, lightweight processes have their limitations. First, because they are based on kernel threads, the various thread operations (creation, destruction, synchronization) all require system calls. System calls are relatively expensive and require switching back and forth between user mode and kernel mode. Second, each lightweight process needs a kernel thread to support it, so a lightweight process consumes a certain amount of kernel resources (such as the stack space of its kernel thread); consequently, the number of lightweight processes a system can support is limited.
② Implementation using user threads
A user thread in the narrow sense refers to a thread library built entirely in user space, whose threads the system kernel cannot perceive at all. The creation, synchronization, destruction, and scheduling of user threads are done entirely in user mode without kernel help. If the program is implemented properly, such threads never need to switch into kernel mode, so operations can be very fast and cheap, and a much larger number of threads can be supported; the multithreading of some high-performance databases is implemented with user threads. (Most languages do not use this approach.)
③ Hybrid implementation using user threads together with lightweight processes
3. Thread scheduling Thread scheduling is the process by which the system assigns processor time to threads. There are two main scheduling methods, namely cooperative thread scheduling (cooperative threads-scheduling) and preemptive thread scheduling (preemptive threads-scheduling).
- In a multithreaded system with cooperative scheduling, the execution time of a thread is controlled by the thread itself: after a thread finishes its own work, it actively notifies the system to switch to another thread. The biggest benefit of cooperative multithreading is simple implementation; because a thread only switches after it has finished its own work, the switch is known to the thread itself, so there are no thread-synchronization problems.
- In a multithreaded system with preemptive scheduling, each thread is allocated execution time by the system, and thread switching is not decided by the threads themselves (in Java, Thread.yield() can give up execution time, but a thread has no way to obtain extra execution time). With this way of implementing thread scheduling, threads' execution time is controlled by the system, and no single thread can cause the whole process to block. The thread scheduling method used by Java is preemptive scheduling.
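Thread.yield() is only a hint to the scheduler: the calling thread stays runnable and may be rescheduled immediately, so no program logic can depend on it. A minimal sketch (class name made up for the example):

```java
public class YieldDemo {
    static boolean run() throws InterruptedException {
        Runnable polite = () -> {
            for (int i = 0; i < 1000; i++) {
                Thread.yield(); // hint that the thread is willing to give up the processor; may be ignored
            }
        };
        Thread a = new Thread(polite);
        Thread b = new Thread(polite);
        a.start();
        b.start();
        a.join();
        b.join();
        // both threads always finish: yield never blocks or suspends a thread
        return !a.isAlive() && !b.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints true
    }
}
```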
4. Thread state transitions The Java language defines 6 thread states; at any point in time a thread can be in exactly one of them. The 6 states are as follows.
① New: a thread that has been created but not yet started is in this state.
② Runnable: includes both Running and Ready in operating-system terms; a thread in this state may be executing, or it may be waiting for the CPU to allocate execution time to it.
③ Waiting: threads in this state are not allocated CPU execution time and wait to be explicitly woken up by another thread. The following methods put a thread into the indefinite-waiting state: Object.wait() without a timeout parameter; Thread.join() without a timeout parameter; LockSupport.park().
④ Timed Waiting: threads in this state are not allocated CPU execution time either, but instead of waiting to be explicitly woken by another thread, they are automatically woken by the system after a certain time. The following methods put a thread into the timed-waiting state: Thread.sleep(); Object.wait() with a timeout parameter; Thread.join() with a timeout parameter; LockSupport.parkNanos(); LockSupport.parkUntil().
⑤ Blocked: the thread is blocked. The difference between the blocked state and the waiting states is that a blocked thread is waiting to acquire an exclusive lock, an event that occurs when another thread gives up the lock, whereas a waiting thread is waiting for a period of time to elapse or for a wake-up action to occur. A thread enters the blocked state while the program is waiting to enter a synchronized region.
⑥ Terminated: the thread has finished executing.
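These states correspond to the java.lang.Thread.State enum and can be observed via Thread.getState(). A minimal sketch checking the two states that can be observed deterministically (the intermediate states are timing-dependent, so they are not asserted; class name made up for the example):

```java
public class ThreadStateDemo {
    static String[] run() throws InterruptedException {
        Thread t = new Thread(() -> { /* finish immediately */ });
        String before = t.getState().name(); // NEW: created but not yet started
        t.start();
        t.join();                            // wait until the thread has finished
        String after = t.getState().name();  // TERMINATED: execution complete
        return new String[] { before, after };
    }

    public static void main(String[] args) throws InterruptedException {
        String[] s = run();
        System.out.println(s[0] + " -> " + s[1]); // prints NEW -> TERMINATED
    }
}
```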
011 Java memory model and threading