Java Memory Model and Java Threading Implementation Principles


Efficiency and Consistency of Hardware
Cache-based storage interaction nicely resolves the speed mismatch between the processor and main memory, but it also raises the complexity of the computer system, because it introduces a new problem: cache coherence.
In a multiprocessor system, each processor has its own cache, yet all of them share the same main memory. When several processors operate on the same region of main memory, whose cached data should win when it is synchronized back to main memory? To keep data consistent, every processor must follow a protocol when accessing its cache, namely a cache coherence protocol.

The Java Memory Model

The Java Virtual Machine specification defines a Java memory model (JMM) that masks the memory-access differences between various hardware and operating systems, so that Java programs achieve consistent memory behavior across platforms.

Main Memory vs. Working Memory

The main goal of the Java memory model is to define the access rules for the variables in a program, that is, the low-level details of storing variables into memory and reading them back. "Variables" here differ from variables as described in the Java programming language: they include instance fields, static fields, and the elements of arrays, but not local variables or method parameters, because the latter are thread-private and never shared, so no contention can arise over them. To achieve good performance, the Java memory model neither forbids the execution engine from using the processor's registers or caches to interact with main memory, nor restricts the optimizations the just-in-time compiler may apply when reordering code. The Java memory model specifies that all variables are stored in main memory. Here "main memory" is only a portion of the virtual machine's memory, and the virtual machine's memory is in turn only the portion of the computer's physical memory allocated to the virtual machine process.
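As a quick illustration of which variables the rules above do and do not cover, consider the following sketch (the class and field names are illustrative, not from the original text):

```java
// Sketch: which variables fall under the Java memory model's rules.
public class JmmScope {
    static int staticField;        // static field: shared, covered by the JMM
    int instanceField;             // instance field: shared, covered by the JMM
    int[] elements = new int[4];   // array elements: shared, covered by the JMM

    int compute(int param) {       // method parameter: thread-private, not covered
        int local = param + 1;     // local variable: thread-private, not covered
        return local;
    }

    public static void main(String[] args) {
        JmmScope s = new JmmScope();
        System.out.println(s.compute(41)); // prints 42
    }
}
```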
Each thread also has its own working memory, which holds copies of the main-memory variables used by the thread. All of a thread's operations on a variable (reads and assignments) must take place in working memory; a thread may not read or write variables in main memory directly. Threads cannot access each other's working memory, so variable values are passed between threads via main memory; this is how threads, main memory, and working memory interact.
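A minimal sketch of passing a value between threads through main memory. The `volatile` keyword (standard Java, though not discussed in this excerpt) forces the writer's store to be flushed to main memory and the reader's working-memory copy to be refreshed; the class and method names here are illustrative:

```java
public class VisibilityDemo {
    // volatile guarantees the writer's store reaches main memory and the
    // reader's working-memory copy is refreshed on each read.
    private static volatile boolean ready = false;

    public static boolean demo() {
        Thread reader = new Thread(() -> {
            while (!ready) {
                Thread.onSpinWait(); // busy-wait until the flag becomes visible
            }
        });
        reader.start();
        ready = true;                // write travels via main memory to the reader
        try {
            reader.join(5_000);      // with volatile, the reader is guaranteed to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !reader.isAlive();    // true: the reader observed the write and exited
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```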
Note: main memory and working memory are not the same level of memory partition as the Java heap, stack, and method area in the Java memory layout. Main memory corresponds mostly to the object-instance data in the Java heap, while working memory corresponds to parts of the virtual machine stacks. At a lower level, main memory corresponds directly to physical memory; to run faster, the virtual machine may keep working memory in registers and caches, because a running program primarily reads and writes working memory.

Memory Interaction Operations

As for the concrete interaction protocol between main memory and working memory, that is, how a variable is copied from main memory into working memory and synchronized back from working memory to main memory, the Java memory model defines 8 operations, each of which a virtual machine implementation must guarantee to be atomic:

- lock: acts on a main-memory variable; marks the variable as exclusively owned by one thread.
- unlock: acts on a main-memory variable; releases a variable in the locked state so that other threads may lock it.
- read: acts on a main-memory variable; transfers the variable's value from main memory into the thread's working memory for the subsequent load.
- load: acts on a working-memory variable; puts the value obtained by read into the working-memory copy of the variable.
- use: acts on a working-memory variable; passes the variable's value from working memory to the execution engine.
- assign: acts on a working-memory variable; assigns a value received from the execution engine to the working-memory variable.
- store: acts on a working-memory variable; transfers the variable's value from working memory to main memory for the subsequent write.
- write: acts on a main-memory variable; puts the value obtained by store from working memory into the main-memory variable.
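The practical consequence of these operations: a statement like `count++` decomposes into read/load, use, assign, and store/write, so two threads can interleave between the steps and lose updates. A hedged sketch (class names are mine, not from the original) showing how `synchronized`, whose semantics map onto the lock/unlock operations, makes the sequence safe:

```java
public class CounterDemo {
    private int count = 0;

    // count++ alone is read/load -> use -> assign -> store/write: not atomic.
    // synchronized maps onto the lock/unlock operations, making the whole
    // sequence exclusive to one thread at a time.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static int run() {
        CounterDemo c = new CounterDemo();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return c.get(); // always 20000 with synchronization; without it, often less
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```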
Besides guaranteeing the atomicity of the 8 operations above, the Java memory model also specifies rules that must be followed when performing them, which together fully determine which memory-access operations in a Java program are safe under concurrency.

Java and Threading

Implementation of Threads

A thread is a more lightweight unit of scheduling than a process. Introducing threads separates a process's resource allocation from its execution scheduling: threads share the process's resources (memory address space, file I/O, and so on) yet can be scheduled independently (the thread is the basic unit of CPU scheduling). Mainstream operating systems all provide thread implementations, and the Java language offers a unified API for thread operations across hardware and operating-system platforms. All of the key methods of the Thread class are declared native; in the Java API, a native method usually means that the method is not, or cannot be, implemented in a platform-independent way (though native methods may of course also be used for efficiency). For this reason, this section is titled "Thread Implementation" rather than "Java Thread Implementation". There are 3 main ways to implement threads: with kernel threads, with user threads, and with a hybrid of user threads and lightweight processes.

1. Kernel Thread Implementation

A kernel thread (kernel-level thread, KLT) is a thread supported directly by the operating-system kernel. The kernel performs thread switching, schedules threads through its scheduler, and is responsible for mapping threads' work onto the processors. Each kernel thread can be seen as one clone of the kernel, which is how the operating system gains the ability to handle multiple things at once; a kernel that supports multiple threads is called a multithreaded kernel.
Programs usually do not use kernel threads directly but instead use a higher-level interface to them: lightweight processes (light weight process, LWP). Because each lightweight process is backed by one kernel thread, kernel threads must be supported first before there can be lightweight processes. This 1:1 relationship between lightweight processes and kernel threads is called the one-to-one threading model.

Thanks to kernel-thread support, each lightweight process becomes an independent scheduling unit; even if one lightweight process blocks in a system call, the rest of the process can keep working. But lightweight processes have their limitations. First, because they are built on kernel threads, thread operations such as creation, destruction, and synchronization require system calls, which are relatively expensive and involve switching back and forth between user mode and kernel mode. Second, each lightweight process needs a kernel thread behind it, so lightweight processes consume kernel resources (such as the kernel thread's stack space), and the number of lightweight processes a system can support is therefore limited.

2. User Thread Implementation

Broadly speaking, any thread that is not a kernel thread can be considered a user thread; by that definition a lightweight process is also a user thread, but its implementation is always built on the kernel, many of its operations require system calls, and its efficiency is constrained accordingly. In the narrow sense, a user thread is one implemented by a thread library built entirely in user space, whose existence the system kernel cannot perceive. User threads are created, synchronized, destroyed, and scheduled completely in user mode without kernel help. Because no switch into kernel mode is needed, these operations can be very fast and very cheap, and much larger numbers of threads can be supported; some multithreading in high-performance databases is implemented with user threads. This 1:n relationship between a process and its user threads is called the one-to-many threading model.

The advantage of user threads is that no kernel support is required; the disadvantage is that, without kernel support, every thread operation must be handled by the user program itself. Thread creation, switching, and scheduling all become the program's problem, and since the operating system allocates processor resources only to processes, questions such as "how to handle blocking" and "how to map threads onto multiple processors" become extremely difficult or even impossible to solve. As a result, programs implemented with user threads are generally very complex, and apart from specific environments, few programs use them.

3. Hybrid Implementation: User Threads plus Lightweight Processes

In this hybrid mode, user threads and lightweight processes coexist. User threads are still built entirely in user space, so creating, switching, and destroying them remains inexpensive, and large-scale user-thread concurrency is supported. The lightweight processes provided by the operating system serve as a bridge between user threads and kernel threads, so the thread scheduling and processor mapping provided by kernel threads can be used, and user threads' system calls go through lightweight processes, greatly reducing the risk of the whole process being blocked. In this hybrid mode, the ratio of user threads to lightweight processes is variable, an n:m relationship, also called the many-to-many threading model.

Implementation of Java Threads

Before JDK 1.2, Java threads were implemented with user threads; from JDK 1.2 on, the threading model was replaced with one based on the operating system's native threads. Therefore, in current JDK versions, the threading model supported by the operating system largely determines how Java virtual machine threads are mapped. This is not uniform across platforms, and the virtual machine specification does not mandate which threading model Java threads must use. The threading model affects only the concurrency scale and operating cost of threads; these differences are transparent to the coding and execution of Java programs. For Sun's JDK, both the Windows and Linux versions use the one-to-one threading model: one Java thread maps to one lightweight process, because that is the only threading model Windows and Linux provide.

Java Thread Scheduling

Java thread scheduling is preemptive. Although the scheduling of Java threads is automatic, we can "suggest" that the system give some threads a little more execution time and others a little less; this is done by setting thread priorities. However, thread priority is not very reliable: because Java threads are mapped to native threads, scheduling ultimately depends on the operating system, and although many operating systems provide a notion of priority, their levels do not necessarily correspond one to one with Java thread priorities. For example, Windows exposes only 7 thread priority levels, while the Java language defines 10.
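A small sketch of setting a priority through the standard `Thread` API (the class name is illustrative). Note that the value is only a hint to the operating system's scheduler:

```java
public class PriorityDemo {
    public static int configuredPriority() {
        Thread worker = new Thread(() -> {});
        // Java exposes 10 levels: MIN_PRIORITY (1) .. MAX_PRIORITY (10),
        // with NORM_PRIORITY (5) as the default. The OS may collapse these
        // (e.g. Windows exposes fewer levels), so this is only a hint.
        worker.setPriority(Thread.MAX_PRIORITY);
        return worker.getPriority();
    }

    public static void main(String[] args) {
        System.out.println(configuredPriority()); // prints 10
    }
}
```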
State Transitions of Java Threads

- New: a thread that has been created but not yet started.
- Runnable: covers both "running" and "ready" in operating-system terms; a thread in this state may be executing, or may be waiting for the CPU to allocate execution time to it.
- Waiting: threads in this state are not allocated CPU execution time; they wait to be woken explicitly by another thread.
- Timed Waiting: threads in this state are not allocated CPU execution time either, but they need not be woken by another thread; they wake automatically after a certain amount of time.
- Blocked: the thread is blocked. The difference between "blocked" and "waiting" is that a blocked thread is waiting to acquire an exclusive lock, an event that occurs when another thread releases the lock.
- Terminated: the state of a thread that has finished executing.
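Two of these states can be observed directly through the standard `Thread.getState()` API; a minimal sketch (class name is illustrative):

```java
public class StateDemo {
    public static Thread.State[] observe() {
        Thread t = new Thread(() -> {});
        Thread.State before = t.getState();   // NEW: created but not yet started
        t.start();
        try {
            t.join();                         // wait for run() to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        Thread.State after = t.getState();    // TERMINATED: run() has completed
        return new Thread.State[] { before, after };
    }

    public static void main(String[] args) {
        for (Thread.State s : observe()) {
            System.out.println(s);
        }
    }
}
```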



