Java Multithreading and the Concurrency Model: Locks


This long article summarizes the development of Java multithreading. It introduces the synchronized keyword, which has existed since the earliest days of Java, and discusses the Java multithreading and concurrency model. Hopefully this interpretation will help Java developers better understand the landscape of concurrent programming in Java.

The internet is full of introductions to Java multithreaded programming, each article summarizing the field from a different perspective. Most of them, however, do not explain how multithreading is actually implemented underneath, and so never leave developers truly satisfied.

This article begins with the built-in (intrinsic) lock, the original mechanism behind Java thread safety. It walks through the implementation logic and principles of the built-in lock and the performance issues that arise from it, and then explains that locks exist in Java multithreaded programming to guarantee thread-safe access to shared variables. Let's cut to the chase.

Unless otherwise noted, everything below refers to the Java environment.

Part One: Locks

When concurrent programming comes up, the first thing most Java engineers think of is the synchronized keyword. A product of the Java 1.0 era, it is still used in many projects and has survived more than 20 years of Java releases. Over such a long life cycle, synchronized has gone through an evolution of its own.

In the early days the synchronized keyword was the only solution to Java concurrency problems, but this "heavyweight" lock carried a significant performance overhead, and early engineers came up with a number of tricks (such as DCL, double-checked locking) to work around it. Fortunately, Java 1.6 introduced lock state upgrading to address this performance drain. Broadly speaking, Java's built-in locks fall into two categories, class locks and object locks, and the two do not affect each other. Let's look at what each of them means.
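The DCL mentioned above is double-checked locking, the classic trick for lazily initializing a singleton without paying for the lock on every access. A minimal sketch follows (the class name is made up for illustration; note that the volatile modifier is required since Java 1.5, or a partially constructed instance could be published to other threads):

```java
// Double-checked locking (DCL): avoids taking the lock on every call.
public class DclSingleton {
    // volatile is essential: it forbids reordering that could expose
    // a partially constructed instance to another thread.
    private static volatile DclSingleton instance;

    private DclSingleton() {}

    public static DclSingleton getInstance() {
        if (instance == null) {                    // first check, no lock
            synchronized (DclSingleton.class) {    // class lock
                if (instance == null) {            // second check, under lock
                    instance = new DclSingleton();
                }
            }
        }
        return instance;
    }
}
```

Before Java 1.5's memory-model revision, DCL was subtly broken even with this shape, which is part of why it earned its reputation as a risky optimization.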

Class locks and Object locks

The JVM must coordinate thread-safe access to two kinds of resources: instance objects in the JVM heap and class variables stored in the method area. For this reason, Java's built-in lock comes in two implementations, the class lock and the object lock. As mentioned earlier, class locks and object locks are isolated from each other: they have no direct effect on one another and provide thread-safe access to shared state in different ways. Their behavior can be described by the following rules:

1. When two (or more) threads access a shared object together, only one thread at a time can execute the object's synchronized (this) methods (or synchronized blocks). That is, at any given moment only one thread holds the object lock; the other threads must wait until the current holder finishes before they have a chance to acquire the lock on the shared object.

2. When a thread has acquired the right to execute an object's synchronized method (or synchronized block), other threads can still invoke the object's non-synchronized methods.

3. When a thread holds the lock for an object's synchronized (this) method (or block), the class lock can still be acquired by other threads at the same time; the two locks never contend for the same resource.
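Rule 3 can be observed directly. In the sketch below (class and method names are made up for illustration), one thread holds the class lock via a static synchronized method while the main thread acquires an instance's object lock without blocking:

```java
import java.util.concurrent.CountDownLatch;

// Demonstrates that the class lock and an object lock are independent.
public class LockKinds {
    static final CountDownLatch inClassLock = new CountDownLatch(1);

    // Class lock: synchronizes on LockKinds.class
    static synchronized void holdClassLock() throws InterruptedException {
        inClassLock.countDown();   // signal that the class lock is held
        Thread.sleep(500);         // keep holding it for a while
    }

    // Object lock: synchronizes on the instance (this)
    synchronized String useObjectLock() {
        return "object lock acquired";
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try { holdClassLock(); } catch (InterruptedException ignored) {}
        });
        t.start();
        inClassLock.await(); // wait until t is inside the class lock

        // Succeeds immediately even though the class lock is still held.
        System.out.println(new LockKinds().useObjectLock());
        t.join();
    }
}
```

If `useObjectLock` were also static synchronized, the main thread would instead block for the full 500 ms, because both methods would then contend for the same class lock.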

Now that we have a basic understanding of the categories of built-in locks, the next question is how the JVM implements and stores their state. In fact, the JVM stores the lock information in the object header of the Java object. So first, let's look at what the Java object header contains.

Java Object Header

To address the lock performance cost of the early synchronized keyword, lock state upgrading was introduced in Java 1.6 to reduce the overhead of the 1.0-era lock: an object's lock is upgraded step by step through the biased, lightweight, and heavyweight lock states.

Figure 1.1: Object Header

In the HotSpot virtual machine, the object header is divided into two parts (for arrays there is a third part that stores the array length). One part stores runtime data such as the hash code, GC generational age, and lock flag bits; this part is known as the Mark Word. To save space, the JVM reuses the Mark Word's storage during a run, so the information in the Mark Word changes as the lock state changes. The other part is the class pointer, which points to the type metadata in the method area.

Java implements the status upgrade of the built-in lock by replacing the Mark Word flags in the object header. The following sections look in detail at how the built-in lock is upgraded from the lock-free state all the way to the heavyweight lock state.

Status upgrade for built-in locks

To improve lock performance, the JVM provides a total of four lock levels. From low to high they are: the lock-free state, biased locks, lightweight locks, and heavyweight locks. In Java applications most locks are object locks, and an object lock may eventually escalate to a heavyweight lock as threads compete. Locks can be upgraded but not downgraded (which is one reason any benchmark needs a warm-up phase to prevent noise, among others). Before describing the upgrade path of the built-in lock, let's first introduce an important related concept: the spin lock.

Spin lock

The performance degradation caused by the built-in lock in its mutex state is obvious: a thread that fails to get the lock must wait for the holder to release it before it can compete to run, and suspending and resuming a thread requires a switch from the operating system's user mode to kernel mode. Since the CPU must ensure that every thread gets to run, the time slice allotted to each is limited, and every context switch wastes part of it. Under these conditions the spin lock shows its advantage.

Spinning means letting the thread that failed to get the lock keep running in a loop for a while. A spinning thread is not put to sleep (spinning does continuously consume CPU cycles), so it is not really blocked. When the lock is released by its owner, the spinning thread can enter the critical section directly, without ever having been suspended. Spinning is enabled by default since Java 1.6 (it could be controlled with the JVM parameter -XX:+UseSpinning; in Java 1.7 this parameter was removed, user configuration is no longer supported, and the virtual machine always spins by default).

Although a spin lock does not put the thread to sleep and thus reduces waiting time, it wastes CPU resources: while spinning, the CPU is running idle work. Spinning only makes sense if the expected wait is shorter than the cost of blocking on the synchronized block. The JVM therefore limits how long a thread may spin; once the limit is exceeded, the thread is suspended.
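Conceptually, a spin lock busy-waits on a flag with CAS instead of suspending the thread. The JVM's own spinning happens inside the virtual machine, but the idea can be sketched at the application level (the class below is a toy illustration, not how HotSpot implements it):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A toy spin lock: waiting threads busy-wait instead of being suspended.
public class SpinLockDemo {
    static class SpinLock {
        private final AtomicBoolean locked = new AtomicBoolean(false);

        void lock() {
            // CAS loop: burn CPU cycles until we flip false -> true.
            while (!locked.compareAndSet(false, true)) {
                Thread.onSpinWait(); // spin hint to the CPU (Java 9+)
            }
        }

        void unlock() {
            locked.set(false);
        }
    }

    static int counter = 0; // guarded by the spin lock below

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                try { counter++; } finally { lock.unlock(); }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("counter = " + counter); // 20000 with the lock
    }
}
```

This also makes the trade-off visible: the waiting thread never sleeps, so there is no context switch, but the `while` loop occupies a core for the whole wait.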

Java 1.6 also introduced the adaptive spin lock, which replaces the fixed spin limit with one determined by previous spin times on the same lock and the state of the lock's owner. For example, if a thread has just succeeded in acquiring a lock by spinning, the next acquisition is likely to succeed too, so the JVM allows it to spin for a relatively long time; conversely, if spinning rarely succeeds for a given lock, the spin time is kept very short or the spin phase is skipped entirely. This was optimized further in Java 1.7.

Spin locks are used throughout all of the built-in lock states, as a complement to biased, lightweight, and heavyweight locks.

Biased lock

The biased lock is a lock optimization mechanism introduced in Java 1.6. Its key idea is that if there is no contention, the synchronization operations of the thread that already acquired the lock are elided, reducing lock checks inside the JVM. In other words, once a thread obtains a biased lock on an object, no additional synchronization is needed when that same thread requests the lock again.

Concretely, when a thread first enters a synchronization block, the ID of the thread the lock is biased toward is stored in the object header's Mark Word. When a thread subsequently reaches the lock, it simply checks whether the Mark Word is in the biased state and whether it is biased toward the current thread.

If this test succeeds, the thread holds the biased lock. If it fails, it is necessary to check whether the biased flag in the Mark Word is set (the bias bit is 1). If it is not set, the threads compete for the lock with CAS; if it is set, the thread attempts to use CAS to point the biased-lock owner in the object header's Mark Word at the current thread. Biased locking can be disabled with the JVM parameter -XX:-UseBiasedLocking.

Because a biased lock is only released when contention appears, the thread holding the biased lock gives it up as soon as another thread tries to compete for it.
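The biased-lock path above relies on CAS (compare-and-swap): atomically replace a value only if it still holds the expected old value, with failure simply reported rather than blocking. The same primitive is exposed to application code through the atomic classes:

```java
import java.util.concurrent.atomic.AtomicInteger;

// CAS: the update succeeds only if the current value equals the expected one.
public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(0);

        boolean first = value.compareAndSet(0, 42);  // expects 0: succeeds
        boolean second = value.compareAndSet(0, 99); // expects 0, but value
                                                     // is now 42: fails

        System.out.println(first + " " + second + " " + value.get());
        // prints: true false 42
    }
}
```

This is the same shape the JVM uses on the Mark Word: "install my lock record / my thread ID, but only if the header still looks unlocked."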

Lightweight lock

If biased lock acquisition fails, the JVM attempts a lock upgrade to a lightweight lock. The point of lightweight locks is to optimize lock acquisition when there is no multi-threaded contention, reducing the mutex overhead of the Java 1.0-era lock. Inside the JVM, lightweight locks are implemented with the BasicObjectLock object.

Concretely, before the current thread enters the synchronized block, a BasicObjectLock object is placed in the Java stack frame; it consists of a BasicLock object and a pointer to the Java object. The current thread then attempts to use CAS to replace the Mark Word in the object header with a pointer to this lock record. If that succeeds, the thread gets the lock and the object's lock flag is changed to 00 | Locked; if it fails, another thread is competing, and the current thread tries to acquire the lock by spinning.

When two (or more) threads compete for the same lock, the lightweight lock no longer works; the JVM inflates it into a heavyweight lock and the lock flag is changed to 10 | Monitor.

Unlocking a lightweight lock likewise replaces the object header with CAS. If the CAS succeeds, the lock was released without contention; if it fails, other threads are competing for the object, and the lock inflates into a heavyweight lock.

Heavyweight lock

After lightweight lock acquisition fails, the JVM handles synchronization with a heavyweight lock, and the object's Mark Word is flagged as 10 | Monitor. Under heavyweight-lock scheduling, a thread that fails to get the lock is suspended by the operating system; when it later gets CPU time again, a system context switch is needed before it can run, and at that point efficiency drops considerably.

From the above we learned Java's built-in lock escalation strategy: each upgrade of the lock brings a performance drop, so in program design we should try to avoid unnecessary lock acquisition, for example by using a centralized cache.

An aside: Inheritance of built-in locks

Built-in locks can be "inherited": when a subclass overrides a synchronized method of its parent class and calls the parent's version, the synchronization still works across the override, because Java's intrinsic lock is reentrant and can be re-acquired by the thread that already holds it. Consider the following example:

public class Parent {
    public synchronized void doSomething() {
        System.out.println("Parent do something");
    }
}

public class Child extends Parent {
    public synchronized void doSomething() {
        super.doSomething();
    }

    public static void main(String[] args) {
        new Child().doSomething();
    }
}

Code Listing 1.1: Built-in lock inheritance

Can the above code run properly?

The answer is yes.

Avoid risk of activity

Safety and liveness in Java concurrency influence each other: while using locks to keep threads safe, we also need to avoid liveness hazards. Unlike a database, Java cannot automatically detect deadlocks and recover from them, and deadlock conditions in a program are often not obvious: they only occur once a particular concurrent state is reached. This problem can have disastrous results for an application. The main liveness hazards are: deadlock, thread starvation, poor responsiveness, and livelock.

Deadlock

A deadlock occurs when threads block permanently, each holding a lock that another thread needs while waiting for a lock that the other holds.

The classic example is the AB-lock problem: thread 1 acquires the lock on shared data A while thread 2 acquires the lock on shared data B; then thread 1 wants the lock on B and thread 2 wants the lock on A. Drawn as a graph, the wait-for relationship forms a cycle. This is the simplest form of deadlock. It also arises in practice: for example, when updating a batch of records in no particular order, two threads can end up scrambling for each other's resources. The solution is to sort the records into a fixed order before processing them.
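The AB-lock scenario and the ordering fix can be sketched as follows (the names lockA/lockB and transfer are illustrative). Acquiring the two locks in one fixed global order means the cycle in the wait-for graph can never form:

```java
// AB-lock deadlock and its fix: always acquire locks in one global order.
public class LockOrdering {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    // Deadlock-prone shape (intentionally NOT run here): thread 1 takes
    // A then B while thread 2 takes B then A -- each can end up holding
    // one lock and waiting forever for the other.

    // Fixed version: every thread takes lockA first, then lockB.
    static void transfer(String who) {
        synchronized (lockA) {
            synchronized (lockB) {
                System.out.println(who + " done");
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> transfer("thread-1"));
        Thread t2 = new Thread(() -> transfer("thread-2"));
        t1.start(); t2.start();
        t1.join(); t2.join(); // always completes: no wait cycle is possible
    }
}
```

"Sort before processing" in the batch-update example is exactly this: sorting the records imposes the global acquisition order.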

Thread Starvation

Thread starvation occurs when a thread is permanently denied the resources it needs and can never make progress, for example when it loses the contest for CPU time slices, or through inappropriate use of low-priority threads in Java. Although thread priorities are defined in the Java API, a priority is only a hint to the scheduler (note that operating systems differ in their thread priorities, and Java's priorities do not map to them uniformly); it does not guarantee that a high-priority thread will be executed by the CPU first.

Poor responsiveness

In GUI programs, the client typically runs work in the background and presents feedback in the foreground. When a CPU-intensive background task competes for resources with the foreground task, the GUI can appear to freeze. We can therefore lower the priority of the background work to preserve the best possible user experience.

Live lock

Another form of liveness failure is livelock: the thread is not blocked, but it cannot make progress because it keeps retrying the same operation and the operation keeps failing.

Liveness hazards are behavior we should avoid in development, as they can cause catastrophic consequences for an application.

Summary

That covers the synchronized keyword. What I hope this chapter makes clear is that the lock becomes "heavy" because contention between threads escalates its state. In real development we may have other options, such as the Lock interface, whose performance is better than the built-in lock in some concurrency scenarios.
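The Lock interface mentioned here is java.util.concurrent.locks.Lock, most commonly used through its ReentrantLock implementation. A minimal usage sketch (the class name is made up for illustration):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Explicit locking with the Lock interface instead of synchronized.
public class ExplicitLockCounter {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();          // unlike synchronized, lock() must be paired
        try {
            count++;
        } finally {
            lock.unlock();    // with unlock() in a finally block
        }
    }

    public int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        ExplicitLockCounter c = new ExplicitLockCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("count = " + c.get()); // prints: count = 20000
    }
}
```

Beyond this basic shape, Lock also offers features synchronized lacks, such as tryLock with a timeout and interruptible acquisition, which is part of why it performs better under some contention patterns.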

Whether through the built-in lock or through the Lock interface, the goal is concurrency safety: in a concurrent environment we generally need to consider how to protect access to shared objects. The second chapter covers the thread-safety issues raised by shared objects, and their solutions, in detail.

