Java Multithreading and the Concurrency Model: Locks

Source: Internet
Author: User

This is a long article summarizing Java multithreaded development. It begins with the synchronized keyword, which has been part of Java from the start, and goes on to discuss Java's multithreading and concurrency models. Hopefully this walkthrough helps Java developers get a clearer picture of the landscape of Java concurrent programming.

The Internet is full of introductions to Java multithreaded programming, each summarizing the field from a different angle. However, most articles do not describe how multithreading is actually implemented, and so never fully satisfy the curious developer.

This article starts by introducing Java's built-in locks for thread safety, so that you can understand how built-in locks are implemented, the logic behind them, and the performance problems they can cause. It then explains why locks exist in Java multithreaded programming: to guarantee safe use of shared variables. Let's get to the topic.

Unless otherwise stated, the following content refers to the Java environment.

Part 1: Lock

When concurrent programming comes up, the first thing most Java engineers think of is the synchronized keyword. It is a product of the Java 1.0 era and is still used in many projects today. Having survived every Java release since, it has existed for more than 20 years, and over that long lifetime synchronized has undergone an evolution of its own.

In its early days, the synchronized keyword was the only solution to Java concurrency problems, and this "heavyweight" lock came with a very high performance overhead. To work around that cost, early engineers invented many idioms (such as double-checked locking, DCL) to improve performance. Fortunately, Java 1.6 introduced lock-state upgrading to address this overhead. Broadly speaking, Java's built-in locks fall into two categories: class locks and object locks. The two do not affect each other; let's look at what each one means.

Class lock and object lock

Java built-in locks come in two forms, class locks and object locks, because the JVM has two kinds of resources to coordinate for thread safety: instance objects stored in the JVM heap, and class variables stored in the method area. As mentioned above, class locks and object locks are mutually isolated; they do not directly affect each other, and they provide thread-safe access to shared state in different ways. The following rules describe how the two locks behave:

1. When two or more threads access a shared object together, only one thread at a time can execute that object's synchronized (this) methods (or synchronized blocks). In other words, only one thread at a time can run that code; the other threads must wait until the executing thread finishes and releases the shared object's lock.

2. When a thread has acquired the lock for an object's synchronized method (or synchronized block), other threads can still call that object's non-synchronized methods.

3. When a thread holds an object's synchronized (this) lock for a synchronized method (or block), a synchronized static method (or block) guarded by the class lock can still be entered by other threads at the same time. The two locks do not compete for the same resource.
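Rule 3 can be sketched in code. The class lock (a static synchronized method) and the object lock (an instance synchronized method) guard different monitors, so a thread holding one does not block a thread acquiring the other. This is an illustrative example; the class name and output strings are mine.

```java
public class LockScope {
    // Object lock: the monitor is the instance (this)
    public synchronized void instanceMethod() {
        System.out.println("holding the object lock of an instance");
    }

    // Class lock: the monitor is LockScope.class
    public static synchronized void staticMethod() {
        System.out.println("holding the class lock of LockScope.class");
    }

    public static void main(String[] args) throws InterruptedException {
        LockScope obj = new LockScope();
        // The two threads below never contend: they lock different monitors.
        Thread t1 = new Thread(obj::instanceMethod);
        Thread t2 = new Thread(LockScope::staticMethod);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```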

Now that we have a basic understanding of the built-in lock categories, a natural question is how the JVM implements and stores the lock state. In fact, the JVM stores lock information in the object header of the Java object. First, let's look at the Java object header.

Java object header

To address the performance overhead of the early synchronized keyword, lock-state upgrading was introduced in Java 1.6 to reduce the cost of the 1.0-era locks. An object's lock is upgraded along the path: unlocked -> biased lock -> lightweight lock -> heavyweight lock.

Figure 1.1: Object Header

In the HotSpot virtual machine, the object header is divided into two parts (arrays have an additional part that stores the array length). One part stores runtime data such as the HashCode, GC generational age, and lock flag bits; this part is called the Mark Word. While the VM is running, the JVM reuses the Mark Word's storage space to save memory, so the contents of the Mark Word change with the lock state. The other part is a type pointer that points to the object's class metadata in the method area.

Java upgrades the state of a built-in lock by replacing the Mark Word in the object header. Next, let's look at how the lock state is upgraded from unlocked all the way to a heavyweight lock.
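As a rough mental model of the Mark Word states discussed in this chapter, the low-order lock flag bits can be written out as constants. This is purely illustrative: the constant names are mine, and the JVM does not expose the Mark Word to Java code.

```java
// Illustrative sketch (not a JVM API): the low bits of the HotSpot
// Mark Word tag the lock state. The biased and unlocked states share
// the same two tag bits and are distinguished by a separate bias bit.
public final class MarkWordBits {
    public static final int LOCK_BITS_UNLOCKED    = 0b01; // bias bit = 0
    public static final int LOCK_BITS_BIASED      = 0b01; // bias bit = 1
    public static final int LOCK_BITS_LIGHTWEIGHT = 0b00; // points to a stack lock record
    public static final int LOCK_BITS_HEAVYWEIGHT = 0b10; // points to an inflated monitor
    public static final int LOCK_BITS_GC_MARKED   = 0b11; // used by the garbage collector

    private MarkWordBits() {} // constants holder, not instantiable
}
```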

Status upgrade of built-in locks

To improve lock performance, the JVM distinguishes four lock states, from lowest to highest: unlocked, biased lock, lightweight lock, and heavyweight lock. In Java applications most locks are object locks, and as thread contention increases, an object lock may eventually be upgraded to a heavyweight lock. Locks can be upgraded but never downgraded (this is one reason benchmark runs need warm-up data to avoid noise interference, although noise can of course have other causes). Before explaining the upgrade process, we introduce an important lock concept: the spin lock.

Spin lock

The performance cost of built-in locks under mutual exclusion is obvious: a thread that fails to get the lock must wait for the lock to be released before it can compete to run again. Suspending and resuming a thread requires the operating system to switch from user mode to kernel mode, and since each thread is only allocated a limited time slice, every context switch wastes CPU time. This is where spin locks show their advantage.

Spinning means letting a thread that failed to get the lock keep running for a while: the spinning thread does not sleep (it continuously occupies CPU resources), so it is not truly blocked. Only if the lock still cannot be acquired does the thread fall back to blocking at the boundary of the critical section. Spinning has been enabled by default since Java 1.6 (it could be controlled with the JVM flag -XX:+UseSpinning; in Java 1.7 that flag was removed, so spinning is no longer user-configurable and is always performed by the virtual machine by default).

Although a spin lock avoids thread sleep and reduces waiting overhead, it also wastes CPU resources: the spinning thread burns CPU cycles doing nothing. Spinning only makes sense when the expected wait is shorter than the cost of blocking and resuming the thread. The JVM therefore limits how long a thread may spin; once the limit is exceeded, the thread is suspended.

Java 1.6 also introduced the adaptive spin lock, which replaces the original fixed spin limit with one determined by previous spin times and the lock's state. For example, if a thread has just successfully acquired a lock by spinning, the JVM will likely allow a relatively long spin the next time that lock is requested; conversely, if spinning rarely succeeds for a lock, the spin time is shortened or spinning is skipped entirely. This was further refined in Java 1.7.

Spinning is used throughout the lock states as a supplement to biased locks, lightweight locks, and heavyweight locks.
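The idea of spinning can be sketched at user level with a minimal spin lock built on compare-and-set. This is an illustrative sketch, not how synchronized is implemented internally, and unlike the JVM's adaptive spinning it never gives up and blocks. The class name is mine.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal, non-reentrant user-level spin lock: a thread that fails
// the CAS busy-waits instead of being suspended by the OS.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin until the CAS from false -> true succeeds.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU spin-wait hint (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

Note that this lock is not reentrant: a thread that calls lock() twice without unlocking will spin forever, which is one reason real implementations bound the spin time.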

Biased lock

The biased lock is a lock optimization introduced in Java 1.6. Its core idea is that if there is no contention, the synchronization operations of the thread that previously acquired the lock are elided, reducing lock checks inside the JVM. In other words, once a thread has obtained an object's biased lock, subsequent acquisitions of that lock by the same thread require no additional synchronization.

Concretely, when a thread first enters a synchronized block, the ID of the biased thread is stored in the object header's Mark Word. When the thread later enters the same lock, it simply checks whether the Mark Word is in the biased state and whether the bias points to the current thread.

If the test succeeds, the thread holds the biased lock. If it fails, the thread checks whether the bias flag in the Mark Word is set (marked as 1). If it is not set, the thread competes for the lock with CAS; if it is set, the thread tries to use CAS to point the bias in the object header's Mark Word to itself. The biased lock can be disabled with the JVM flag -XX:-UseBiasedLocking.

Because the bias is revoked only when contention appears, when another thread attempts to compete for a biased lock, the lock is released only once the thread holding the bias reaches a point where revocation can safely take place.
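The biased-lock fast path described above can be modeled with a small sketch: if the recorded owner is already the current thread, entry costs only a read; otherwise a CAS tries to install the bias. This is an illustrative model of the idea, not the JVM's implementation, and the class and method names are mine.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative model of the biased-lock fast path: re-entry by the
// biased owner needs no atomic operation; only the first acquisition
// (or a competing thread) pays for a CAS.
public class BiasedOwnerSketch {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public boolean tryEnterBiased() {
        Thread me = Thread.currentThread();
        if (owner.get() == me) {
            return true; // already biased toward us: no CAS needed
        }
        // First acquirer biases the "lock" toward itself via CAS.
        return owner.compareAndSet(null, me);
    }
}
```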

Lightweight lock

If the attempt to obtain the biased lock fails, the JVM tries a lightweight lock, upgrading the lock state. The point of lightweight locks is to optimize how locks are acquired, reducing the performance overhead of the 1.0-era mutex when there is no multi-thread contention. In the JVM, the lightweight lock is implemented with the BasicObjectLock object.

Concretely, before entering the synchronized block, the current thread places a BasicObjectLock object (composed of a BasicLock object and a pointer to the Java object) in its Java stack frame. The thread then tries to use CAS to replace the Mark Word in the object header with a pointer to this lock record. If the CAS succeeds, the thread has acquired the lock and the object's lock bits become 00 (locked). If it fails, another thread is competing, and the current thread spins to try to acquire the lock.

When two or more threads compete for the same lock, the lightweight lock no longer works: the JVM inflates it into a heavyweight lock, and the lock bits change to 10 (monitor).

Unlocking a lightweight lock is also done by a CAS that restores the object header. If the CAS succeeds, the lock is released. If it fails, another thread is competing for the object, and the lock is inflated into a heavyweight lock as it is released.

Heavyweight lock

After the JVM fails to acquire the lightweight lock, it uses a heavyweight lock for the synchronization, and the object's Mark Word bits are set to 10 (monitor). Under a heavyweight lock, blocked threads are suspended by the operating system; when a thread later regains CPU time, an OS context switch is required before it can execute, which is much less efficient.

Through the above introduction, we have seen Java's built-in lock upgrade policy. Since performance degrades with each upgrade, we should avoid unnecessary lock acquisition in program design wherever possible, for example by using a centralized cache.

An aside: inheritance of built-in locks

Built-in locks can be "inherited": when a subclass overrides a synchronized method of its parent class, the subclass can still use the parent's synchronized method through the same built-in lock. Let's look at the following example:

public class Parent {
    public synchronized void doSomething() {
        System.out.println("parent do something");
    }
}

public class Child extends Parent {
    public synchronized void doSomething() {
        super.doSomething();
    }

    public static void main(String[] args) {
        new Child().doSomething();
    }
}

Code 1.1: built-in lock inheritance

Can the above Code run normally?

The answer is yes: both Child.doSomething() and Parent.doSomething() synchronize on the same object (this), and Java's built-in locks are reentrant, so the same thread can acquire a monitor it already holds.
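Reentrancy is the property that makes the example work, and it can be demonstrated directly. The class and method names below are mine, chosen for illustration.

```java
// Reentrancy demo: a thread holding a monitor can acquire the same
// monitor again without blocking, which is what lets the overriding
// method call the synchronized super method on the same object.
public class Reentrant {
    public synchronized void outer() {
        inner(); // re-enters the monitor of 'this', already held by outer()
    }

    public synchronized void inner() {
        System.out.println("re-entered without deadlock");
    }

    public static void main(String[] args) {
        new Reentrant().outer(); // completes; a non-reentrant lock would hang here
    }
}
```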

Avoiding liveness risks

Concurrency safety and liveness in Java affect each other: we use locks to guarantee thread safety, but must also avoid liveness hazards. Unlike a database, Java cannot automatically detect and break deadlocks, nor recover from them. Worse, a deadlock in a program is often not obvious and only appears under a particular concurrent interleaving, so this problem can bring disastrous results to an application. Here we introduce the following liveness hazards: deadlock, thread starvation, poor responsiveness, and livelock.

Deadlock

When a thread holds a lock forever, any other thread that tries to acquire that lock will block permanently.

The classic example is the AB lock: thread 1 holds the lock on shared datum A while thread 2 holds the lock on shared datum B; thread 1 then wants B's lock while thread 2 wants A's lock. Drawn as a graph, the wait relationship forms a cycle. This is the simplest form of deadlock. The same problem appears, for example, when batch-updating unordered data: if the lack of ordering lets two threads contend for the same pair of resources in opposite orders, they can deadlock. The solution is to sort first and then process.
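The "sort first" fix generalizes to lock ordering: if every thread acquires the two locks in the same global order, the wait graph can never form a cycle. Below is a sketch of the idea; the class name, method name, and the use of identity hash codes as the ordering key are my own illustrative choices (a real implementation would also need a tie-breaker for hash collisions).

```java
// Deadlock avoidance by lock ordering: always acquire the two
// monitors in a globally consistent order.
public class OrderedLocking {
    public static void withBothLocks(Object a, Object b, Runnable critical) {
        // Order the two locks by identity hash code so every caller
        // nests them the same way. (Ties would need a fallback lock.)
        Object first  = System.identityHashCode(a) <= System.identityHashCode(b) ? a : b;
        Object second = (first == a) ? b : a;
        synchronized (first) {
            synchronized (second) {
                critical.run(); // both locks held here, in canonical order
            }
        }
    }
}
```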

Thread starvation

Thread starvation means a thread is perpetually denied access to the resources it needs and cannot make further progress; a typical cause is improper use of thread priorities while threads compete for CPU time slices. Although the Java API defines thread priorities, a priority is only a hint to the scheduler (note that operating systems have their own priority models, which do not map uniformly onto Java's), so there is no guarantee that a high-priority thread will be chosen to run first.

Weak responsiveness

In GUI programs, long-running work generally runs in the background while the foreground provides feedback. When a CPU-intensive background task competes for resources with the foreground task, the GUI may freeze; we can therefore lower the background task's priority to keep the user experience as smooth as possible.

Live lock

Another form of liveness failure is livelock: the thread is not blocked, but it cannot make progress because it keeps retrying the same operation, which always fails.

Liveness risks should be avoided during development, as they can have catastrophic consequences for an application.

Summary

That concludes our coverage of the synchronized keyword. The key takeaway of this chapter is that the lock becomes "heavy" because contention between threads drives the state upgrades. In real development we may have other options, such as the Lock interface, which outperforms the built-in lock in some concurrent scenarios.
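One concrete advantage of the java.util.concurrent.locks.Lock interface over synchronized is the timed tryLock, which lets a thread bound its wait instead of blocking indefinitely. A minimal sketch (class and method names are mine):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the Lock-interface alternative: tryLock with a timeout
// turns an unbounded block into a bounded, recoverable failure.
public class LockSketch {
    private final ReentrantLock lock = new ReentrantLock();

    public boolean doWork() {
        try {
            if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
                try {
                    return true; // got the lock within the timeout; do the work here
                } finally {
                    lock.unlock(); // always release in finally
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore interrupt status
        }
        return false; // lock not acquired in time: caller can back off or retry
    }
}
```

Unlike synchronized, forgetting the finally/unlock pairing here is a real bug, which is part of the trade-off for the extra flexibility.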

Both built-in locks and the Lock interface exist to guarantee concurrency safety; a concurrent environment generally has to consider how to provide safe access to shared objects. In the second chapter, we will detail the thread-safety problems around shared objects and their solutions.
