Java Concurrent Programming Learning Notes: synchronized Low-Level Optimization

Source: Internet
Author: User
Tags: cas, mutex, stringbuffer

I. Heavyweight Locks

In the previous article we covered the usage of synchronized and the principle behind its implementation. We now know that synchronized is implemented through a monitor lock held inside the object. The monitor lock, in turn, is built on the operating system's mutex lock, and switching between threads with a mutex requires a transition from user mode to kernel mode. That transition is expensive and takes a relatively long time, which is why synchronized has a reputation for being inefficient. A lock that relies on the operating system's mutex in this way is what we call a "heavyweight lock." The core of the JDK's various optimizations of synchronized is reducing the use of this heavyweight lock: JDK 1.6 introduced "lightweight locks" and "biased locks" to cut the cost of acquiring and releasing locks and so improve performance.
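As a quick illustration of what the monitor lock guarantees, here is a minimal sketch (this demo class is not from the article): two threads contend for the same object's monitor, and because synchronized compiles down to monitorenter/monitorexit on that object, the increments never interleave.

```java
public class MonitorDemo {
    private static int counter = 0;
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (lock) { // monitorenter ... monitorexit on `lock`
                    counter++;
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Mutual exclusion guarantees the full 200000, never a lost update.
        System.out.println(counter);
    }
}
```

Running `javap -c` on the compiled class shows the monitorenter and monitorexit instructions that this article's lock states all revolve around.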

II. Lightweight Locks

A lock can be in one of four states: unlocked, biased, lightweight, and heavyweight. As contention increases, a lock can be upgraded from a biased lock to a lightweight lock, and then to a heavyweight lock. The upgrade is one-way: a lock can only move from a lower state to a higher one and is never downgraded. JDK 1.6 enables biased and lightweight locks by default; biased locking can be disabled with -XX:-UseBiasedLocking. The lock state is stored in the object header (the Mark Word). Taking a 32-bit JDK as an example:

Lock state        | 23bit                  | 2bit          | 4bit              | 1bit (biased?) | 2bit (lock flag)
------------------+------------------------+---------------+-------------------+----------------+-----------------
Lightweight lock  | pointer to lock record in the stack (30bit)                | 00
Heavyweight lock  | pointer to mutex (heavyweight lock) (30bit)                | 10
GC mark           | empty (30bit)                                              | 11
Biased lock       | thread ID              | epoch         | generational age  | 1              | 01
Unlocked          | object hashCode (25bit)                | generational age  | 0              | 01


"Lightweight" is relative to the traditional lock implemented with an operating system mutex. The first thing to emphasize, however, is that a lightweight lock is not meant to replace the heavyweight lock; its purpose is to reduce the performance cost of the traditional heavyweight lock when there is no multi-threaded contention. Before walking through the lightweight lock implementation, understand that lightweight locks suit situations where threads execute the synchronized block alternately; if two threads access the same lock at the same time, the lightweight lock inflates into a heavyweight lock.

1. The locking process of the lightweight lock:

(1) When code enters the synchronized block, if the lock object is in the unlocked state (lock flag bits "01", biased flag "0"), the virtual machine first creates a space called a lock record in the current thread's stack frame, which stores a copy of the lock object's current Mark Word, officially called the Displaced Mark Word. The thread stack and object header at this point are shown in Figure 2.1.

(2) The Mark Word in the object header is copied into the lock record.

(3) After the copy succeeds, the virtual machine uses a CAS operation to try to update the object's Mark Word to a pointer to the lock record, and points the owner pointer in the lock record at the object's Mark Word. If the update succeeds, go to step (4); otherwise go to step (5).

(4) If the update succeeds, the thread owns the lock on this object, and the object's Mark Word lock flag bits are set to "00", meaning the object is in the lightweight-locked state. The thread stack and object header at this point are shown in Figure 2.2.

(5) If the update fails, the virtual machine first checks whether the object's Mark Word points into the current thread's stack frame. If it does, the current thread already owns the lock on this object and can enter the synchronized block directly. Otherwise, multiple threads are competing for the lock: the lightweight lock inflates into a heavyweight lock, the lock flag bits change to "10", the Mark Word stores a pointer to the heavyweight (mutex) lock, and the threads waiting for the lock enter the blocked state. Before that happens, the current thread tries to acquire the lock by spinning, that is, looping on the acquisition attempt, to keep the thread from blocking.

Figure 2.1 The stack and the state of the object before the lightweight lock CAS operation

Figure 2.2 The stack and the state of an object after a lightweight lock CAS operation

2. The unlocking process of the lightweight lock:

(1) A CAS operation attempts to replace the object's current Mark Word with the Displaced Mark Word copied into the thread's lock record.

(2) If the replacement succeeds, the entire synchronization process is completed.

(3) If the replacement fails, another thread has tried to acquire the lock (and the lock has inflated), so while releasing the lock the suspended threads must also be woken up.
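The acquire/release steps above hinge on a single CAS each way. Purely as an illustration of that idea (this toy class is my own sketch, not JVM code; the real lightweight lock CAS operates on the object header, not on a Java field), here is a minimal lock driven by one compareAndSet in each direction:

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasLockSketch {
    // null = unlocked; otherwise holds the owning thread,
    // loosely analogous to the Mark Word pointing at a lock record.
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread me = Thread.currentThread();
        // Spin until the CAS from null -> me succeeds (the "locking" CAS).
        while (!owner.compareAndSet(null, me)) {
            Thread.onSpinWait();
        }
    }

    public void unlock() {
        // Release by CAS-ing ownership back to null (the "unlocking" CAS).
        owner.compareAndSet(Thread.currentThread(), null);
    }

    public static void main(String[] args) throws InterruptedException {
        CasLockSketch lock = new CasLockSketch();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 50_000; i++) {
                lock.lock();
                try { counter[0]++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter[0]);
    }
}
```

Note how failure of the CAS simply means "someone else holds the lock"; the JVM's extra trick, which this sketch omits, is inflating to a heavyweight lock when the spinning goes on too long.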

III. Biased Locks

Biased locks are introduced to minimize the unnecessary lightweight-lock execution path when there is no multi-threaded contention, because acquiring and releasing a lightweight lock relies on multiple CAS atomic instructions, while a biased lock needs only a single CAS atomic instruction when it installs the thread ID. (Since a biased lock must be revoked once multi-threaded contention appears, the performance cost of the revocation must stay below the cost of the CAS instructions saved.) As noted above, lightweight locks improve performance when threads execute the synchronized block alternately; biased locks improve performance further when only a single thread ever executes the synchronized block.

1. The acquisition process of the biased lock:

(1) Examine the Mark Word: if the biased flag is 1 and the lock flag bits are "01", the lock is in the biasable state.

(2) If it is in the biasable state, test whether the thread ID in the Mark Word points to the current thread; if so, go to step (5), otherwise go to step (3).

(3) If the thread ID does not point to the current thread, compete for the lock with a CAS operation. If the CAS succeeds, set the thread ID in the Mark Word to the current thread's ID and go to step (5); if it fails, go to step (4).

(4) If the CAS fails to acquire the biased lock, there is contention. When the global safepoint is reached, the thread holding the biased lock is suspended, the biased lock is upgraded to a lightweight lock, and the thread that was blocked at the safepoint then continues executing the synchronization code.

(5) Execute the synchronization code.
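The pattern biased locking targets looks like the following sketch (a hypothetical demo class of my own, not from the article): a single thread repeatedly entering synchronized methods of one object. After the first acquisition, a JVM with biased locking enabled can record this thread's ID in the Mark Word and, per step (2) above, skip any CAS on every later entry; whether that actually happens is JVM- and version-dependent and not observable from this code.

```java
public class BiasedPatternDemo {
    private int count = 0;

    // Both methods lock the monitor of `this` on every call.
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    public static void main(String[] args) {
        BiasedPatternDemo d = new BiasedPatternDemo();
        // Only the main thread ever takes this lock, so the lock can
        // stay biased to it for the whole loop: one CAS total instead
        // of one per iteration (JVM-dependent optimization).
        for (int i = 0; i < 1_000_000; i++) {
            d.increment();
        }
        System.out.println(d.get());
    }
}
```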

2. The release of the biased lock:

The revocation of the biased lock was mentioned in step (4) above. A thread holding a biased lock releases it only when another thread tries to compete for the lock; the holder never releases the biased lock on its own initiative. Revoking a biased lock requires waiting for the global safepoint (a point in time at which no bytecode is executing). The thread holding the biased lock is paused first, then the JVM checks whether the lock object is still locked and restores it either to the unlocked state (flag bits "01") or to a lightweight lock (flag bits "00").

3. Conversion between heavyweight, lightweight, and biased locks

Figure 2.3 Conversion diagram of the three lock states

The figure mainly summarizes the content above; if you have followed it so far, the figure should be easy to read.

IV. Other Optimizations

1. Adaptive spinning: From the lightweight-lock acquisition process we know that when a thread's CAS operation fails while acquiring a lightweight lock, it spins in the hope of winning the lock before it inflates to a heavyweight one. The problem is that spinning consumes CPU: if the lock is never acquired, the thread just spins and wastes CPU resources. The simplest fix is to cap the number of spins, say 10 iterations, and enter the blocked state if the lock still has not been acquired. The JDK instead uses a smarter approach, adaptive spinning: put simply, if spinning on a lock recently succeeded, the JVM allows more spin iterations next time; if spinning keeps failing, it reduces or skips the spinning.
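The fixed-spin-limit idea described above can be sketched as follows (an illustrative toy of my own; the JVM's real adaptive spinning happens inside HotSpot and additionally adjusts the limit per lock, which this sketch does not do):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class BoundedSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private static final int SPIN_LIMIT = 100; // arbitrary cap for the sketch

    public void lock() {
        int spins = 0;
        while (!locked.compareAndSet(false, true)) {
            if (++spins < SPIN_LIMIT) {
                Thread.onSpinWait();   // cheap busy-wait, hoping the lock frees soon
            } else {
                Thread.yield();        // spun too long: stop burning CPU for now
                spins = 0;
            }
        }
    }

    public void unlock() {
        locked.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedSpinLock lock = new BoundedSpinLock();
        int[] c = {0};
        Runnable r = () -> {
            for (int i = 0; i < 50_000; i++) {
                lock.lock();
                try { c[0]++; } finally { lock.unlock(); }
            }
        };
        Thread a = new Thread(r), b = new Thread(r);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(c[0]);
    }
}
```

An adaptive version would raise SPIN_LIMIT after a successful spin acquisition and lower it after repeated failures, which is the policy the text attributes to the JDK.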

2. Lock coarsening: The idea of lock coarsening is easy to grasp: many consecutive lock and unlock operations on the same object are merged into one, expanding several back-to-back locks into a single lock with a larger scope. As an example:

package com.paddx.test.string;

public class StringBufferTest {
  StringBuffer stringBuffer = new StringBuffer();

  public void append() {
    stringBuffer.append("a");
    stringBuffer.append("b");
    stringBuffer.append("c");
  }
}


Each call to StringBuffer.append requires locking and unlocking. If the virtual machine detects a series of consecutive lock and unlock operations on the same object, it merges them into one larger pair of operations: the lock is taken at the first append call and released only after the last append call completes.
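What the JIT does automatically can be written out by hand, which makes the transformation concrete (this demo class is my own illustration; the coarsened method is the hand-written equivalent of what the JIT produces, relying on the fact that synchronized is reentrant):

```java
public class CoarseningDemo {
    private final StringBuffer sb = new StringBuffer();

    public void appendFine() {      // three separate monitor acquisitions on sb
        sb.append("a");
        sb.append("b");
        sb.append("c");
    }

    public void appendCoarsened() { // hand-coarsened form: one acquisition
        synchronized (sb) {         // lock once around all three appends;
            sb.append("a");         // the inner synchronized appends just
            sb.append("b");         // re-enter the monitor we already hold
            sb.append("c");
        }
    }

    public static void main(String[] args) {
        CoarseningDemo d = new CoarseningDemo();
        d.appendFine();
        d.appendCoarsened();
        // Both forms have identical effects on the buffer.
        System.out.println(d.sb.toString());
    }
}
```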

3. Lock elimination: Lock elimination removes unnecessary lock operations. Based on escape analysis, if the JIT can determine that a piece of data on the heap does not escape the current thread, the code operating on it can be considered thread-safe and needs no lock. Consider the following program:

package com.paddx.test.concurrent;

public class SynchronizedTest02 {

  public static void main(String[] args) {
    SynchronizedTest02 test02 = new SynchronizedTest02();
    // warm up
    for (int i = 0; i < 10000; i++) {
      i++;
    }
    long start = System.currentTimeMillis();
    for (int i = 0; i < 100000000; i++) {
      test02.append("abc", "def");
    }
    System.out.println("time=" + (System.currentTimeMillis() - start));
  }

  public void append(String str1, String str2) {
    StringBuffer sb = new StringBuffer();
    sb.append(str1).append(str2);
  }
}

Although StringBuffer.append is a synchronized method, the StringBuffer in this program is a local variable and does not escape the method, so the code is thread-safe and the lock can be eliminated. Here are the results of my local run:

To minimize the influence of other factors, the biased lock is disabled here (-XX:-UseBiasedLocking). The program above shows that eliminating the lock yields a fairly large performance improvement.

Note: The results may differ across JDK versions; the JDK version used here is 1.6.

V. Summary

This article focused on the JDK's optimization of synchronized using lightweight and biased locks, but these two kinds of locks are not without flaws. For example, when contention is intense they not only fail to improve efficiency but actually reduce it, because of the extra lock-upgrade process; in that case you need to disable the biased lock with -XX:-UseBiasedLocking. The locks compare as follows:

Biased lock
- Advantages: Locking and unlocking need no extra cost; there is only a nanosecond-level gap compared with executing an unsynchronized method.
- Disadvantages: If threads contend for the lock, revoking the bias adds extra cost.
- Applicable scenario: Only one thread ever accesses the synchronized block.

Lightweight lock
- Advantages: Competing threads do not block, which improves the program's response time.
- Disadvantages: A thread that never manages to acquire the lock keeps spinning and consumes CPU.
- Applicable scenario: Response time matters; the synchronized block executes very quickly.

Heavyweight lock
- Advantages: Contending threads do not spin, so no CPU is wasted on spinning.
- Disadvantages: Threads block, and response time is slow.
- Applicable scenario: Throughput matters; the synchronized block takes a long time to execute.

