Java High Concurrency (9): Lock Optimization and Notes


Summary

This series is based on a paid ("gold") course; these notes are made to study it better. This article mainly covers: 1. Ideas and methods of lock optimization 2. Lock optimization inside the virtual machine 3. A case of incorrect lock usage 4. ThreadLocal and its source code analysis

1. Lock optimization ideas and methods

Concurrency levels were introduced in the preface of [High Concurrency Java (1)].

Once a lock is used, it means blocking, so concurrency is generally lower than in lock-free approaches.

The lock optimizations discussed here are about keeping performance from degrading too much when blocking is unavoidable. However well we optimize, performance will generally still fall short of the lock-free case.

Note that tryLock on ReentrantLock, mentioned in [High Concurrency Java (5): JDK concurrency package 1], leans toward the lock-free style, because a thread does not suspend itself when its tryLock check fails.

Lock optimization ideas and methods can be summarized as follows:

    1. Reduce lock hold time
    2. Reduce lock granularity
    3. Lock separation
    4. Lock coarsening
    5. Lock elimination

1.1 Reduce lock holding time

public synchronized void syncMethod() {
    otherCode1();
    mutexMethod();
    otherCode2();
}

In the code above, a thread acquires the lock before entering the method, and other threads wait outside for the whole method.

The point of the optimization is to reduce how long other threads wait: hold the lock only around the part of the code that actually requires thread safety.

public void syncMethod() {
    otherCode1();
    synchronized (this) {
        mutexMethod();
    }
    otherCode2();
}

1.2 Reduce lock granularity

Splitting a large object (one that may be accessed by many threads) into smaller objects greatly increases parallelism and reduces lock contention. With less contention, the success rate of biased locks and lightweight locks also increases.

The most typical case of reduced lock granularity is ConcurrentHashMap, mentioned in [High Concurrency Java (5): JDK concurrency package 1].
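As a rough illustration of the idea only (not ConcurrentHashMap's actual implementation), here is a hypothetical StripedCounter that hashes each key to one of 16 independent lock stripes, so threads updating different stripes never contend with each other; all names below are made up for this sketch:

```java
// Lock striping sketch: one lock per stripe instead of one lock for the
// whole structure. Threads that hash to different stripes run in parallel.
class StripedCounter {
    private static final int STRIPES = 16;
    private final long[] counts = new long[STRIPES];
    private final Object[] locks = new Object[STRIPES];

    StripedCounter() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
        }
    }

    void increment(Object key) {
        int stripe = (key.hashCode() & 0x7fffffff) % STRIPES;
        synchronized (locks[stripe]) {  // lock only one stripe, not the whole table
            counts[stripe]++;
        }
    }

    long total() {
        long sum = 0;
        for (int i = 0; i < STRIPES; i++) {
            synchronized (locks[i]) {   // read each stripe under its own lock
                sum += counts[i];
            }
        }
        return sum;
    }
}
```

The point is the same as in ConcurrentHashMap's segments: smaller lock granularity means less contention, at the cost of extra work (here, total() must visit every stripe) for operations that span the whole structure.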

1.3 Lock separation

The most common lock separation is the read-write lock ReadWriteLock, which splits one lock by function into a read lock and a write lock: reads do not exclude each other, while read-write and write-write are mutually exclusive. This preserves thread safety while improving performance; see [High Concurrency Java (5): JDK concurrency package 1].

The read-write separation idea can be extended: as long as two operations do not affect each other, their locks can be separated.

LinkedBlockingQueue, for example, takes elements from the head and puts elements at the tail, so the two operations can use separate locks. A similar idea appears in work stealing in ForkJoinPool, mentioned in [High Concurrency Java (6): JDK concurrency package 2].
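A minimal sketch of lock separation using the standard java.util.concurrent.locks.ReentrantReadWriteLock (the ReadWriteCache class name and its methods are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Lock separation by function: many readers may hold the read lock at once;
// the write lock is exclusive against both readers and other writers.
class ReadWriteCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    String get(String key) {
        rwl.readLock().lock();      // shared: concurrent reads do not block each other
        try {
            return map.get(key);
        } finally {
            rwl.readLock().unlock();
        }
    }

    void put(String key, String value) {
        rwl.writeLock().lock();     // exclusive: blocks all readers and writers
        try {
            map.put(key, value);
        } finally {
            rwl.writeLock().unlock();
        }
    }
}
```

In read-heavy workloads this performs much better than a single mutual-exclusion lock, because only writes serialize the threads.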

1.4 Lock Coarsening

In general, to ensure effective concurrency among threads, each thread should hold a lock for as short a time as possible and release it immediately after using the shared resource, so that other threads waiting on the lock can obtain it and proceed as soon as possible. But everything has its limits: if the same lock is continuously requested, synchronized on, and released, that churn itself consumes valuable system resources and works against performance.

As an example:

public void demoMethod() {
    synchronized (lock) {
        // do sth.
    }
    // do other work that needs no synchronization, but finishes quickly
    synchronized (lock) {
        // do sth.
    }
}

In this situation, following the idea of lock coarsening, the two blocks should be merged:

public void demoMethod() {
    // consolidated into one lock request
    synchronized (lock) {
        // do sth.
        // do other work that needs no synchronization, but finishes quickly
    }
}

Of course, this has a precondition: the non-synchronized work in the middle must complete quickly.

An extreme example:

for (int i = 0; i < circle; i++) {
    synchronized (lock) {
        // ...
    }
}

Here the lock is acquired and released on every loop iteration. Although the JDK optimizes such code, it might as well be written directly as:

synchronized (lock) {
    for (int i = 0; i < circle; i++) {
        // ...
    }
}

Of course, if there is a requirement, say the loop is long and other threads must not wait too long, then only the first, per-iteration form will do. Without such a requirement, it is better to write the coarsened form directly.

1.5 Lock elimination

Lock elimination happens at the compiler level.

During just-in-time compilation, if the compiler finds objects that cannot possibly be shared, it can eliminate the lock operations on them.

You may find this strange: if an object cannot be accessed by multiple threads, why lock it at all? Why not just write the code without locks?

The reason is that many of these locks are not written by the application programmer; some live inside JDK implementations, such as Vector and StringBuffer, many of whose methods are synchronized. When we use these classes in situations with no thread-safety requirement, the compiler eliminates the locks to improve performance once certain conditions are met.

For example:

public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    for (int i = 0; i < 2000000; i++) {
        createStringBuffer("JVM", "diagnosis");
    }
    long bufferCost = System.currentTimeMillis() - start;
    System.out.println("createStringBuffer: " + bufferCost + " ms");
}

public static String createStringBuffer(String s1, String s2) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1);
    sb.append(s2);
    return sb.toString();
}

The StringBuffer.append calls in the code above are synchronized operations, but the StringBuffer is a local variable, and the method does not return the StringBuffer itself, so it cannot possibly be accessed by multiple threads.
The synchronization inside StringBuffer is therefore meaningless here.

Lock elimination is enabled via JVM parameters, and it requires server mode:

-server -XX:+DoEscapeAnalysis -XX:+EliminateLocks

Escape analysis must also be enabled. Escape analysis determines whether a variable can escape its scope.
In the StringBuffer example above, createStringBuffer returns a String, so the local StringBuffer is never used anywhere else. If createStringBuffer is changed to:

public static StringBuffer createStringBuffer(String s1, String s2) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1);
    sb.append(s2);
    return sb;
}

then the StringBuffer is returned and may well be used elsewhere (for example, main could put the returned result into a map), and the JVM's escape analysis can determine that the local StringBuffer escapes its scope.

So, based on escape analysis, the JVM can judge: if a local StringBuffer does not escape its scope, it cannot be accessed by multiple threads, and the redundant locks can be removed to improve performance.

With JVM parameters:

-server -XX:+DoEscapeAnalysis -XX:+EliminateLocks

Output:

createStringBuffer: 302 ms

With JVM parameters:

-server -XX:+DoEscapeAnalysis -XX:-EliminateLocks

Output:

createStringBuffer: 660 ms

Clearly, the effect of lock elimination is significant.

2. Lock optimization inside the virtual machine

First, a word about the object header: every object has one.

The Mark Word, the object header's mark, is 32 bits wide on a 32-bit system.

It describes the object's hash code, lock information, garbage collection mark, and age.

It can also hold a pointer to a lock record, a pointer to a monitor, the biased-lock thread ID, and so on.

In short, the object header stores some of the object's system-level information.

2.1 Biased lock

The "bias" in biased locking means the lock leans toward the thread that currently holds it.

Most of the time there is no contention (a synchronized block rarely sees multiple threads competing for the lock at the same moment), so biasing can improve performance: in the uncontended case, when the thread that previously acquired the lock requests it again, it only needs to check whether the lock is biased toward itself. If so, it enters the synchronized block directly without acquiring the lock again.

A biased lock is implemented by setting the object header's mark to "biased" and writing the thread ID into the object header mark.

Biased mode ends as soon as another thread requests the same lock.

The JVM enables biased locking by default: -XX:+UseBiasedLocking

Under heavy contention, biased locking increases the system's burden (an extra bias check is added on every acquisition).

An example of biased locking:

package test;

import java.util.List;
import java.util.Vector;

public class Test {
    public static List<Integer> numberList = new Vector<Integer>();

    public static void main(String[] args) throws InterruptedException {
        long begin = System.currentTimeMillis();
        int count = 0;
        int startNum = 0;
        while (count < 10000000) {
            numberList.add(startNum);
            startNum += 2;
            count++;
        }
        long end = System.currentTimeMillis();
        System.out.println(end - begin);
    }
}

Vector is a thread-safe class that uses locking internally; every add requests the lock. The code above has only one thread, main, which repeatedly adds and thus repeatedly requests the same lock.
Use the following JVM parameters to enable biased locking:

-XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0

BiasedLockingStartupDelay sets how many seconds after system startup biased locking is enabled. The default is 4 seconds, because contention is usually intense right after startup, when enabling biased locking would degrade performance.
Since this test is about biased-lock performance, the delay is set to 0.

The output is now 9209

Now turn off biased locking:

-XX:-UseBiasedLocking

Output: 9627

In general, in the absence of contention, enabling biased locking improves performance by roughly 5%.

2.2 Lightweight lock

Java multithreading safety is built on the lock mechanism, but lock performance is often unsatisfactory.

The reason is that monitorenter and monitorexit, the two bytecodes that control multithreaded synchronization, are implemented by the JVM on top of operating system mutexes.

Mutual exclusion is a resource-heavy operation: it suspends a thread that, shortly afterwards, must be scheduled back in.

To optimize Java's lock mechanism, the concept of lightweight locks was introduced in Java 6.

Lightweight locking is meant to reduce how often threads reach OS-level mutual exclusion, not to replace it.

It uses the CPU primitive compare-and-swap (CAS, the cmpxchg instruction on x86) to attempt a remedy before entering a mutex.

If the biased lock fails, the system falls back to the lightweight lock. Its purpose is to avoid OS-level mutual exclusion as much as possible, because that performs poorly. The JVM is itself an application, and it prefers to solve thread synchronization at the application level.

In sum, a lightweight lock is a fast locking path: before entering a mutex, it uses a CAS operation to try to acquire the lock, avoiding OS-level mutual exclusion where possible and improving performance.

When the biased lock fails, the lightweight-lock steps are:

1. Save the mark word of the object header into the lock record (the object here is the locked object; in synchronized (this) {}, it is this).

lock->set_displaced_header(mark);

2. Use CAS to set the object header to a pointer to the lock record (which lives in the thread's stack space):

if (mark == (markOop) Atomic::cmpxchg_ptr(lock, obj()->mark_addr(), mark)) {
    TEVENT(slow_enter: release stacklock);
    return;
}

Since the lock record is located in the thread's stack, determining whether a thread holds the lock only requires checking whether the object header points into that thread's stack space.
If the lightweight lock fails, it means there is contention, and the lock is inflated to a heavyweight lock (a regular lock), i.e. OS-level synchronization. In the absence of contention, lightweight locks avoid the performance loss that traditional locks incur from OS mutexes. When contention is intense (the lightweight lock always fails), lightweight locks do a lot of extra work on top of the mutex and degrade performance.

2.3 Spin Lock

When contention exists and the lightweight-lock attempt fails, the lock may escalate directly to a heavyweight lock backed by an OS mutex, or it may first try a spin lock.

If a thread can obtain the lock quickly, then instead of suspending it at the OS layer, we can let it run a few empty loops (spin) and keep retrying the acquisition (like tryLock). The number of spins is bounded; once the limit is reached, the lock still escalates to a heavyweight lock. So when each thread holds the lock only briefly, spinning avoids suspending threads at the OS layer.

-XX:+UseSpinning enables spinning in JDK 1.6.

JDK 1.7 removed this parameter and replaced it with a built-in implementation.

If the synchronized block is long, spins fail and system performance degrades. If the block is short, spins succeed, the cost of suspending and switching threads is saved, and performance improves.
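For illustration only, here is a user-level spin lock built on AtomicBoolean and CAS. It mimics the idea (busy-wait instead of suspending the thread) but is not the JVM's internal mechanism, and the SpinLock class is made up for this sketch:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative spin lock: lock() busy-loops on a CAS until it flips the flag
// from false to true; no thread is ever suspended at the OS layer. Only
// sensible when critical sections are very short.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // spin: repeatedly try to acquire with compare-and-set
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();  // CPU hint (JDK 9+); the loop is the "spin"
        }
    }

    void unlock() {
        locked.set(false);
    }
}
```

Note this toy version spins forever rather than escalating to a heavyweight lock after a bounded number of attempts, which is what the JVM does.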

2.4 Summary of biased locks, lightweight locks, and spin locks

None of the locks above is a Java-language-level optimization; they are all built into the JVM.

Biased locking first tries to avoid the cost of one thread repeatedly acquiring and releasing the same lock: if the same thread requests the lock again, the bias lets it enter the synchronized block directly, without acquiring the lock again.

Lightweight locks and spin locks both aim to avoid invoking the OS-level mutex directly, because suspending a thread is a resource-intensive operation.

To avoid the heavyweight lock (the OS-level mutex), the JVM first tries a lightweight lock, which attempts to acquire via a CAS operation; failure indicates contention. But if the lock is likely to be obtained soon, it tries a spin lock, giving the thread a few empty loops, each retrying the acquisition. If spinning also fails, the lock can only escalate to a heavyweight lock.

Clearly, biased locks, lightweight locks, and spin locks are all optimistic locks.

3. A case of incorrect use of locks

public class IntegerLock {
    static Integer i = 0;

    public static class AddThread extends Thread {
        public void run() {
            for (int k = 0; k < 100000; k++) {
                synchronized (i) {
                    i++;
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AddThread t1 = new AddThread();
        AddThread t2 = new AddThread();
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);
    }
}

This is a very elementary error, discussed in [High Concurrency Java (7): concurrent design patterns]: Integer is immutable, so every ++ creates a new Integer via autoboxing and assigns it to i. The two threads therefore end up competing for different lock objects, and the code is not thread safe.
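A straightforward fix is to synchronize on a dedicated final lock object that never changes; the version below (class and field names chosen for illustration) then reliably prints 200000:

```java
// Fixed version of the example: the lock is a final Object that is never
// reassigned, so both threads always synchronize on the same monitor.
class IntegerLockFixed {
    static int i = 0;
    static final Object LOCK = new Object();  // final: every thread sees the same lock

    public static class AddThread extends Thread {
        public void run() {
            for (int k = 0; k < 100000; k++) {
                synchronized (LOCK) {
                    i++;
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AddThread t1 = new AddThread();
        AddThread t2 = new AddThread();
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);  // 200000
    }
}
```

Alternatively, java.util.concurrent.atomic.AtomicInteger makes the increment atomic with no lock at all.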

4. ThreadLocal and its source code analysis

Mentioning ThreadLocal here may seem slightly off topic, but ThreadLocal is a way to replace locks, so it is worth covering.

The basic idea: with multiple threads, conflicting data access requires locking; with ThreadLocal, each thread is given its own object instance. Different threads only ever access their own objects, never each other's, so no lock is needed.

package test;

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Test {
    private static final SimpleDateFormat sdf = new SimpleDateFormat(
            "yyyy-MM-dd HH:mm:ss");

    public static class ParseDate implements Runnable {
        int i = 0;

        public ParseDate(int i) {
            this.i = i;
        }

        public void run() {
            try {
                Date t = sdf.parse("2016-02-16 17:00:" + i % 60);
                System.out.println(i + ":" + t);
            } catch (ParseException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        ExecutorService es = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 1000; i++) {
            es.execute(new ParseDate(i));
        }
    }
}

Because SimpleDateFormat is not thread-safe, the code above uses it incorrectly. The simplest fix would be to wrap it in a class with synchronized (similar to Collections.synchronizedMap), but under high concurrency the contention on synchronized serializes the threads and concurrency suffers.

Using ThreadLocal to encapsulate SimpleDateFormat solves this problem:

package test;

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Test {
    static ThreadLocal<SimpleDateFormat> tl = new ThreadLocal<SimpleDateFormat>();

    public static class ParseDate implements Runnable {
        int i = 0;

        public ParseDate(int i) {
            this.i = i;
        }

        public void run() {
            try {
                if (tl.get() == null) {
                    tl.set(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));
                }
                Date t = tl.get().parse("2016-02-16 17:00:" + i % 60);
                System.out.println(i + ":" + t);
            } catch (ParseException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        ExecutorService es = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 1000; i++) {
            es.execute(new ParseDate(i));
        }
    }
}
At run time, each thread checks whether the current thread already has a SimpleDateFormat object:

if (tl.get() == null)

If not, a new SimpleDateFormat is bound to the current thread:

tl.set(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

Then the current thread's own SimpleDateFormat does the parsing:

tl.get().parse("2016-02-16 17:00:" + i % 60);

In the first version there was only one SimpleDateFormat; with ThreadLocal, each thread has its own new SimpleDateFormat.

Note that setting one shared SimpleDateFormat into every thread's ThreadLocal is useless; each thread must be given its own new SimpleDateFormat.
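As an aside, on JDK 8 and later the null-check-then-set pattern can be replaced with ThreadLocal.withInitial, which lazily creates one SimpleDateFormat per thread on first get(); the wrapper class below is illustrative:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// ThreadLocal.withInitial (JDK 8+) runs the supplier once per thread,
// the first time that thread calls get(), so each thread transparently
// receives its own SimpleDateFormat instance.
class ThreadLocalFormat {
    static final ThreadLocal<SimpleDateFormat> TL =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

    static String format(Date d) {
        return TL.get().format(d);  // no explicit null check needed
    }
}
```

This removes the boilerplate from run() while keeping the same one-instance-per-thread guarantee.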

Hibernate contains a typical application of ThreadLocal.

Now a look at ThreadLocal's source implementation.

First, the Thread class has a member variable:

ThreadLocal.ThreadLocalMap threadLocals = null;

This map is the key to ThreadLocal's implementation.

public void set(T value) {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);
    if (map != null)
        map.set(this, value);
    else
        createMap(t, value);
}
set and get store and retrieve the value keyed by the ThreadLocal instance itself.

The ThreadLocalMap implementation here is similar to HashMap, but it handles hash collisions differently.

When a hash collision occurs, ThreadLocalMap does not chain entries into a linked list as HashMap does; instead it increments the index and places the entry at the next slot, i.e. open addressing with linear probing.
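A simplified sketch of that collision strategy (open addressing with linear probing); this is illustrative only, not ThreadLocalMap's actual code, and it omits resizing and stale-entry cleanup:

```java
// Linear probing: on a collision, step to the next index (wrapping around)
// instead of chaining colliding entries into a linked list.
class LinearProbeMap {
    private final Object[] keys = new Object[16];
    private final Object[] values = new Object[16];

    void set(Object key, Object value) {
        int i = (key.hashCode() & 0x7fffffff) % keys.length;
        while (keys[i] != null && !keys[i].equals(key)) {
            i = (i + 1) % keys.length;   // probe the next slot on collision
        }
        keys[i] = key;
        values[i] = value;
    }

    Object get(Object key) {
        int i = (key.hashCode() & 0x7fffffff) % keys.length;
        while (keys[i] != null) {
            if (keys[i].equals(key)) {
                return values[i];
            }
            i = (i + 1) % keys.length;   // keep probing until an empty slot
        }
        return null;
    }
}
```

With Integer keys 1 and 17 and a table of size 16, both hash to slot 1, so the second entry lands in slot 2; lookups follow the same probe sequence.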
