Exploring Concurrent Programming (6) -- Java Multithreading Performance Optimization

The point of multithreading is to improve performance. Used improperly, however, it not only fails to deliver a meaningful speedup but also consumes extra resources. The following issues can undermine the performance of a multithreaded program:

  • Deadlock
  • Excessive serialization
  • Excessive lock contention
  • Context switching
  • Memory Synchronization

Each of these performance risks is addressed in turn below.

Deadlock

The causes and consequences of deadlock are familiar from any operating-systems course, so the theory is not repeated here. The following code shows how a deadlock can arise:

public class LeftRightDeadlock {
    private final Object left = new Object();
    private final Object right = new Object();

    public void leftRight() {
        synchronized (left) {
            synchronized (right) {
                doSomething();
            }
        }
    }

    public void rightLeft() {
        synchronized (right) {
            synchronized (left) {
                doSomethingElse();
            }
        }
    }
}

Methods to prevent and handle deadlocks:

1) Avoid acquiring other locks before releasing the one you hold

In general, keep synchronization fine-grained: acquire the lock only when shared resources actually need protection and release it as soon as possible. This effectively reduces the number of calls made to other synchronized methods while a lock is still held (see the sketch below).
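
Below is a minimal sketch of this idea (the class and method names are hypothetical, not from the original article): the lock is held only while the shared counter is updated, and any work that might acquire another lock happens after it is released.

public class Tracker {
    private final Object lock = new Object();
    private int count;

    public void record() {
        int snapshot;
        synchronized (lock) {          // hold the lock only to update shared state
            count++;
            snapshot = count;
        }
        notifyListener(snapshot);      // work that may take other locks runs unlocked
    }

    private void notifyListener(int value) { /* may synchronize internally */ }
}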

2) Acquire locks in a fixed order

If nested lock acquisition cannot be avoided, define a lock-ordering policy: plan in advance which locks exist and have every thread request them in the same order. The example above imposes no such order, which is exactly why it contains a potential deadlock. One common ordering technique is sketched below.
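
This sketch (an illustration, not code from the original article) imposes a global order on two locks by comparing System.identityHashCode, so that both call paths always lock in the same sequence; an extra tie-breaking lock covers the rare case of equal hash codes.

public class OrderedTransfer {
    private static final Object tieLock = new Object();

    public void withBothLocks(Object a, Object b, Runnable action) {
        int aHash = System.identityHashCode(a);
        int bHash = System.identityHashCode(b);

        if (aHash < bHash) {
            synchronized (a) { synchronized (b) { action.run(); } }
        } else if (aHash > bHash) {
            synchronized (b) { synchronized (a) { action.run(); } }
        } else {
            synchronized (tieLock) {   // identical hash codes: serialize on a third lock
                synchronized (a) { synchronized (b) { action.run(); } }
            }
        }
    }
}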

3) Try timed lock acquisition

Java 5 provides more flexible lock utilities (java.util.concurrent.locks) for explicitly acquiring and releasing locks. When requesting a lock you can specify a timeout; if the lock is not obtained within that time, the attempt is abandoned rather than blocking indefinitely. Sample code:

public boolean trySendOnSharedLine(String message, long timeout, TimeUnit unit)
        throws InterruptedException {
    long nanosToLock = unit.toNanos(timeout) - estimatedNanosToSend(message);
    if (!lock.tryLock(nanosToLock, TimeUnit.NANOSECONDS))
        return false;
    try {
        return sendOnSharedLine(message);
    } finally {
        lock.unlock();
    }
}

This effectively breaks the deadlock conditions.

4) Detect deadlocks

The JVM can identify deadlocks through a thread dump. You can trigger a thread dump with an operating-system signal (for example, kill -3 / SIGQUIT on Linux) or with a tool such as jstack; the dump reports which threads are deadlocked and which locks they are waiting for.
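
Deadlocks can also be detected programmatically with the standard java.lang.management API; the following is a minimal sketch (not from the original article).

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockDetector {
    public static void report() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long[] ids = threads.findDeadlockedThreads();   // null if no deadlock exists
        if (ids != null) {
            for (ThreadInfo info : threads.getThreadInfo(ids)) {
                System.out.println("Deadlocked: " + info.getThreadName()
                        + ", waiting on " + info.getLockName());
            }
        }
    }
}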

Excessive serialization

Multithreading is meant to do work in parallel, but dependencies force parts of that work to be serialized, and this limits the scalability of the system: adding CPUs and threads does not make performance grow linearly. Amdahl's law describes the limit:

    Speedup <= 1 / (F + (1 - F) / N)

where F is the fraction of the work that must run serially and N is the number of processors. The formula shows that scalability is maximized only by minimizing the serialized fraction. The key to reducing serialization is reducing lock contention: when many parallel tasks all queue up to acquire the same lock, they are effectively serialized.
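
As a quick worked example (not from the original article): if 10% of the work is serial (F = 0.1), then with N = 10 processors the speedup is at most 1 / (0.1 + 0.9 / 10) ≈ 5.3, and no matter how many processors are added it can never exceed 1 / F = 10.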

Excessive lock contention

The harm of excessive lock contention is self-evident, so let's look at some ways to reduce it.

1) Narrow the lock scope

As mentioned above, keep the region protected by the lock as small as possible and get out of it quickly. Rather than marking a whole method synchronized, synchronize only the statements that genuinely need thread-safety protection, as in the sketch below.
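
A minimal sketch of narrowing the lock scope (the class and helper names are hypothetical, not from the original article): the coarse version serializes an expensive lookup along with the map access, while the fine version holds the lock only around the shared map.

import java.util.HashMap;
import java.util.Map;

public class UserCache {
    private final Map<String, String> cache = new HashMap<>();

    // Too coarse: the expensive lookup runs while holding the object lock.
    public synchronized String findCoarse(String id) {
        if (!cache.containsKey(id))
            cache.put(id, expensiveLookup(id));
        return cache.get(id);
    }

    // Finer: only the map accesses are synchronized; the lookup runs unlocked.
    public String findFine(String id) {
        synchronized (cache) {
            String cached = cache.get(id);
            if (cached != null)
                return cached;
        }
        String value = expensiveLookup(id);   // no lock held during the slow part
        synchronized (cache) {
            cache.putIfAbsent(id, value);
            return cache.get(id);
        }
    }

    private String expensiveLookup(String id) { return "user-" + id; }
}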

2) Reduce lock granularity

Java 5 provides explicit locks that protect shared variables more flexibly. The synchronized keyword, when used on a method, implicitly takes the entire object as the lock. Such a coarse lock is usually unnecessary and forces all synchronized methods of the class to execute serially. Instead, you can use the shared variable being protected as the lock, or adopt an even finer-grained policy such as lock striping, so that execution is serialized only where it truly must be. For example:

public class StripedMap {
    // Synchronization policy: buckets[n] guarded by locks[n % N_LOCKS]
    private static final int N_LOCKS = 16;
    private final Node[] buckets;
    private final Object[] locks;

    private static class Node { Node next; Object key; Object value; }

    public StripedMap(int numBuckets) {
        buckets = new Node[numBuckets];
        locks = new Object[N_LOCKS];
        for (int i = 0; i < N_LOCKS; i++)
            locks[i] = new Object();
    }

    private final int hash(Object key) {
        return Math.abs(key.hashCode() % buckets.length);
    }

    public Object get(Object key) {
        int hash = hash(key);
        synchronized (locks[hash % N_LOCKS]) {
            for (Node m = buckets[hash]; m != null; m = m.next)
                if (m.key.equals(key))
                    return m.value;
        }
        return null;
    }

    public void clear() {
        for (int i = 0; i < buckets.length; i++) {
            synchronized (locks[i % N_LOCKS]) {
                buckets[i] = null;
            }
        }
    }
    // ...
}

In the preceding example, the hash of the key selects the lock that guards the corresponding bucket, so only accesses that map to the same lock stripe are serialized, instead of serializing every operation on the whole object the way Hashtable does.

3) Reduce dependency on shared resources

Shared resources are the source of lock contention, so multithreaded code should depend on shared state as little as possible. Object pooling, for example, deserves careful scrutiny: modern JVMs make creating new objects very cheap, while a shared pool has to be guarded by a lock, and that lock contention reduces concurrency. One way to cut down on sharing, thread confinement, is sketched below.
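
This sketch (an illustration, not from the original article) uses ThreadLocal to give each thread its own formatter instead of sharing one pooled instance behind a lock, removing both the shared resource and the contention on it.

import java.text.SimpleDateFormat;
import java.util.Date;

public class Timestamps {
    // One formatter per thread; no synchronization is needed when using it.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            new ThreadLocal<SimpleDateFormat>() {
                @Override protected SimpleDateFormat initialValue() {
                    return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
                }
            };

    public static String now() {
        return FORMAT.get().format(new Date());
    }
}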

4) Replace exclusive locks with read/write locks

Java 5 provides a read/write lock (ReadWriteLock) that allows concurrent reads while serializing read-write and write-write access. Because most workloads are dominated by reads, this further improves concurrency: readers no longer need to take turns. The following example shows how ReadWriteLock can be used:

import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteMap<K, V> {
    private final Map<K, V> map;
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Lock r = lock.readLock();
    private final Lock w = lock.writeLock();

    public ReadWriteMap(Map<K, V> map) {
        this.map = map;
    }

    public V put(K key, V value) {
        w.lock();
        try {
            return map.put(key, value);
        } finally {
            w.unlock();
        }
    }
    // Do the same for remove(), putAll(), clear()

    public V get(Object key) {
        r.lock();
        try {
            return map.get(key);
        } finally {
            r.unlock();
        }
    }
    // Do the same for other read-only Map methods
}

Context switching

When there are many threads, the cost of thread context switching in the operating system can no longer be ignored. Building a high-performance web server that handles large numbers of persistent connections makes the cost of process switching plainly visible; threads are lighter than processes, but the principle is the same. One common mitigation, bounding the number of threads instead of creating one per task, is sketched below.
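
A minimal sketch of a bounded worker pool (an illustration, not from the original article): the number of threads is tied to the number of available cores, which keeps context-switching overhead under control.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BoundedWorkers {
    public static void main(String[] args) {
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < 1000; i++) {
            final int task = i;
            pool.submit(() -> System.out.println("task " + task
                    + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();   // stop accepting new work; queued tasks still finish
    }
}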

Memory Synchronization

Using synchronized, volatile, or explicit locks introduces additional memory synchronization (memory barriers) to guarantee visibility between threads. These barriers also prevent the JVM from applying some of its usual optimizations, such as caching values in registers or reordering instructions, so reserve these constructs for the data that genuinely needs them.
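
A minimal sketch of the trade-off (an illustration, not from the original article): a volatile flag buys cross-thread visibility, but every access to it involves memory synchronization, so volatile and synchronized should be limited to fields that really need visibility guarantees.

public class Worker implements Runnable {
    private volatile boolean stopped;   // written by one thread, read by another

    public void stop() {
        stopped = true;                 // the write becomes visible to run()
    }

    @Override
    public void run() {
        while (!stopped) {
            // do a unit of work; each read of 'stopped' pays a memory-synchronization cost
        }
    }
}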
