[Java Performance] JVM Thread Optimization

Adjust the thread stack space

When memory is scarce, you can adjust the memory used by threads. Each thread has a stack that records its call-stack information. The default thread stack size is determined by the OS and by whether the JVM is 32-bit or 64-bit:

OS            32-bit    64-bit
Linux         320 KB    1 MB
Mac OS        N/A       1 MB
Solaris       512 KB    1 MB
Solaris X86   320 KB    1 MB
Windows       320 KB    1 MB

If the stack size is set too small, a StackOverflowError may be thrown when the call stack grows deep.

On a 64-bit JVM, you do not need to modify this value unless memory is genuinely tight. On a 32-bit JVM, you can reduce this value to free up more space for the heap.

The flag for changing the thread stack size is -Xss<size>, for example: -Xss256k
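
As a minimal sketch (the class name StackDepthDemo and its output format are illustrative, not part of any JDK tool), the following program recurses until the stack overflows; running it with different -Xss values shows how the stack size limits the reachable call depth:

    // StackDepthDemo.java -- observe the effect of -Xss on call depth.
    public class StackDepthDemo {
        private static int depth = 0;

        private static void recurse() {
            depth++;
            recurse();          // each frame consumes stack space
        }

        public static void main(String[] args) {
            try {
                recurse();
            } catch (StackOverflowError e) {
                System.out.println("Stack overflowed at depth " + depth);
            }
        }
    }

    // Example runs (the reported depth varies by platform and JVM):
    //   java -Xss256k StackDepthDemo
    //   java -Xss2m   StackDepthDemo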

Biased Locking

When a lock is contended by multiple threads, the JVM and the OS get to choose which thread the lock is granted to. A fair policy can be used, handing the lock around among the waiting threads, or an unfair (biased) policy can be used, for example granting the lock to the thread that held it most recently.

The reason for granting the lock to the thread that last held it is temporal locality: the processor is likely to still have data related to that thread's work in its caches, so when the thread runs again it needs less time to re-establish its context. Of course, biased locking has to record some bookkeeping data of its own, so in some cases it hurts performance.

For example, biased locking often degrades performance when locks are used from a thread pool, where many different threads contend for the same locks. If an application does not benefit from a biased allocation policy, biased locking can be disabled with -XX:-UseBiasedLocking (it is enabled by default); in such cases disabling it can improve performance.
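
As an illustrative micro-benchmark (the class BiasedLockDemo and the iteration count are assumptions for demonstration, and a simple timing loop like this is easily skewed), the following single-threaded workload repeatedly acquires the same uncontended monitor, which is exactly the pattern biased locking is meant to speed up; on JVM versions that still support the flag, it can be run with and without -XX:-UseBiasedLocking to compare the two policies:

    // BiasedLockDemo.java -- illustrative comparison workload, not a rigorous benchmark.
    public class BiasedLockDemo {
        private static final Object LOCK = new Object();
        private static long counter = 0;

        public static void main(String[] args) {
            long start = System.nanoTime();
            // A single thread acquiring the same uncontended lock many times:
            // the scenario that biased locking optimizes.
            for (int i = 0; i < 100_000_000; i++) {
                synchronized (LOCK) {
                    counter++;
                }
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("counter=" + counter + " elapsed=" + elapsedMs + " ms");
        }
    }

    // Compare (on a JVM where the flag still exists):
    //   java BiasedLockDemo
    //   java -XX:-UseBiasedLocking BiasedLockDemo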

Lock Spinning

For a thread that fails to acquire a lock, the JVM has two options:

  • Let the thread enter a busy loop: after executing a few instructions, it checks again whether the required lock has become available.
  • Let the thread enter a queue and notify it when the required lock becomes available; in the meantime the CPU can be used by other threads.

If a contended lock is only held for a short time, the first approach, the busy loop (also known as thread spinning), is much faster than putting the thread into a queue. Conversely, when a contended lock is held for a long time, the second approach is better because it makes more effective use of the CPU.

The JVM chooses between the two sensibly: it first lets the thread spin for a short while, and if the required lock still has not become available it puts the thread into a queue to wait, giving up the CPU to other threads.
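
This spin-then-queue idea can be sketched at the API level with java.util.concurrent; the class name, SPIN_LIMIT constant, and helper method below are illustrative assumptions, not how HotSpot implements its internal adaptive spinning:

    // SpinThenBlock.java -- a user-level analogue of the spin-then-queue strategy.
    import java.util.concurrent.locks.ReentrantLock;

    public class SpinThenBlock {
        private static final int SPIN_LIMIT = 1000;

        static void withLock(ReentrantLock lock, Runnable criticalSection) {
            // Phase 1: spin briefly, hoping the lock is released very soon.
            for (int i = 0; i < SPIN_LIMIT; i++) {
                if (lock.tryLock()) {
                    try {
                        criticalSection.run();
                    } finally {
                        lock.unlock();
                    }
                    return;
                }
                Thread.onSpinWait();   // hint to the CPU that we are busy-waiting (Java 9+)
            }
            // Phase 2: give up spinning and block; the lock queues the thread
            // and the CPU is free for other threads.
            lock.lock();
            try {
                criticalSection.run();
            } finally {
                lock.unlock();
            }
        }

        public static void main(String[] args) {
            ReentrantLock lock = new ReentrantLock();
            withLock(lock, () -> System.out.println("in critical section"));
        }
    }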

Thread Priority

In the Java API, each thread can be given a priority. The OS takes this value into account, but note that it is only a hint and is not always followed. The OS computes a "current" priority for each runnable thread; the priority you set is one input to that computation, but only one of many. The most important factor is how long the thread has been running, which the scheduler considers so that every thread gets a chance to run. Therefore, no matter how low a thread's priority is set, it will always get a chance to run.
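
As a reference point for the API mentioned above, here is a minimal sketch (the thread names, loop, and printed output are illustrative); as the text explains, the values passed to setPriority are only hints to the OS scheduler:

    // PriorityHint.java -- demonstrates the Thread priority API.
    public class PriorityHint {
        public static void main(String[] args) {
            Runnable work = () -> {
                for (int i = 0; i < 3; i++) {
                    System.out.println(Thread.currentThread().getName()
                            + " (priority " + Thread.currentThread().getPriority() + ")");
                }
            };

            Thread low  = new Thread(work, "low");
            Thread high = new Thread(work, "high");
            low.setPriority(Thread.MIN_PRIORITY);   // 1
            high.setPriority(Thread.MAX_PRIORITY);  // 10

            low.start();
            high.start();
            // The interleaving of the output is up to the OS; the "high" thread
            // is not guaranteed to run first or more often.
        }
    }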

In addition, the priority you set carries a different weight on different operating systems. On Unix-based systems, a thread's recent execution time dominates the calculation of its current priority, so the set priority is hardly consulted at all. On Windows, the set priority carries somewhat more weight.

Therefore, no matter which OS the application runs on, its performance must not depend on thread priorities. If some tasks really do have higher priority than others, this must be expressed in the application's logic, not by setting thread priorities.

One way to do this is to assign tasks to different thread pools and then size those pools accordingly.
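
A minimal sketch of that approach, assuming hypothetical task names and pool sizes, is to give latency-sensitive work most of the available threads and confine background work to a small pool:

    // PoolSizing.java -- expressing task importance through pool sizes
    // instead of thread priorities.
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolSizing {
        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();

            // Important tasks get most of the available parallelism ...
            ExecutorService importantPool  = Executors.newFixedThreadPool(Math.max(1, cores - 1));
            // ... while background tasks are confined to a single thread.
            ExecutorService backgroundPool = Executors.newFixedThreadPool(1);

            importantPool.submit(() -> System.out.println("latency-sensitive work"));
            backgroundPool.submit(() -> System.out.println("batch/cleanup work"));

            importantPool.shutdown();
            backgroundPool.shutdown();
        }
    }

Sizing the pools, rather than tweaking priorities, keeps the scheduling decision inside the application, where it behaves the same on every OS.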
