Using the synchronized Keyword


Using the synchronized keyword is the simplest way to solve concurrency problems: we only need to apply it to the code blocks or methods that access shared state. The virtual machine acquires and releases the lock automatically, and blocks any thread that fails to get the lock on the corresponding blocking queue.

Basic use

In the previous article, we introduced the basic concepts of threading: the benefits of multithreading, such as better CPU utilization and more responsive interaction, but also the problems it raises, such as race conditions and memory visibility issues.

We cite one of the examples from the previous article:

100 threads each add one to count. Because the increment operation is not atomic, the interleaved access from multiple threads makes the final value of count indeterminate, and the expected result is not reliably obtained.
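The original snippet is not reproduced on this page; a minimal sketch of the problem might look like this (the class and method names here are my own, not the original article's):

```java
public class RacyCounter {
    private int count = 0;

    public void addCount() {
        count++;   // read-modify-write: not atomic, so concurrent calls can lose updates
    }

    public static int run() throws InterruptedException {
        RacyCounter c = new RacyCounter();
        Thread[] threads = new Thread[100];
        for (int i = 0; i < 100; i++) {
            threads[i] = new Thread(c::addCount);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();   // wait for all 100 increments to finish
        }
        return c.count;
    }

    public static void main(String[] args) throws InterruptedException {
        // Often prints 100, but any value from 1 to 100 is possible.
        System.out.println("final count = " + run());
    }
}
```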

Using synchronized fixes this immediately; see the code:
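Again the original code is not reproduced here; the fix, sketched with the same assumed names, simply decorates the increment method with synchronized:

```java
public class SyncCounter {
    private int count = 0;

    public synchronized void addCount() {
        count++;   // only one thread at a time can be inside this method
    }

    public static int run() throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread[] threads = new Thread[100];
        for (int i = 0; i < 100; i++) {
            threads[i] = new Thread(c::addCount);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return c.count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("final count = " + run());   // always 100
    }
}
```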

With this small modification, no matter how many times you run the program or how much you increase the concurrency, the final value of count is always the correct 100.

Why does this work?

In Java, every object has a "built-in lock" (also called an intrinsic lock or monitor). Before a thread executes the code inside a synchronized block, it tries to acquire the lock of the specified object; if it succeeds, it enters and executes the code; otherwise it is blocked on that object.

Besides code blocks, synchronized can also be applied directly to methods, for example:

public synchronized void addCount(){......}
public static synchronized void addCount(){......}

These are two different usages. The former is an instance method decorated with synchronized, so synchronized uses the built-in lock of the instance on which the method is called. That is, before addCount runs, the thread attempts to acquire the lock of the calling instance object.

The latter addCount is a static method, so synchronized uses the lock of the Class object to which addCount belongs.
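The two forms therefore lock different objects, and the two locks are independent. The sketch below (with made-up names) demonstrates this: one thread holds the instance lock while the main thread still enters the static synchronized method without blocking, because that method locks LockTargets.class instead:

```java
import java.util.concurrent.CountDownLatch;

public class LockTargets {
    private static final CountDownLatch lockHeld = new CountDownLatch(1);
    private static final CountDownLatch release  = new CountDownLatch(1);

    // Acquires the built-in lock of `this` (the instance).
    public synchronized void instanceMethod() throws InterruptedException {
        lockHeld.countDown();   // signal that the instance lock is now held
        release.await();        // keep holding it until told to let go
    }

    // Acquires the built-in lock of LockTargets.class, a different lock entirely.
    public static synchronized String staticMethod() {
        return "entered";
    }

    public static boolean demo() throws InterruptedException {
        LockTargets o = new LockTargets();
        Thread holder = new Thread(() -> {
            try { o.instanceMethod(); } catch (InterruptedException ignored) { }
        });
        holder.start();
        lockHeld.await();                  // wait until the instance lock is held
        String result = staticMethod();    // does not block: different lock
        release.countDown();
        holder.join();
        return "entered".equals(result);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("static method entered while instance lock held: " + demo());
    }
}
```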

Using synchronized is very simple: we never have to worry about when the lock is acquired or released, because the JVM handles it for us. Below we take a brief look at how the JVM implements this implicit locking mechanism.

Basic Implementation Principles

Let's look at a simple piece of code first:

public class TestAxiom {
    private int count;

    @Test
    public void test() throws InterruptedException {
        synchronized (this) {
            count++;
        }
    }
}

This very simple code uses a synchronized block to protect the count++ operation. Now let's decompile it:
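The decompiled output is not reproduced on this page; below is a representative, abridged `javap -c` listing for the synchronized block above (exact constant-pool indices and bytecode offsets vary by compiler):

```
public void test() throws java.lang.InterruptedException;
  Code:
     0: aload_0
     1: dup
     2: astore_1
     3: monitorenter            // acquire the built-in lock of `this`
     4: aload_0
     5: dup
     6: getfield      #2        // Field count:I
     9: iconst_1
    10: iadd
    11: putfield      #2        // Field count:I
    14: aload_1
    15: monitorexit             // release the lock on the normal path
    16: goto          24
    19: astore_2
    20: aload_1
    21: monitorexit             // release the lock on the exception path
    22: aload_2
    23: athrow
    24: return
```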

As you can see, the compiler emits a monitorenter instruction before the count++ bytecode and a monitorexit instruction after it (a second monitorexit is emitted on the exception path, so the lock is released even if the block throws). Strictly speaking, these are the lock-acquire and lock-release instructions; we will look at the details later.

A synchronized method, by contrast, has neither of these two instructions after decompilation; instead, the compiler sets the ACC_SYNCHRONIZED flag in the access flags of the method's entry in the method table.
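For comparison, running `javap -v` on a synchronized method shows something like this (abridged; ACC_PUBLIC is 0x0001 and ACC_SYNCHRONIZED is 0x0020):

```
public synchronized void addCount();
  descriptor: ()V
  flags: (0x0021) ACC_PUBLIC, ACC_SYNCHRONIZED
  Code:
    ...                        // no monitorenter/monitorexit instructions
```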

When a thread invokes the method, the JVM checks this flag. If it is set, the method is a synchronized method, so the thread implicitly acquires the built-in lock of the current instance object (or of the Class object, for a static method) before executing the method body, and releases it when the method completes.

The two forms are essentially the same; the synchronized method is just an implicit implementation. Let's take a look at the specifics of this built-in lock.

An object in Java consists mainly of the following three types of data:

    • Object header: contains the Mark Word, which stores the object's hash code and lock information, along with a pointer to the object's class metadata.
    • Instance data: the data held by the current object, including fields inherited from parent classes.
    • Padding: required by the JVM because the starting address of each object must be a multiple of 8; if the object's size is not a multiple of 8, this section pads it out with filler bytes.

Our "built-in lock" lives inside the object header: the Mark Word contains the lock flag bits and, depending on the lock state, a pointer to a lock record or monitor.

Don't worry for now about what lightweight locks, heavyweight locks, biased locks, and spin locks are; these belong to the virtual machine's lock optimization mechanism, which improves performance through lock inflation. We will cover those details later; for the moment, treat them all simply as "the lock".

Each lock state has a flag bit to distinguish the lock type, plus a pointer to a lock record; this pointer associates the lock with another structure, the monitor record.

The Owner field stores the unique identifier of the thread that holds the current lock: when a thread acquires the lock, it writes its own thread ID into this field. If a thread finds that the Owner field is neither NULL nor its own thread ID, it is blocked on the monitor's blocking queue until the owning thread steps out of the synchronized block and initiates a wake-up.
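As a rough sketch, the monitor record described above can be pictured like this (the field names follow this article's description; HotSpot's actual ObjectMonitor structure differs in detail):

```
Monitor record {
    Owner      // thread ID of the current lock holder, or NULL if unowned
    Nest       // reentrancy count
    EntryList  // blocking queue of threads waiting for the lock
}
```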

Summing up: for a synchronized code block or method, the compiler inserts the two extra instructions (or sets the flag). monitorenter checks the lock information in the object header, which corresponds to a monitor structure; if the monitor's Owner field is already occupied, the current thread is blocked on the monitor's blocking queue until the owning thread releases the lock and triggers a new round of lock contention.

Several characteristics of synchronized

1. Reentrancy

An object often has multiple methods, some synchronized and some not. If a thread has acquired the lock of an object and entered one of its synchronized methods, and that method in turn calls another synchronized method of the same instance, does the thread need to compete for the lock again?

With some lock implementations it would, but synchronized is "reentrant": once the current thread holds the lock of an object, it can enter any other synchronized method or block guarded by that same lock without competing for it again.

The reason is simple: when the monitorenter logic finds the monitor and sees that the Owner field equals the current thread's ID, it just increments the Nest field by one, indicating that the current thread holds the object's lock multiple times. Each monitorexit then decrements the Nest value by one, and the lock is released only when it reaches zero.
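A quick sketch of reentrancy in action (names are my own): outer() already holds the object's lock when it calls inner(), and the call succeeds instead of deadlocking:

```java
public class Reentrant {
    private int count = 0;

    public synchronized void outer() {
        inner();   // we already hold this object's lock; Nest is bumped, not blocked
    }

    public synchronized void inner() {
        count++;
    }

    public int get() {
        return count;
    }

    public static void main(String[] args) {
        Reentrant r = new Reentrant();
        r.outer();   // if synchronized were not reentrant, this call would deadlock
        System.out.println("count = " + r.get());   // prints "count = 1"
    }
}
```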

2. Memory visibility

Consider another example from the previous article:

The thread threadTwo continuously polls the value of flag, and our main thread then modifies flag; but because of the memory visibility problem, the change is invisible to threadTwo, so the program loops forever.

synchronized is able to solve this kind of memory visibility problem as well; modify the code as follows:

The main thread first obtains the built-in lock of obj and then starts the threadTwo thread, which blocks trying to acquire the lock of obj. Blocking on the lock tells it that other threads are operating on the shared variable, so when it finally gets the lock it must reread the shared variables from main memory.
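The modified code is not reproduced on this page; a minimal sketch of the idea (class and variable names assumed) has both threads synchronize on the same obj, so the reader's loop terminates:

```java
public class VisibilityFixed {
    private static boolean flag = false;          // deliberately NOT volatile
    private static final Object obj = new Object();

    public static boolean run() throws InterruptedException {
        Thread threadTwo = new Thread(() -> {
            while (true) {
                synchronized (obj) {              // acquiring the lock rereads shared state
                    if (flag) {
                        break;
                    }
                }
            }
        });
        threadTwo.start();
        Thread.sleep(100);                        // let the reader start spinning
        synchronized (obj) {
            flag = true;                          // releasing the lock publishes the write
        }
        threadTwo.join(5000);                     // should return almost immediately
        return !threadTwo.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("threadTwo terminated: " + run());
    }
}
```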

When the main thread releases the lock, it flushes the values of the shared variables in its private working memory back to main memory, which is what actually gives us memory visibility between threads.

Of course, note that a synchronized block flushes the shared variables it modified when the lock is released, but for another thread to see the change, that thread must also reread them from main memory. A thread does not reread shared data just because synchronized appears somewhere; it rereads the shared variables from main memory when it acquires the lock, so visibility is only guaranteed when both threads synchronize on the same lock.

You can verify this by letting the threadTwo thread synchronize on a different object instead of obj: the result is still an infinite loop, because threadTwo only reads the value of flag from main memory into its cache once, at startup, and never sees the update.

Honestly, though, using synchronized just to solve memory visibility is too expensive: it requires acquiring and releasing a lock, and may even block and wake up threads. Instead, we generally apply the volatile keyword directly to the variable, so that reads and writes of that variable go straight to main memory and bypass the thread's private working memory.
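A sketch of the volatile alternative (names assumed): the write to flag becomes visible to the spinning thread right away, with no locking at all:

```java
public class VolatileFlag {
    private static volatile boolean flag = false;   // reads/writes go straight to main memory

    public static boolean run() throws InterruptedException {
        Thread threadTwo = new Thread(() -> {
            while (!flag) {
                // spin until the main thread's write becomes visible
            }
        });
        threadTwo.start();
        Thread.sleep(100);
        flag = true;                 // volatile write: visible to threadTwo immediately
        threadTwo.join(5000);
        return !threadTwo.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("threadTwo terminated: " + run());
    }
}
```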

That's all on the synchronized keyword for now. We will come back to it later and cover the optimizations made to synchronized in recent JDK versions, including spin locks, biased locks, and the lock-inflation mechanism leading up to heavyweight locks; it is these optimizations that make today's synchronized perform no worse than Lock.

All the code, pictures, and files in the article are stored on my GitHub:

(Github.com/singleyam/overview_java)

Welcome to follow my WeChat public account: Onejavacoder. All articles are synchronized to it.
