Synchronized keyword
synchronized is commonly called "the lock"; it is mainly used to lock methods and code blocks. When a method or code block is marked synchronized, at most one thread executes that code at any moment. When multiple threads access a locked method/code block of the same object, only one thread executes it at a time, and the remaining threads must wait for the current thread to finish before they can run that code. However, the other threads can still access the unlocked code of the object.
synchronized comes in two main forms: the synchronized method and the synchronized block.
Synchronized method
A synchronized method is declared by adding the synchronized keyword to the method declaration, for example:
public synchronized void getResult();
The synchronized method controls access to the instance's member variables. How does it do that? Marking a method with the synchronized keyword means the method is locked: any thread calling it must first check whether another thread holds the method's lock "exclusively". Each class instance corresponds to one lock, and a synchronized method must acquire the lock of the instance it is called on before it can execute; otherwise the calling thread blocks. Once the method is executing it holds the lock exclusively until the method returns and the lock is released, at which point a blocked thread can acquire it.
The synchronized method has a drawback: declaring a large method synchronized can hurt efficiency significantly. If multiple threads call a synchronized method, only one thread executes it at a time while the others wait; if the method were not synchronized, all threads could execute it concurrently, reducing the total execution time. So if we know a method will not be executed by multiple threads, or that no shared resource is involved, the synchronized keyword is unnecessary. And when synchronization is needed, we can often replace the synchronized method with a synchronized block, as sketched below.
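For instance, if only a small part of a long method touches shared state, a synchronized block can protect just that part while the rest runs concurrently. A minimal sketch (the class, field and method names are illustrative, not from the original text):

public class Counter {
    private int count;

    public void handleRequest() {
        // long-running work that touches no shared state may run concurrently
        doLocalWork();

        // only the update of the shared counter is locked
        synchronized (this) {
            count++;
        }
    }

    private void doLocalWork() {
        // placeholder for per-thread work
    }
}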
Synchronized block
A synchronized block works just like a synchronized method, except that it keeps the critical section as short as possible; in other words, it protects only the shared data that needs protection and leaves the rest of the code outside the lock. The syntax is as follows:
synchronized (object) {
    // access-controlled code
}
To use the synchronized keyword in this way we have to pass an object reference as the argument, and most often that argument is this:
synchronized (this) {
    // access-controlled code
}
synchronized (this) can be understood as follows:
1. When two concurrent threads access the synchronized (this) block of the same object, only one thread can execute it at a time; the other thread must wait until the current thread finishes the block before it can enter.
2. However, while one thread is inside a synchronized (this) block of the object, another thread can still access the non-synchronized code of that object.
3. Crucially, while one thread is inside a synchronized (this) block of the object, other threads are blocked from entering every other synchronized (this) block of that object.
4. The third point extends to all synchronized code: when a thread enters a synchronized (this) block of an object, it acquires that object's lock, so other threads are temporarily blocked from every synchronized section of the same object, as the sketch below illustrates.
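A small sketch of these rules (class and method names are illustrative): methodA and methodB both synchronize on this and therefore exclude each other, while methodC is not synchronized and remains accessible at any time.

public class SharedObject {

    public void methodA() {
        synchronized (this) {
            // while one thread is here, no other thread can enter the
            // synchronized blocks in methodA or methodB on this instance
        }
    }

    public void methodB() {
        synchronized (this) {
            // blocked while another thread holds the lock in methodA
        }
    }

    public void methodC() {
        // not synchronized: other threads can call this at any time
    }
}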
Lock
there is a "arrival" principle in Java multithreading, which means who gets the key first and who uses it first. We know that in order to avoid the problem of resource competition, Java uses the synchronization mechanism to avoid, and the synchronization mechanism is controlled by the lock concept. So how does a lock manifest in a Java program? Here we need to figure out two concepts:
What is a lock? In daily life it is the seal on a door, box or drawer that keeps others from peeking or stealing; it protects what is inside. In Java, likewise, locks protect objects: if a thread holds a resource exclusively, other threads have to wait until it is done with it.
In the Java runtime environment, the JVM needs to coordinate two kinds of data shared between threads:
1. Instance variables stored in the heap
2. Class variables stored in the method area
In the Java virtual machine, each object and each class is logically associated with a monitor. For an object, the associated monitor protects the object's instance variables; for a class, it protects the class's class variables. If an object has no instance variables, or a class has no class variables, the associated monitor has nothing to protect.
To give monitors their exclusive monitoring capability, the Java virtual machine associates a lock with every object and class. A lock represents a privilege that only one thread may hold at any time; a thread does not need a lock merely to read or write instance or class variables. Once a thread acquires a lock, no other thread can acquire the same lock until it is released. A thread may lock the same object multiple times: for each object the Java virtual machine maintains a lock counter that is incremented each time the holding thread acquires the lock and decremented each time it releases it, and when the counter reaches 0 the lock is completely released.
Java programmers do not need to acquire these locks themselves; object locks are used internally by the Java virtual machine. In a Java program you only mark a monitored region, using a synchronized block or a synchronized method, and the Java virtual machine automatically locks the object or class each time the monitored region is entered.
A simple lock.
When using synchronized, we use locks like this:
public class ThreadTest {
    public void test() {
        synchronized (this) {
            // do something
        }
    }
}
synchronized ensures that only one thread executes the do-something section at a time. The following uses Lock instead of synchronized:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ThreadTest {
    private final Lock lock = new ReentrantLock();

    public void test() {
        lock.lock();
        // do something
        lock.unlock();
    }
}
The lock() method locks the Lock instance, so any other thread that calls lock() on the same object blocks until the object's unlock() method is invoked.
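In real code, unlock() should be called from a finally block so the lock is released even if the guarded code throws an exception; otherwise other threads could block forever. A minimal sketch of that idiom (the class name is illustrative):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SafeLockExample {
    private final Lock lock = new ReentrantLock();

    public void test() {
        lock.lock();
        try {
            // do something with shared state
        } finally {
            lock.unlock(); // always released, even on exception
        }
    }
}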
What exactly is locked?
Before answering this we have to be clear: whether the synchronized keyword is applied to a method or to a block, the lock it acquires is always an object. In Java every object can serve as a lock, which shows up in three ways:
For synchronized instance methods, the lock is the current instance object.
For a synchronized block, the lock is the object specified in the synchronized parentheses.
For static synchronized methods, the lock is the Class object of the current class.
First, let's look at the following example:
public class ThreadTest_01 implements Runnable {

    @Override
    public synchronized void run() {
        for (int i = 0; i < 3; i++) {
            System.out.println(Thread.currentThread().getName() + " run ...");
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            new Thread(new ThreadTest_01(), "Thread_" + i).start();
        }
    }
}
Partial Run Results:
Thread_2 run ...
Thread_2 run ...
Thread_4 run ...
Thread_4 run ...
Thread_3 run ...
Thread_3 run ...
Thread_3 run ...
Thread_2 run ...
Thread_4 run ...
This result differs from what we expected (the threads run interleaved). Logically, since the run method carries the synchronized keyword, there should be a synchronization effect and run should be executed by one thread after another. As mentioned above, adding the synchronized keyword to an instance method locks that method on the object itself, i.e. the object is used as the lock. But in this example we create five ThreadTest_01 objects, and each thread holds the lock of its own ThreadTest_01 object, so no synchronization effect can possibly arise. Conclusion: if these threads are to be synchronized, the object lock they hold must be shared and unique!
So which object does synchronized lock here? It locks the object on which the synchronized method is called. In other words, only when the same ThreadTest_01 object runs the synchronized method in different threads do the threads exclude each other and produce a synchronization effect. So if we change new Thread(new ThreadTest_01(), "Thread_" + i).start(); above to new Thread(threadTest, "Thread_" + i).start();, where threadTest is a single shared ThreadTest_01 instance, it works.
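A minimal sketch of that fix: one ThreadTest_01 instance is created in main and shared by all threads, so they all compete for the same object lock and run() executes serially.

public class ThreadTest_01 implements Runnable {

    @Override
    public synchronized void run() {
        for (int i = 0; i < 3; i++) {
            System.out.println(Thread.currentThread().getName() + " run ...");
        }
    }

    public static void main(String[] args) {
        ThreadTest_01 threadTest = new ThreadTest_01(); // single shared instance
        for (int i = 0; i < 5; i++) {
            new Thread(threadTest, "Thread_" + i).start();
        }
    }
}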
For synchronized instance methods, the lock is the current instance object.
The example above uses a synchronized method; now let's look at a synchronized block:
public class ThreadTest_02 extends Thread {
    private String lock;
    private String name;

    public ThreadTest_02(String name, String lock) {
        this.name = name;
        this.lock = lock;
    }

    @Override
    public void run() {
        synchronized (lock) {
            for (int i = 0; i < 3; i++) {
                System.out.println(name + " Run ...");
            }
        }
    }

    public static void main(String[] args) {
        String lock = new String("test");
        for (int i = 0; i < 5; i++) {
            new ThreadTest_02("Threadtest_" + i, lock).start();
        }
    }
}
Run Result:
Threadtest_0 Run ...
Threadtest_0 Run ...
Threadtest_0 Run ...
Threadtest_1 Run ...
Threadtest_1 Run ...
Threadtest_1 Run ...
Threadtest_4 Run ...
Threadtest_4 Run ...
Threadtest_4 Run ...
Threadtest_3 Run ...
Threadtest_3 Run ...
Threadtest_3 Run ...
Threadtest_2 Run ...
Threadtest_2 Run ...
Threadtest_2 Run ...
In the main method we create one String object, lock, and pass it to the private lock field of every ThreadTest_02 thread object. The lock field of all these threads therefore refers to the same String object created in main, so the object lock is unique and shared, and the threads are synchronized!
Here the object locked by synchronized is that String object.
For a synchronized block, the lock is the object specified in the synchronized parentheses.
public class ThreadTest_03 extends Thread {

    public synchronized static void test() {
        for (int i = 0; i < 3; i++) {
            System.out.println(Thread.currentThread().getName() + " Run ...");
        }
    }

    @Override
    public void run() {
        test();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            new ThreadTest_03().start();
        }
    }
}
Run Result:
Thread-0 Run ...
Thread-0 Run ...
Thread-0 Run ...
Thread-4 Run ...
Thread-4 Run ...
Thread-4 Run ...
Thread-1 Run ...
Thread-1 Run ...
Thread-1 Run ...
Thread-2 Run ...
Thread-2 Run ...
Thread-2 Run ...
Thread-3 Run ...
Thread-3 Run ...
Thread-3 Run ...
In this example the run method calls a synchronized method, and a static one at that. So what does synchronized lock here? We know that static members are detached from any instance and belong to the class level, so the object lock is the Class instance of the class in which the static method is declared. In the JVM every loaded class has exactly one Class object, which in this example is the single ThreadTest_03.class object. No matter how many instances of the class we create, its Class instance is still one, so the object lock is unique and shared. The threads are synchronized!
For static synchronized methods, the lock is the Class object of the current class.
If a class defines a synchronized static method A and a synchronized instance method B, then for the same object obj of this class, accessing A and B from multiple threads does not produce synchronization, because their locks are different: A's lock is the Class object of obj's class, while B's lock is obj itself, as the sketch below shows.
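A minimal sketch of that situation (class and method names are illustrative): a() locks the Class object and b() locks the instance, so one thread can be inside a() while another thread is inside b() on the same object.

public class MixedLocks {

    // lock: MixedLocks.class
    public static synchronized void a() {
        // ...
    }

    // lock: the instance (this)
    public synchronized void b() {
        // ...
    }

    public static void main(String[] args) {
        MixedLocks obj = new MixedLocks();
        new Thread(MixedLocks::a).start(); // holds the class lock
        new Thread(obj::b).start();        // holds the instance lock; not blocked by a()
    }
}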
Lock upgrading
Java locks have four states in total: no lock, biased lock, lightweight lock and heavyweight lock, and a lock is gradually upgraded through these states as contention increases. Locks can be upgraded but not downgraded, meaning that once a biased lock has been upgraded to a lightweight lock it cannot fall back to a biased lock. This upgrade-only strategy is intended to make acquiring and releasing locks more efficient. Most of what follows is based on the blog post "Talking About Concurrency (2): Synchronized in Java SE 1.6".
Lock spinning
We know that when a thread reaches a synchronized method/block and finds it already occupied by another thread, it waits and enters the blocked state, which performs poorly.
When the contended lock is likely to be released very soon, the thread does not have to rush into the blocked state; instead it can wait briefly to see whether the lock is released right away. This is lock spinning, and it can optimize thread performance to some extent, as the sketch below illustrates.
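The idea can be sketched with a tiny hand-rolled spin lock built on AtomicBoolean. This is only an illustration of the concept: the JVM performs its spinning internally for synchronized and does not expose it like this.

import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative spin lock: a waiting thread keeps retrying (spinning)
// instead of immediately blocking.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // busy-wait until the CAS from false to true succeeds
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // spin-wait hint (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}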
Biased lock
The biased lock mainly addresses lock performance in the absence of contention. In most cases a lock is not only free of multi-thread contention but is repeatedly acquired by the same thread; the biased lock was introduced so that such a thread can acquire the lock at a lower cost. A thread that holds a lock may lock the object many times, and each such operation normally involves a CAS (the CPU's compare-and-swap instruction), which costs performance. To reduce this overhead, the lock is biased toward the first thread that acquires it: if no other thread acquires the lock during subsequent execution, the thread holding the biased lock never needs to synchronize again.
Only when another thread tries to compete for the lock does the thread holding the biased lock release it.
Lock coarsening
If a fine-grained lock is acquired and released many times in succession, the accumulated cost of all those lock/unlock operations can exceed the cost of a single coarser lock, so the JVM may merge the adjacent lock regions into one larger one, as the sketch below shows.
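Lock coarsening is easiest to see with StringBuffer, whose append() method is synchronized. Naively, each call below is a separate lock/unlock of the same object; the JIT may coarsen them into a single lock/unlock pair around the whole sequence (this is something the JVM may do, not something written by hand):

public class CoarseningExample {
    public String build() {
        StringBuffer sb = new StringBuffer();
        // each append() is a synchronized method on sb, so this is nominally
        // three lock/unlock pairs; the JVM may coarsen them into one
        sb.append("a");
        sb.append("b");
        sb.append("c");
        return sb.toString();
    }
}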
Lightweight locks
Lightweight locks improve synchronization performance based on the empirical observation that "for the vast majority of locks, there is no contention during the entire synchronization period". A lightweight lock creates a space called a lock record in the current thread's stack frame to store a copy of the lock object's current header state. If there is no contention, the lightweight lock uses a CAS operation and avoids the overhead of a mutex; if there is contention, an extra CAS operation is added on top of the mutex cost, so under contention lightweight locks are slower than traditional heavyweight locks.
The fairness of the lock
The opposite of fairness is starvation. So what is "starvation"? If a thread never gets CPU time because other threads keep hogging the CPU, we say the thread has been "starved to death". The remedy for starvation is called "fairness": all threads get a fair chance to run on the CPU.
There are several main causes of thread starvation:
High-priority threads gobble up all the CPU time of low-priority threads. We can set a priority for each thread individually, from 1 to 10; the higher the priority, the more CPU time the thread gets. For most applications it is best not to change the priority values.
Threads are permanently blocked waiting to enter a synchronized block. Java's synchronized regions are a major contributor to thread starvation, because a synchronized block gives no guarantee about the order in which waiting threads enter it. In theory a thread may be blocked forever while trying to enter the critical section, because other threads keep being granted access ahead of it, leaving it "starved" without a chance to run.
A thread waits on an object that it may end up waiting on forever. If multiple threads are parked in wait(), a call to notify() gives no guarantee about which thread is woken, so any of them may remain waiting. There is therefore a risk that a waiting thread never wakes up, because other waiting threads keep being chosen instead.
To solve the thread "starvation" problem we can use a Lock to achieve fairness, as in the sketch below.
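ReentrantLock can be constructed in fair mode, in which the longest-waiting thread acquires the lock next. A minimal sketch (the class and method names are illustrative):

import java.util.concurrent.locks.ReentrantLock;

public class FairLockExample {
    // true = fair mode: waiting threads acquire the lock roughly in FIFO order,
    // so no single thread is starved indefinitely
    private final ReentrantLock lock = new ReentrantLock(true);

    public void doWork() {
        lock.lock();
        try {
            // critical section
        } finally {
            lock.unlock();
        }
    }
}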
The reentrant nature of Locks
We know that when a thread requests an object lock held by another thread, it blocks. But what happens when a thread requests an object lock that it already holds itself? The request succeeds, and what guarantees this is the "reentrancy" of the lock.
"Reentrant" means that a thread can acquire its own internal lock again without blocking. For example:
public class Father {
    public synchronized void method() {
        // do something
    }
}

class Child extends Father {
    @Override
    public synchronized void method() {
        // do something
        super.method();
    }
}
If locks were not reentrant, the code above could deadlock. When Child's method() is invoked, the thread first acquires the built-in lock on the Child instance; the call to super.method() then needs that same instance lock again, because the parent's method is also synchronized on this. Without reentrancy the second acquisition could never succeed and the thread would deadlock on itself.
Java implements this by associating each lock with an acquisition count and the thread that holds it. When the count is 0 the lock is unheld and any thread may acquire it. When a thread acquires the lock, the JVM records that thread as the holder and sets the count to 1; any other thread that requests the lock must then wait. When the holding thread requests the same lock again, the count is incremented; each time the holder exits a synchronized block the count is decremented, and when it reaches 0 the lock is released and other threads can compete for it again, as the sketch below shows.
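ReentrantLock makes this counter visible through getHoldCount(). A small sketch (the class and method names are illustrative) showing the count go up and down as the same thread acquires the lock twice:

import java.util.concurrent.locks.ReentrantLock;

public class HoldCountExample {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();
        try {
            System.out.println("hold count: " + lock.getHoldCount()); // 1
            inner();
        } finally {
            lock.unlock(); // count drops to 0, lock fully released
        }
    }

    private void inner() {
        lock.lock(); // same thread: does not block, count rises to 2
        try {
            System.out.println("hold count: " + lock.getHoldCount()); // 2
        } finally {
            lock.unlock(); // count drops back to 1
        }
    }

    public static void main(String[] args) {
        new HoldCountExample().outer();
    }
}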
Lock and its implementation classes
java.util.concurrent.locks provides a very flexible locking mechanism: a framework of interfaces and classes for locking and for waiting on conditions that, unlike built-in synchronization and monitors, allows locks and conditions to be used much more flexibly. Its main interfaces and classes are as follows:
ReentrantLock: a reentrant mutual-exclusion lock and the main implementation of the Lock interface.
ReentrantReadWriteLock: an implementation of ReadWriteLock whose read lock and write lock are both reentrant.
ReadWriteLock: maintains a pair of related locks, one for read-only operations and one for write operations.
Semaphore: a counting semaphore.
Condition: a condition associated with a lock, allowing a thread that holds the lock to wait until some condition is satisfied.
CyclicBarrier: a synchronization helper class that allows a set of threads to wait for each other until a common barrier point is reached.
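As a small usage sketch of one of these classes (the class and field names are illustrative), ReentrantReadWriteLock lets many readers proceed in parallel while writers get exclusive access:

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedValue {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    public int get() {
        rwLock.readLock().lock();      // many threads may hold the read lock at once
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void set(int newValue) {
        rwLock.writeLock().lock();     // exclusive: blocks readers and other writers
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}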