Basic concepts and levels of Java concurrency

Concurrency concepts: concurrency and parallelism

Concurrency means that multiple tasks make progress by taking turns, and at any instant only one of them may actually be running. Parallelism means true simultaneous execution. Strictly speaking, parallel tasks really do run at the same time, whereas concurrent tasks merely alternate: the system runs task A for a while, then task B, switching back and forth between the two. To an outside observer, however, even tasks that execute serially but concurrently can give the illusion of running in parallel. True parallelism is only possible on systems with multiple CPUs (such as multicore processors).

Critical section

A critical section guards a common resource or piece of shared data that can be used by multiple threads, but only by one thread at a time. Once the critical-section resource is occupied, any other thread that needs it must wait until the resource is released.
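
As a minimal sketch (the Counter class and its method names are hypothetical), a synchronized method in Java turns its body into a critical section guarded by the object's monitor:

```java
// A shared counter: the method bodies are critical sections that
// only one thread may execute at a time.
public class Counter {
    private int count = 0; // shared data: the critical-section resource

    // synchronized acquires this object's monitor before entering;
    // other threads must wait until the monitor is released.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}
```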

Blocking and non-blocking

Blocking and non-blocking describe how threads affect one another. For example, if one thread occupies a critical-section resource, all other threads that need that resource must wait at the critical section; the waiting suspends those threads, which is blocking. Non-blocking means the opposite: no thread can interfere with the execution of the others, and every thread keeps trying to make forward progress.

Deadlock, starvation, and livelock

Deadlock, starvation, and livelock are all liveness problems in multithreaded programs. If any of them occurs, the affected threads may find it hard to make further progress.
A deadlock arises when several threads each hold a resource that another thread needs, and none of them is willing to release what it holds; that state then lasts forever. Starvation, unlike deadlock, may still resolve over time (for example, once the high-priority threads finish their work and stop hogging the CPU, a starved low-priority thread gets its chance). Livelock is a more curious situation: imagine taking the elevator down. The elevator arrives, the door opens, and you are about to step out, but someone outside is in your way and wants to step in. You each politely step aside in the same direction, collide again, step aside again, and so on, with neither of you ever getting through.
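
A minimal sketch of a deadlock (class and lock names are hypothetical): two threads acquire the same two locks in opposite order, each holding one lock while waiting forever for the other.

```java
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                sleep(100); // give t2 time to grab lockB
                synchronized (lockB) { // blocks forever: t2 holds lockB
                    System.out.println("t1 got both locks");
                }
            }
        }, "t1").start();

        new Thread(() -> {
            synchronized (lockB) {
                sleep(100); // give t1 time to grab lockA
                synchronized (lockA) { // blocks forever: t1 holds lockA
                    System.out.println("t2 got both locks");
                }
            }
        }, "t2").start();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}
```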

Levels of concurrency

Because critical sections exist, concurrency between threads must be controlled. Based on the strategy used to control it, we can classify concurrency into levels: blocking, starvation-free, obstruction-free, lock-free, and wait-free.

Blocking

A thread is blocked when it cannot continue until another thread releases a resource. Blocked threads appear whenever we use the synchronized keyword or a reentrant lock. Both synchronized and the reentrant lock try to acquire the lock on the critical section before executing the code that follows; if the acquisition fails, the thread is suspended until it obtains the resource it needs.
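
A sketch of blocking synchronization with the JDK's java.util.concurrent.locks.ReentrantLock (the BlockingDemo class is hypothetical): lock() suspends the calling thread until the lock becomes available.

```java
import java.util.concurrent.locks.ReentrantLock;

public class BlockingDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int value = 0;

    public void update() {
        lock.lock(); // blocks here if another thread holds the lock
        try {
            value++; // critical section
        } finally {
            lock.unlock(); // always release in finally
        }
    }
}
```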

Starvation-free

Whether execution is starvation-free depends on how thread priorities are handled. If the system allows high-priority threads to jump ahead in the queue, low-priority threads may starve. If, on the other hand, the lock is fair, i.e., first come, first served, then no thread starves no matter how high the priority of a newcomer: every thread that wants the resource must queue, so every thread eventually gets its chance to run.
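
A sketch of a starvation-free design using ReentrantLock's real fairness parameter (the FairLockDemo class is hypothetical): a fair lock hands the lock out in FIFO order, so a low-priority thread cannot be starved by a stream of higher-priority ones.

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    // 'true' requests a fair (first-come, first-served) lock
    private final ReentrantLock fairLock = new ReentrantLock(true);

    public void doWork() {
        fairLock.lock();
        try {
            // critical section: every waiting thread gets its turn
        } finally {
            fairLock.unlock();
        }
    }
}
```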

Obstruction-free

Obstruction freedom is the weakest of the non-blocking guarantees. If two threads execute in an obstruction-free way, neither is suspended because of contention in the critical section; in other words, both may enter the critical section at the same time. What if they modify the shared data together and corrupt it? Under obstruction freedom, as soon as a thread detects this, it immediately rolls back its own changes to keep the data safe. If blocking is a pessimistic control strategy, non-blocking scheduling is, by comparison, an optimistic one.
As this strategy suggests, when contention in the critical section is severe, all threads may keep rolling back their operations and no thread manages to leave the critical section. This situation hurts the normal execution of the system.
A feasible obstruction-free implementation can rely on a "consistency token". A thread reads and saves the token before operating on the data, reads it again after the operation completes, and checks whether the token has changed. If the two values match, there was no conflicting access to the resource; if not, the resource may have conflicted with another writing thread during the operation, and the operation must be retried. Any thread that modifies the resource must update the consistency token before changing the data, to signal that the data is no longer safe to read without validation.
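
One way to realize the consistency-token idea in Java is the optimistic read of java.util.concurrent.locks.StampedLock, where the stamp plays the role of the token. The Point class below is a sketch of that pattern; note that on validation failure it falls back to a pessimistic read lock rather than looping.

```java
import java.util.concurrent.locks.StampedLock;

public class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = sl.writeLock(); // writers invalidate the token
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead(); // read and save the token
        double curX = x, curY = y;
        if (!sl.validate(stamp)) { // token changed: a write intervened
            stamp = sl.readLock(); // fall back to a pessimistic read
            try {
                curX = x;
                curY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(curX * curX + curY * curY);
    }
}
```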

Lock-free

Lock freedom implies obstruction freedom. Under lock freedom, all threads may attempt to access the critical section, but with one additional guarantee: some thread must be able to complete its operation in a finite number of steps and leave the critical section. A typical lock-free implementation is a loop in which a thread repeatedly tries to modify a shared variable: if the modification succeeds, the loop exits; otherwise the thread tries again. Whatever happens, lock-free parallelism always guarantees that one thread wins, while the threads that lose the race keep retrying until they succeed. A thread that is persistently unlucky and never succeeds experiences something like starvation: it makes no progress.
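
A sketch of such a lock-free loop built on compare-and-swap (CAS) with the JDK's AtomicInteger (the LockFreeCounter class is hypothetical): each thread retries until its CAS wins, so some thread always makes progress.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        int current;
        do {
            current = value.get(); // read the shared variable
            // retry if another thread changed it since we read it
        } while (!value.compareAndSet(current, current + 1));
        return current + 1; // our update won the race
    }
}
```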

Wait-free

Lock freedom only requires that some thread complete its operation in a finite number of steps; wait freedom extends this further by requiring that all threads complete their operations in a finite number of steps, so no starvation can occur. If an upper bound is placed on the number of steps, wait freedom can be subdivided further, for example into bounded wait freedom and wait freedom whose bound is independent of the number of threads; the variants differ only in how the number of retries is limited. A typical wait-free structure is RCU (read-copy-update). Its basic idea is that reads need no concurrency control at all, so no reading thread ever waits: readers are neither locked out nor able to cause conflicts. When writing, a thread takes a copy of the original data and modifies only the copy (which is why reads need no control); once the modification is complete, the data is written back at an appropriate time.
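
The JDK's CopyOnWriteArrayList follows the same copy-on-write idea, though it is not a full RCU implementation: reads traverse the current array with no locking or waiting, while each write copies the array, modifies the copy, and publishes it back. A sketch with hypothetical class and method names:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CopyOnWriteDemo {
    private final List<String> listeners = new CopyOnWriteArrayList<>();

    public void register(String name) {
        listeners.add(name); // internally: copy, modify the copy, write back
    }

    public void broadcast(String event) {
        // iteration never blocks and never sees a half-applied write
        for (String l : listeners) {
            System.out.println(l + " <- " + event);
        }
    }
}
```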
