Java High Concurrency, Part 1: Preface

Source: Internet
Author: User

1. Several important concepts about high concurrency

1.1 Synchronous and asynchronous

First of all, synchronous and asynchronous describe how a function or method is invoked.

A synchronous call does not return until the method has finished. An asynchronous call returns immediately, but that instant return does not mean the task is done: the work continues in the background on another thread.
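The difference can be sketched in Java; this is an illustrative example (class and method names are my own), using the standard `CompletableFuture.supplyAsync` to run the same work asynchronously:

```java
import java.util.concurrent.CompletableFuture;

public class SyncAsyncDemo {
    // A synchronous call: the caller blocks until the result is ready.
    static int syncTask() {
        return 21 + 21;
    }

    public static void main(String[] args) {
        int syncResult = syncTask();        // returns only when the work is done

        // An asynchronous call: supplyAsync returns a future immediately,
        // and the work continues on a background thread from the common pool.
        CompletableFuture<Integer> future =
                CompletableFuture.supplyAsync(SyncAsyncDemo::syncTask);

        // ... the caller is free to do other work here ...

        int asyncResult = future.join();    // block only when the result is needed
        System.out.println(syncResult + " " + asyncResult); // 42 42
    }
}
```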

1.2 Concurrency and parallelism

Concurrency and parallelism look similar from the outside. As shown in the diagram, parallelism means two tasks truly run at the same time, while concurrency means working on one task for a while, then switching to another. A single CPU therefore cannot achieve parallelism, only concurrency.

1.3 Critical Section

A critical section protects a public resource or shared data that can be used by multiple threads, but by only one thread at a time. Once the critical section is occupied, all other threads must wait until the resource is released.
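A minimal critical-section sketch in Java (names are my own), using `synchronized` so that only one thread at a time executes the update:

```java
public class CriticalSectionDemo {
    private int counter = 0;

    // The critical section: only one thread may execute this at a time;
    // other threads block on the object's monitor until it is released.
    synchronized void increment() {
        counter++;
    }

    int get() { return counter; }

    public static void main(String[] args) throws InterruptedException {
        CriticalSectionDemo demo = new CriticalSectionDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10_000; i++) demo.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10_000; i++) demo.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(demo.get()); // 20000: no updates are lost
    }
}
```

Without `synchronized`, the two threads could interleave inside `counter++` and lose updates.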

1.4 Blocking and Non-blocking

Blocking and non-blocking usually describe how multiple threads interact. For example, if one thread occupies a critical section, every other thread that needs that resource must wait at the critical section, and waiting causes those threads to be suspended. That situation is blocking. If the thread holding the resource never releases it, all the threads blocked on that critical section can make no progress.

Non-blocking, by contrast, allows multiple threads to enter the critical section at the same time.

Blocking approaches therefore tend not to perform very well. As a rough rule of thumb, suspending a thread at the operating-system level and performing the resulting context switch costs on the order of 80,000 CPU cycles.

1.5 Deadlock, Starvation, and Livelock

A deadlock occurs when two or more processes, in the course of execution, block one another while competing for resources or waiting for each other's messages; without outside intervention, none of them can make progress. The system is then said to be deadlocked, and the processes that wait on each other forever are called deadlocked processes. Like the cars in the picture below, every one wants to move forward, but none of them can.

Deadlock is a bad phenomenon, but it is a static problem: once deadlocked, the processes are stuck, their CPU usage drops to zero, and they may be paged out rather than occupy the CPU. That makes deadlocks comparatively easy to find and analyze.

The counterpart of deadlock is livelock.

In a livelock, thread 1 could use the resource, but it yields to the others first; thread 2 could also use the resource, but it likewise yields to the others. Both keep deferring, and neither ever uses the resource.

For example, you meet someone walking toward you on the street, and you both try to step aside to let the other pass. You move to the left and so does he, so neither can pass; then you move to the right and so does he, and the cycle repeats.

In code, a livelock can arise when a thread that holds some resources discovers that other threads want them too and, because it cannot acquire all the resources it needs, gives up the ones it holds in order to avoid deadlock. If another thread does the same with the complementary resources, say thread A holds resource A and thread B holds resource B, then after both give up, A acquires resource B and B acquires resource A, they give up again, and the exchange repeats forever: a livelock.
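One common way to break this symmetric give-up-and-retry pattern is randomized backoff. Below is an illustrative Java sketch (class and field names are my own) using `ReentrantLock.tryLock`: each thread tries to take both locks, releases everything on failure, and sleeps for a random interval so the two threads stop mirroring each other:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.locks.ReentrantLock;

public class LivelockAvoidanceDemo {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();
    static int sharedWork = 0;

    // Try to take the two locks in the given order; if either attempt
    // fails, release what we hold and back off for a random interval.
    static void workWithBoth(ReentrantLock first, ReentrantLock second)
            throws InterruptedException {
        while (true) {
            if (first.tryLock()) {
                try {
                    if (second.tryLock()) {
                        try {
                            sharedWork++;   // the work protected by both locks
                            return;
                        } finally {
                            second.unlock();
                        }
                    }
                } finally {
                    first.unlock();
                }
            }
            // Randomized backoff breaks the symmetric retry pattern.
            Thread.sleep(ThreadLocalRandom.current().nextInt(1, 10));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            try { workWithBoth(lockA, lockB); } catch (InterruptedException ignored) {}
        });
        Thread t2 = new Thread(() -> {
            try { workWithBoth(lockB, lockA); } catch (InterruptedException ignored) {}
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(sharedWork); // 2: both threads eventually made progress
    }
}
```

Without the random sleep, two threads acquiring the locks in opposite orders could keep acquiring one lock, failing on the other, releasing, and retrying in lockstep indefinitely.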

Livelocks are harder to find than deadlocks, because a livelock is a dynamic process.

Starvation occurs when one or more threads cannot obtain the resources they need, for any of a variety of reasons, and therefore cannot make progress.

1.6 Concurrency Level

Concurrency levels: blocking and non-blocking, where non-blocking is further divided into obstruction-free, lock-free, and wait-free.

1.6.1 Blocking

When one thread enters the critical section, all other threads must wait.

1.6.2 Obstruction-Free

    1. Obstruction freedom is the weakest non-blocking guarantee
    2. Any thread may enter the critical section freely
    3. When there is no contention, the operation completes within a bounded number of steps
    4. When there is contention, the data is rolled back

Compared with non-blocking scheduling, blocking scheduling is a pessimistic strategy: it assumes that concurrent modification is very likely to corrupt the data. Non-blocking scheduling is an optimistic strategy: it assumes that concurrent modification will not necessarily corrupt the data. But it is a "lenient entry, strict exit" strategy: when it detects that a data race in the critical section has produced a conflict, obstruction-free scheduling rolls the data back.

Under obstruction-free scheduling, every thread effectively works against a snapshot of the current system state, and keeps retrying until its snapshot is still valid at the moment it commits.

1.6.3 Lock-Free

Lock freedom builds on obstruction freedom, adding the guarantee that some thread always wins.

Obstruction freedom alone does not guarantee that any operation completes under contention: if every attempt conflicts, the threads just keep retrying, and if the threads in the critical section constantly interfere with one another, they can all spin there indefinitely, with a serious impact on system performance.

Lock freedom adds the condition that every round of contention has at least one winning thread, which fixes the obstruction-free problem: at a minimum, the system as a whole keeps making progress.

The following is a typical piece of lock-free Java code:

while (!atomicVar.compareAndSet(localVar, localVar + 1)) {
    localVar = atomicVar.get();
}
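Wrapped in a complete, runnable form (class and method names are my own), the same compare-and-set retry loop looks like this, using the JDK's `AtomicInteger`:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    final AtomicInteger value = new AtomicInteger(0);

    // The CAS loop from the text: read the current value, then retry
    // until our compareAndSet wins. A failed CAS means some other
    // thread succeeded, so the system as a whole always progresses.
    int increment() {
        int local;
        do {
            local = value.get();
        } while (!value.compareAndSet(local, local + 1));
        return local + 1;
    }

    public static void main(String[] args) throws InterruptedException {
        LockFreeCounter counter = new LockFreeCounter();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 5_000; i++) counter.increment();
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.value.get()); // 20000: no lost updates, no locks
    }
}
```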

1.6.4 Wait-Free

Wait freedom builds on lock freedom.

It requires that every thread complete its operation within a bounded number of steps.

It is starvation-free.

Wait freedom takes lock freedom as its starting point. Lock freedom only guarantees that some thread in the critical section makes progress, so if high-priority threads keep winning, lower-priority threads in the critical section may starve, never managing to get out. Wait freedom solves this by requiring that all threads finish within a bounded number of steps, which rules out starvation by construction.

Wait freedom is the highest level of concurrency; it allows the system to reach its optimal state.

Typical wait-free cases:

If there are only reader threads and no writer threads, then reading is necessarily wait-free.

If there are both reader threads and writer threads, then before each write the writer copies the data and modifies the copy rather than the original. Since modifying a private copy cannot conflict with anyone, the modification itself is wait-free; the only step that needs synchronization is the final one, in which the finished copy overwrites the original data.
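The JDK ships this copy-on-write pattern as `CopyOnWriteArrayList`; here is a small sketch (the helper method name is my own) showing that readers iterate an unchanging snapshot while writers modify fresh copies:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CopyOnWriteDemo {
    // Readers iterate an immutable snapshot of the backing array; each
    // write copies the array, modifies the copy, and atomically swaps it
    // in, so readers never block and never see a half-finished change.
    static List<String> upperCaseAppend(List<String> source) {
        List<String> list = new CopyOnWriteArrayList<>(source);
        for (String s : list) {            // iterates the snapshot taken here
            list.add(s.toUpperCase());     // each add writes to a fresh copy
        }
        return list;
    }

    public static void main(String[] args) {
        System.out.println(upperCaseAppend(List.of("a", "b"))); // [a, b, A, B]
    }
}
```

Note that the loop terminates: the iterator sees only the two elements present when iteration began, even though the list grows during the loop.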

Because wait freedom sets a higher bar and is harder to achieve, lock-free approaches are used more widely.

2. Two important laws on parallelism

Both of these laws concern the speedup ratio.

2.1 Amdahl's Law

Amdahl's law defines the formula for the speedup obtained by parallelizing a serial system, and its theoretical limit: with serial fraction F and n processors, speedup(n) = 1 / (F + (1 - F) / n), which approaches 1 / F as n grows.

Definition of speedup: speedup = system time before optimization / system time after optimization

For example, if a system takes 500 units of time before optimization and 400 after:

speedup = system time before optimization / system time after optimization = 500 / 400 = 1.25

This law shows that simply increasing the number of CPUs does not necessarily help. Only by raising the proportion of the system that can run in parallel, and increasing the number of processors in reasonable proportion to it, can the maximum speedup be obtained for the minimum investment.
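The diminishing returns described above can be checked numerically; this small sketch (names are my own) evaluates the Amdahl formula for a serial fraction of 0.2:

```java
public class AmdahlDemo {
    // Amdahl's law: with serial fraction f and n processors,
    // speedup(n) = 1 / (f + (1 - f) / n).
    static double speedup(double f, int n) {
        return 1.0 / (f + (1.0 - f) / n);
    }

    public static void main(String[] args) {
        System.out.println(speedup(0.2, 2));    // 2 CPUs: about 1.67x
        System.out.println(speedup(0.2, 1000)); // 1000 CPUs: still below 5x
        System.out.println(1.0 / 0.2);          // the theoretical limit: 5.0
    }
}
```

Going from 2 to 1000 processors only moves the speedup from about 1.67 to just under 5, because the 20% serial portion caps the benefit.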

2.2 Gustafson's Law

Gustafson's law describes the relationship among the number of processors, the serial fraction, and the speedup:

speedup = n - F(n - 1), where n is the number of processors and F the serial fraction (derivation omitted)

It shows that as long as there is enough parallel work to do, the speedup grows in proportion to the number of CPUs.
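Evaluating the Gustafson formula (illustrative names) makes the near-linear scaling concrete for a serial fraction of 0.1:

```java
public class GustafsonDemo {
    // Gustafson's law: speedup(n) = n - f * (n - 1),
    // where f is the serial fraction of the scaled workload.
    static double speedup(double f, int n) {
        return n - f * (n - 1);
    }

    public static void main(String[] args) {
        // With a small serial fraction, speedup grows almost linearly with n.
        System.out.println(speedup(0.1, 10));  // 9.1
        System.out.println(speedup(0.1, 100)); // 90.1
    }
}
```

Unlike Amdahl's law, which fixes the problem size, Gustafson's law assumes the workload grows with the processor count, which is why the speedup keeps scaling.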
