Books from those years, "The Art of Java Concurrent Programming": the challenges of concurrent programming and the underlying implementation principles of concurrency mechanisms


One. Challenges of Concurrent Programming
1. Context Switching
(1) Issues with context switching

The processor makes concurrent programming possible: by allocating CPU time slices to different threads and automatically scheduling and switching between them, it creates the illusion that programs execute in parallel.

Single-threaded: the code executes serially, and context switching does not impose a noticeable performance cost.

Multi-threaded: frequent scheduling between threads requires a context switch that saves the context of the currently executing thread and loads the context of the thread about to run. Context switching needs support from the underlying processor, the operating system, and the Java Virtual Machine, and it carries a real performance cost, so frequent context switches inevitably slow down program execution.

(2) How to view the cost of context switching

1) Use the profiling tool Lmbench3 to measure the duration of a context switch

2) Use vmstat to measure the number of context switches (the cs column in its output)

3) Use the jstack command to dump a Java process's thread stacks

(3) How to reduce context switching

1) Lock-free concurrent programming: when multiple threads compete for a lock, context switches occur; programming without locks avoids that source of switching

2) CAS algorithms: Java's atomic classes (java.util.concurrent.atomic) update shared state with CAS operations instead of locks (see the sketch after this list)

3) Use fewer threads: avoid creating threads that are not needed; the fewer threads there are, the fewer context switches occur

4) Single-threaded scheduling (coroutines): schedule and execute multiple tasks within a single thread
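As a rough illustration of the CAS point above, here is a minimal sketch (not from the book) of a lock-free counter built on java.util.concurrent.atomic.AtomicLong: compareAndSet retries in a loop instead of blocking on a lock.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: a lock-free counter that retries a CAS instead of taking a lock.
public class CasCounter {
    private final AtomicLong value = new AtomicLong(0);

    public long increment() {
        while (true) {
            long current = value.get();   // read the current value
            long next = current + 1;      // compute the new value
            // compareAndSet succeeds only if no other thread changed 'value'
            // in the meantime; otherwise loop and retry.
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public long get() {
        return value.get();
    }
}
```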

2. Deadlock
(1) How to avoid deadlocks
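The text gives no specifics here; one commonly used technique is to acquire locks with a timeout, so a thread backs off instead of waiting forever when locks are taken in conflicting orders. A minimal sketch, assuming two hypothetical ReentrantLock instances lockA and lockB:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: try to take both locks with a timeout and back off on failure,
// so two threads acquiring them in opposite orders cannot block forever.
public class TimedLocking {
    private final ReentrantLock lockA = new ReentrantLock();
    private final ReentrantLock lockB = new ReentrantLock();

    public boolean doWorkNeedingBothLocks() throws InterruptedException {
        if (lockA.tryLock(50, TimeUnit.MILLISECONDS)) {
            try {
                if (lockB.tryLock(50, TimeUnit.MILLISECONDS)) {
                    try {
                        // ... critical section that needs both locks ...
                        return true;
                    } finally {
                        lockB.unlock();
                    }
                }
            } finally {
                lockA.unlock();
            }
        }
        return false; // could not get both locks; the caller may retry later
    }
}
```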

3. Challenges of resource constraints
(1) Hardware resource limits and how to address them

Hardware resource limits include network bandwidth (upload/download speed), hard-disk read/write speed, and CPU processing speed.

Approach: use a cluster, so that the work is spread across multiple machines.

(2) Software resource limits and how to address them

Software resource limits include the number of database connections and the number of socket connections.

Approach: use resource pools to reuse software resources, so that many threads share a limited set of resources (for example, a database connection pool).
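As a rough sketch of reusing a limited software resource (the text does not spell out an implementation), a java.util.concurrent.Semaphore can cap how many threads use a scarce resource at once; the openConnection() helper below is hypothetical and stands in for borrowing a real pooled connection.

```java
import java.util.concurrent.Semaphore;

// Sketch: bound concurrent use of a scarce resource (e.g. DB connections)
// with a counting semaphore instead of letting every thread open its own.
public class BoundedResource {
    private static final int MAX_CONNECTIONS = 10;   // assumed limit
    private final Semaphore permits = new Semaphore(MAX_CONNECTIONS);

    public void useConnection() throws InterruptedException {
        permits.acquire();                    // wait until a slot is free
        try {
            Object conn = openConnection();   // hypothetical: borrow a connection
            // ... use conn ...
        } finally {
            permits.release();                // hand the slot back for reuse
        }
    }

    private Object openConnection() {
        return new Object();                  // stand-in for a real pooled connection
    }
}
```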

Two. The Underlying Implementation Principles of Concurrency Mechanisms
1. Implementation principles of volatile and synchronized, and the differences between them
1) The differences between volatile and synchronized
The two keywords can be compared along six dimensions: functional characteristics, implementation principle, advantages, disadvantages, scope of application, and description.

volatile
Functional characteristics: realizes memory visibility; forbids instruction reordering; is a JVM built-in keyword.
Implementation principle: memory-barrier instructions prevent reordering, and the cache-coherence sniffing mechanism implements memory visibility.
Advantages: does not cause thread context switching and scheduling.
Scope of application: all three conditions must hold at the same time: only a single thread updates the variable's value; the variable has no dependency on other variables (that is, it does not form an invariant of the object together with other fields); accessing the variable does not otherwise require locking.
Description: a volatile write is more expensive than a volatile read.

synchronized
Functional characteristics: realizes memory visibility; guarantees mutual exclusion and atomicity of the operation; forbids instruction reordering; is a JVM built-in keyword.
Implementation principle: locking mechanism.
Advantages: guarantees memory visibility; guarantees atomicity of the operations inside the synchronized region.
Disadvantages: high performance overhead under heavy thread contention; the locking mechanism causes thread context switching and scheduling.
Scope of application: anywhere a synchronization mechanism is needed.
Description: reads and writes under synchronized have the same overhead.
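To make the volatile conditions above concrete, here is a minimal sketch (not from the book) of the typical single-writer flag: one thread performs the volatile write, every other thread only reads, and the volatile semantics guarantee the readers see the change without any locking.

```java
// Sketch: the classic single-writer volatile flag. The volatile write in
// requestStop() is guaranteed to become visible to the worker thread's
// volatile read in run(), without any locking.
public class StoppableWorker implements Runnable {
    private volatile boolean stopRequested = false;

    public void requestStop() {
        stopRequested = true;          // volatile write: made visible to other threads
    }

    @Override
    public void run() {
        while (!stopRequested) {       // volatile read: never sees a stale cached value
            // ... do a unit of work ...
        }
    }

    public static void main(String[] args) throws InterruptedException {
        StoppableWorker worker = new StoppableWorker();
        Thread t = new Thread(worker);
        t.start();
        Thread.sleep(100);
        worker.requestStop();          // the worker loop exits shortly afterwards
        t.join();
    }
}
```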
2) Memory barriers: instructions that constrain the order of memory operations

The Lock-prefixed memory-barrier instruction prohibits reordering and uses a cache lock (or, on older processors, a bus lock) to ensure that the data in the processor's cache (the thread's working memory) is written back to main memory.

Application:

When a volatile variable is written, the compiler emits a Lock-prefixed instruction after the write in the generated code, which prevents reordering and forces the write back to main memory.
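A classic situation where this reordering guarantee matters (an illustration, not an example given in the text) is double-checked locking: without volatile, another thread could observe a reference to a partially constructed object, because the store of the reference may be reordered ahead of the constructor's writes.

```java
// Sketch: double-checked locking. The volatile on 'instance' forbids the
// reordering that could publish the reference before the object is fully
// constructed, and makes the write visible to other threads.
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                    // first check, no lock taken
            synchronized (Singleton.class) {
                if (instance == null) {            // second check, under the lock
                    instance = new Singleton();    // safely published thanks to volatile
                }
            }
        }
        return instance;
    }
}
```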

3) Sniffing: implementing the cache coherence protocol

Based on a cache coherence protocol, sniffing keeps each processor's cache consistent with main memory. Each processor sniffs the data propagated on the bus to detect whether its own cached copy has been invalidated; when it has, the processor reloads the data from main memory into its cache before operating on it.

4) Lock types and lock states

Lock states (recorded in the object header's Mark Word):

No-lock state: the initial state of a lock.
Biased-lock state: when a thread acquires the lock, the thread's ID is recorded in the object header's Mark Word; biased locking is enabled by default and reduces the cost of the same thread re-acquiring the lock.
Lightweight-lock state: entered when the biased thread ID recorded in the Mark Word is not the current thread's ID, that is, when another thread competes for the lock.
Heavyweight-lock state: threads that fail to acquire the lock are blocked instead of spinning.

Lock types (advantages, disadvantages, and where each applies):

Biased lock
Advantages: locking and unlocking need no extra overhead.
Disadvantages: if threads contend for the lock, there is the additional cost of revoking the bias.
Applicable scenarios: only one thread ever accesses the synchronized block.

Lightweight lock
Advantages: competing threads do not block, which improves program responsiveness.
Disadvantages: a thread that cannot obtain the lock spin-waits and consumes CPU.
Applicable scenarios: response time matters and the synchronized block executes quickly.

Heavyweight lock
Advantages: contending threads block rather than spin, so they do not consume CPU while waiting.
Disadvantages: threads block, and response time is slow.
Applicable scenarios: throughput matters and the synchronized block takes a long time to execute.
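Because the lock state lives in the object header's Mark Word, it can actually be observed. As an illustration that is not from the text, and assuming the OpenJDK JOL library (org.openjdk.jol:jol-core) is available on the classpath, a sketch like the following prints the object layout before and while holding the monitor:

```java
import org.openjdk.jol.info.ClassLayout;

// Sketch: print an object's header (Mark Word) before and inside a
// synchronized block, so the lock-state bits described above can be seen.
public class LockStateDemo {
    public static void main(String[] args) {
        Object lock = new Object();

        System.out.println("before synchronized:");
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());

        synchronized (lock) {
            System.out.println("inside synchronized:");
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}
```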
2. The implementation principles of atomic operations

A so-called atomic operation is one that cannot be divided: it is the smallest indivisible unit of execution.

(1) How the processor implements atomic operations

1) Bus locking: the processor asserts a LOCK# signal on the bus so that no other processor can access shared memory while the operation completes

2) Cache locking: only the cache line holding the data is locked, and the cache coherence mechanism guarantees atomicity, which is cheaper than locking the whole bus

(2) How Java implements atomic operations

1) The CAS algorithm: spin in a loop on a compare-and-swap until it succeeds (this is what the atomic classes shown earlier do)

2) Locking: synchronized, or the explicit locks in java.util.concurrent.locks (both approaches are contrasted in the sketch below)
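To contrast the two approaches, here is a small sketch (mine, not the book's) of the same counter implemented once with a CAS-based atomic class and once with an explicit lock; both make the increment atomic, but the first never blocks while the second may.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: two ways to make "count++" atomic in Java.
public class AtomicVsLock {
    // 1) CAS: incrementAndGet retries a compare-and-swap until it succeeds.
    private final AtomicInteger casCount = new AtomicInteger();

    public int incrementWithCas() {
        return casCount.incrementAndGet();
    }

    // 2) Locking: mutual exclusion makes the read-modify-write indivisible.
    private final ReentrantLock lock = new ReentrantLock();
    private int lockedCount;

    public int incrementWithLock() {
        lock.lock();
        try {
            return ++lockedCount;
        } finally {
            lock.unlock();
        }
    }
}
```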
