A summary of the Java concurrency (java.util.concurrent) package

Source: Internet
Author: User
Tags: cas, mutex, semaphore, visibility, volatile

Concurrency principle

    • Single-core systems : threads alternate on the single processor; because the switching is fast and frequent, it gives the impression of simultaneous execution
    • Multi-core systems : threads can not only alternate on a core but also genuinely execute in parallel on different cores
    • Note : the concurrency discussed here refers mainly to concurrency between threads

Common concurrency mechanisms

Concurrency mechanisms provided by different systems

    • UNIX: pipes, messages, shared memory, semaphores, signals
    • Linux kernel: atomic operations, spin locks, semaphores, barriers (since server applications are usually deployed on Linux, these are the most important to know)
    • Solaris thread synchronization primitives: mutexes, semaphores, multiple-reader/single-writer locks, condition variables
    • Windows: wait functions, dispatcher objects, critical sections, slim reader/writer locks, and condition variables

Requirements for mutual exclusion

    • Enforced mutual exclusion : when a critical section is shared, only one thread may be inside it at a time; mutual exclusion must be enforced
    • No interference : a thread that halts in its non-critical section must not interfere with other threads, whether they are in their critical or non-critical sections
    • No indefinite delay : a thread requiring access to a critical section must never be delayed indefinitely, for example through deadlock or starvation
    • Immediate entry : when no thread is in the critical section, any thread that needs to enter must be able to do so immediately
    • No assumptions about speed or cores : no requirements or restrictions are placed on the relative execution speed of the threads or on the number of processors
    • Bounded occupancy : a thread may remain inside its critical section only for a finite time

Approaches to mutual exclusion

    • Hardware support : mutual-exclusion instructions supported directly by the processor reduce overhead, but they are difficult to turn into a general-purpose solution
    • System- or language-level support : mutual exclusion provided by the operating system or the programming language, such as semaphores, monitors, and message passing
    • Software support : these methods usually rest on the assumption that access to a single memory location is itself mutually exclusive: although the order of accesses is not scheduled in advance, simultaneous accesses to the same memory address are serialized by the memory arbiter; on that basis the mutual-exclusion problem can be solved purely in software, for example with Dekker's algorithm or Peterson's algorithm

* Dekker's algorithm

/**
 * Dekker's algorithm
 * Basic assumption: only one access to a given memory location can happen at a time.
 * 1. flag[] marks whether each thread wants to enter the critical section; if one thread
 *    fails, the other can still make progress.
 *    - Each thread may only change its own flag; it may read, but never change, the other thread's flag.
 *    - Before entering the critical section, a thread repeatedly checks the other thread's flag
 *      until the other thread is no longer in the critical section.
 *    - On entering the critical section, a thread immediately sets its flag to true to mark it as occupied.
 *    - On leaving the critical section, a thread immediately sets its flag back to false to release it.
 * 2. turn arranges the order of access to the critical section; a waiting thread must repeatedly
 *    read turn until it is allowed to enter.
 *    - When turn equals the thread's own number, the thread may enter its critical section.
 *    - Otherwise the thread must wait (busy wait / spin wait).
 */
public class Dekker {
    // the intent flags of the two threads
    boolean[] flag = {false, false};
    // whose turn it is to enter the critical section, initially P1 --
    // used to arrange the order of entry and avoid the livelock caused by mutual deference
    int turn = 1;

    public void p0() {
        while (true) {
            // announce P0's intent, then check P1's flag
            flag[0] = true;
            while (flag[1]) {
                // the critical section is busy; check whether it is currently P1's turn
                if (turn == 1) {
                    // it is P1's turn: withdraw P0's claim so P1 can enter (handles the deadlock problem)
                    flag[0] = false;
                    // spin until P1 finishes and passes the turn back to P0
                    while (turn == 1) {
                        /* do nothing, busy wait */
                    }
                    // P1 should have finished by now; re-announce P0's intent to keep P1 out
                    flag[0] = true;
                }
            }
            // P1's flag is false, so P0 may enter the critical section immediately
            /* critical section */
            // on exit, hand the turn to P1 and set P0's flag to false to release the critical section
            turn = 1;
            flag[0] = false;
            /* do other things */
        }
    }

    public void p1() {
        while (true) {
            // announce P1's intent, then check P0's flag
            flag[1] = true;
            while (flag[0]) {
                // the critical section is busy; check whether it is currently P0's turn
                if (turn == 0) {
                    // it is P0's turn: withdraw P1's claim so P0 can enter (handles the deadlock problem)
                    flag[1] = false;
                    // spin until P0 finishes and passes the turn back to P1
                    while (turn == 0) {
                        /* do nothing, busy wait */
                    }
                    // P0 should have finished by now; re-announce P1's intent to keep P0 out
                    flag[1] = true;
                }
            }
            // P0's flag is false, so P1 may enter the critical section immediately
            /* critical section */
            // on exit, hand the turn to P0 and set P1's flag to false to release the critical section
            turn = 0;
            flag[1] = false;
            /* do other things */
        }
    }

    public static void main(String[] args) {
        /* run p0 and p1 concurrently -- interested readers can verify this themselves */
    }
}

* Peterson's algorithm

/**
 * Peterson's algorithm is much simpler than Dekker's and is easily generalized to more threads.
 * 1. Mutual exclusion, from P0's point of view:
 *    - Once P0 has set flag[0] = true, P1 cannot enter the critical section.
 *    - If P1 is already in the critical section, then flag[1] == true and P0 cannot enter.
 * 2. No mutual blocking, from P0's point of view:
 *    - While P0 is blocked in its while loop, flag[1] == true and turn == 1.
 *    - As soon as flag[1] == false or turn == 0, P0 can enter the critical section.
 * 3. Complexity: by simply taking turns when both threads compete, the algorithm reduces
 *    the complexity of achieving mutual exclusion.
 */
public class Peterson {
    boolean[] flag = {false, false}; // whether each thread wants to enter the critical section
    int turn = 0;                    // resolves simultaneous conflicts

    public void p0() {
        while (true) {
            flag[0] = true;
            // explicitly yield the turn to the other thread and use it in the spin condition;
            // this simple hand-over resolves contention and prevents either thread from
            // monopolizing the critical section
            turn = 1;
            while (flag[1] && turn == 1) {
                /* do nothing, busy wait */
            }
            /* critical section */
            flag[0] = false;
            /* do other things */
        }
    }

    public void p1() {
        while (true) {
            flag[1] = true;
            turn = 0;
            while (flag[0] && turn == 0) {
                /* do nothing, busy wait */
            }
            /* critical section */
            flag[1] = false;
            /* do other things */
        }
    }

    public static void main(String[] args) {
        /* run p0 and p1 concurrently -- interested readers can verify this themselves */
    }
}

Semaphores

    • Rationale : N threads can cooperate by means of simple signals, so that a thread can be forced to stop at a given point until it receives a specific signal; any complex cooperation requirement can be satisfied with a suitable signal structure
    • Components :
      • To signal, a special variable called a semaphore (sem) is used, usually initialized to a non-negative number
      • To transmit a signal through the semaphore sem, a thread executes the primitive semSignal(sem): sem is incremented by 1, and if sem is still less than or equal to 0, a thread blocked in semWait is unblocked
      • To receive a signal through the semaphore sem, a thread executes the primitive semWait(sem): sem is decremented by 1, and if sem becomes negative, the thread executing semWait is blocked; otherwise it continues execution
    • Classification : whether counting or binary, a semaphore needs a queue to hold the processes/threads waiting on it, which raises the question of the order in which they are removed from the queue
      •   Strong semaphore : uses a FIFO (first-in, first-out) fairness policy, i.e. the longest-blocked process/thread is released from the queue first (the common case)
      •   Weak semaphore : imposes no requirement on the order in which processes/threads are removed from the queue
    • Note : a binary semaphore differs only in that sem takes just the values 0 and 1

* Semaphore implementation (CAS version)

/**
 * Design principle: at any time only one thread may manipulate a semaphore through the
 * wait and signal operations.
 * Requirement: semWait and semSignal must be implemented as atomic primitives.
 * Semaphore (sem) fields:
 *   flag  : whether the semaphore is currently being operated on; 0 by default
 *   count : when >= 0, the number of threads that can execute semWait without being suspended;
 *           when < 0, its magnitude is the number of threads suspended in the semaphore's wait queue
 *   queue : the wait queue associated with the semaphore; blocked threads are placed here
 * PS: the Boolean version of compare_and_swap (CAS) is used here.
 */
semWait(sem) {
    // spin until sem.flag is 0, then atomically set it to 1;
    // busy waiting keeps the queue operations synchronized, and because semWait and
    // semSignal are very short, the overhead of the spin is small
    while (!compare_and_swap(sem.flag, 0, 1))
        ; /* do nothing */
    sem.count--;
    if (sem.count < 0) {
        /* place this thread on sem.queue and block it */
    }
    sem.flag = 0;
}

semSignal(sem) {
    // spin until sem.flag is 0, then atomically set it to 1
    while (!compare_and_swap(sem.flag, 0, 1))
        ; /* do nothing */
    sem.count++;
    if (sem.count <= 0) {
        /* remove a thread from sem.queue and move it to the ready queue */
    }
    sem.flag = 0;
}

* Mutual exclusion with a semaphore

final int n = /* number of threads */;
int s = 1; // semaphore

public void p(int i) {
    while (true) {
        semWait(s);
        /* critical section */
        semSignal(s);
        /* do other things */
    }
}
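In the Java class library, the same pattern maps onto java.util.concurrent.Semaphore, whose acquire and release play the roles of semWait and semSignal. Below is a minimal sketch under that mapping; the class and variable names are mine, chosen only for illustration.

import java.util.concurrent.Semaphore;

public class SemaphoreMutexDemo {
    // a binary semaphore (one permit) used purely for mutual exclusion;
    // "fair" mode gives the FIFO ordering of a strong semaphore
    private static final Semaphore MUTEX = new Semaphore(1, true);
    private static int sharedCounter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 10_000; i++) {
                try {
                    MUTEX.acquire();          // semWait(s): blocks when no permit is available
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                try {
                    sharedCounter++;          // critical section
                } finally {
                    MUTEX.release();          // semSignal(s)
                }
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("counter = " + sharedCounter); // expected: 20000
    }
}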

Message passing (a familiar concurrency mechanism; message queues such as RabbitMQ are well-known applications of it)

1. Overview of message passing

    • Definition : message passing means that threads communicate with each other by sending messages
    • Implementation : a pair of primitives is typically provided: send(destination, message) and receive(source, message)
    • Send : a thread sends a message (message) to a specified destination thread (destination)
    • Receive : a thread receives a message (message) from a source thread (source) by executing the receive primitive

2. Message structure

    • Message type : the type of the message; the receiver often listens for and filters messages by this type
    • Source/destination IDs : identifiers of the sending and receiving threads
    • Message length : the total length of the whole message; take care to keep it bounded
    • Control information : additional information, such as pointers for building message lists, counts of messages passed between source and destination, sequence numbers, and priority
    • Message content : the message body, equivalent to the body of a packet
    • Note : readers can compare this with the packet formats of various protocols, such as HTTP messages and TCP segments (a small illustrative class follows this list)
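Purely to illustrate the fields listed above, a message envelope might be modelled in Java roughly as follows; the field names and types are hypothetical and not taken from any standard API.

// hypothetical message envelope mirroring the fields above; illustration only
public class Message {
    private final String type;         // message type, used by the receiver to filter/dispatch
    private final long sourceId;       // identifier of the sending thread
    private final long destinationId;  // identifier of the intended receiver
    private final int priority;        // control information, e.g. delivery priority
    private final byte[] body;         // the message content itself
    private final int length;          // total length; derived from the body here for simplicity

    public Message(String type, long sourceId, long destinationId, int priority, byte[] body) {
        this.type = type;
        this.sourceId = sourceId;
        this.destinationId = destinationId;
        this.priority = priority;
        this.body = body;
        this.length = body.length;
    }

    public String getType() { return type; }
    public byte[] getBody() { return body; }
    // getters for the remaining fields omitted
}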

3. Message communication behavior

    • Send : either the sending thread blocks until the message is received by the destination thread, or it does not block
    • Receive :
      • If a message was sent before the receive is issued, the message is accepted by the destination thread, which continues execution
      • If no message is waiting, the destination thread either blocks until the awaited message arrives, or continues execution and abandons the receive (a BlockingQueue-based sketch follows this list)
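The send/receive primitives above are not part of the JDK, but their blocking variants can be approximated with a BlockingQueue acting as the destination's mailbox. The sketch below shows the idea; the class and variable names are mine.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassingDemo {
    public static void main(String[] args) {
        // the queue plays the role of the destination mailbox
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(16);

        Thread sender = new Thread(() -> {
            try {
                mailbox.put("hello");   // send(destination, message): blocks if the mailbox is full
                mailbox.put("world");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread receiver = new Thread(() -> {
            try {
                System.out.println(mailbox.take()); // receive(source, message): blocks until a message arrives
                System.out.println(mailbox.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        receiver.start();
        sender.start();
    }
}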

Concurrent Package Structure

Overall class diagram of the concurrent package (diagram not reproduced here)

Concurrent package Implementation Mechanism

    • Summary : in the overall design of the concurrent package, Doug Lea adopted the three-layer architecture described below
    • Note : the author will cover the contents of the package in follow-up articles, so please stay tuned (progress depends on how busy the author is)

1. Bottom layer: hardware instruction support

    • Summary : the bottom layer of the concurrent package relies on hardware-level support for volatile and CAS
    • volatile: the memory read/write semantics of volatile, together with the prevention of reordering, ensure the visibility of data
    • CAS: efficient machine-level atomic instructions; CAS guarantees the atomicity of read-modify-write operations on memory
    • Combination : combining volatile reads/writes with CAS enables effective communication between threads and provides atomicity, visibility, and ordering (see the sketch after this list)
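A small sketch of how these two primitives surface in everyday Java code: a volatile flag for visibility and AtomicInteger (which is built on CAS) for atomic updates. The class and field names here are just for illustration.

import java.util.concurrent.atomic.AtomicInteger;

public class VolatileAndCasDemo {
    // a volatile write by one thread is visible to subsequent reads by other threads,
    // and reordering around the access is restricted
    private volatile boolean ready = false;

    // AtomicInteger delegates to the machine-level CAS instruction
    private final AtomicInteger counter = new AtomicInteger(0);

    public void publish() {
        counter.incrementAndGet();   // internally a CAS retry loop
        ready = true;                // volatile write publishes the change
    }

    public int readIfReady() {
        return ready ? counter.get() : -1;   // volatile read sees the published value
    }

    // a hand-written CAS loop, the same retry pattern incrementAndGet uses internally
    public int casIncrement() {
        int current;
        do {
            current = counter.get();
            // retry if another thread changed the value in between
        } while (!counter.compareAndSet(current, current + 1));
        return current + 1;
    }
}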

2. Middle layer: basic data structures + algorithm support

    • Summary : for the data structures and algorithms, Doug Lea designed the AQS framework as the concurrency foundation of the whole class library, and introduced non-blocking data structures and atomic variable classes to strengthen its concurrency characteristics
    • AQS framework : AQS (AbstractQueuedSynchronizer) provides the most basic and effective concurrency APIs; Doug Lea intended it to be the underlying building block for all concurrent operations, and most implementations in the package depend on it, while AQS itself is built on the underlying support of CAS and volatile (a toy AQS-based lock is sketched after this list)
    • Non-blocking data structures : non-blocking data structures are the basis for the design of the non-blocking queues, and an important reference point for comparison with the blocking queues
    • Atomic variable classes : Doug Lea designed a dedicated class library for atomic variables, later extended with classes such as LongAdder and LongAccumulator, which reflects the importance of numeric operations in concurrent programming
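To give a feel for the AQS layer, here is a toy, non-reentrant mutex built by subclassing AbstractQueuedSynchronizer, in the same spirit as (but far simpler than) the Sync classes inside the JDK; it is a sketch for illustration, not production code.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// a toy, non-reentrant mutex built on AQS; illustration only
public class SimpleMutex {
    private static final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int ignored) {
            // moving the state from 0 to 1 via CAS means this thread now owns the lock
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int ignored) {
            // reset the state so a queued thread can acquire the lock
            setState(0);
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()       { sync.acquire(1); }
    public void unlock()     { sync.release(1); }
    public boolean tryLock() { return sync.tryAcquire(1); }

    public boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException {
        return sync.tryAcquireNanos(1, unit.toNanos(timeout));
    }
}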

3. High level: concurrent class library support

    • Summary : Doug Lea provides a rich concurrency class library in the package, which makes fast and safe use of concurrent operations much easier (a combined example follows this list)
    • Lock : the Lock interface defines a set of standard concurrency operations; see the AQS framework locks
    • Synchronizer : the synchronizer of each concurrency class is implemented by inheriting from AQS, such as the Sync class inside ReentrantLock; the author groups these under the synchronizer category
    • Blocking queues : as the name implies, queues that support blocking, mainly the classes whose names end in Queue
    • Executors : task executors, such as the thread pools and the fork/join framework
    • Concurrent containers : containers that support concurrency, mainly the CopyOnWrite (COW) classes and the classes whose names begin with Concurrent; concurrent containers are usually non-blocking
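A short sketch that exercises several of these categories together: an executor runs the tasks, a blocking queue feeds them work, a concurrent container collects the results, and a CountDownLatch acts as the synchronizer. All classes are standard java.util.concurrent classes; the word-counting scenario itself is made up for illustration.

import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class ConcurrentLibraryDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);                // executor
        BlockingQueue<String> work = new LinkedBlockingQueue<>(                // blocking queue
                List.of("lock", "queue", "executor", "lock"));
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>(); // concurrent container
        CountDownLatch done = new CountDownLatch(4);                           // synchronizer

        for (int i = 0; i < 4; i++) {
            pool.submit(() -> {
                String word;
                while ((word = work.poll()) != null) {   // drain the queue without blocking
                    counts.merge(word, 1, Integer::sum); // atomic per-key update
                }
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        System.out.println(counts); // e.g. {executor=1, queue=1, lock=2}
    }
}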

PS: Thanks to Chih Kira for the friendly sponsorship; I will keep working hard to write better articles, and subsequent articles will continue to be improved and iterated on ~
