The principle of ConcurrentHashMap
The data is partitioned into segments, and each segment is assigned its own lock. When a thread acquires a segment's lock to access that segment's data, the data in the other segments remains accessible to other threads.
2) Structure of ConcurrentHashMap
ConcurrentHashMap is composed of a Segment array and HashEntry arrays. A Segment is a reentrant lock (it extends ReentrantLock) and plays the role of the lock; a HashEntry stores a key-value pair. Each Segment contains a HashEntry array, each HashEntry is a node of a linked list, and each Segment guards the elements of its own HashEntry array: before the data in a HashEntry array is modified, the corresponding Segment lock must first be acquired.
Initializing the Segment array:
if (concurrencyLevel > MAX_SEGMENTS) {
    concurrencyLevel = MAX_SEGMENTS;
}
int sshift = 0;
int ssize = 1;
while (ssize < concurrencyLevel) {
    ++sshift;
    ssize <<= 1;
}
segmentShift = 32 - sshift;
segmentMask = ssize - 1;
this.segments = Segment.newArray(ssize);
ssize is the smallest power of two greater than or equal to concurrencyLevel.
segmentShift is the segment offset.
segmentMask is the segment mask.
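As a quick check of the sizing loop above, this sketch (reusing the local names from the pseudocode; the sample concurrencyLevel is illustrative) prints the values computed for a level that is not itself a power of two:

```java
public class SegmentSizing {
    public static void main(String[] args) {
        int concurrencyLevel = 17; // requested level; not a power of two
        int sshift = 0;
        int ssize = 1;
        // round ssize up to the smallest power of two >= concurrencyLevel
        while (ssize < concurrencyLevel) {
            ++sshift;
            ssize <<= 1;
        }
        int segmentShift = 32 - sshift; // the high-order bits select the segment
        int segmentMask = ssize - 1;    // mask over the segment index
        System.out.println(ssize + " " + sshift + " " + segmentShift + " " + segmentMask);
        // prints: 32 5 27 31
    }
}
```

So a requested level of 17 produces 32 segments, and a hash is mapped to a segment by shifting its top 5 bits down and masking with 31.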
Locating a Segment: when inserting or retrieving an element, the key's hashCode is re-hashed (hashed a second time). The second hash reduces hash conflicts so that elements are distributed evenly across the Segments; in the worst case, without it, all the values would hash onto a single Segment.
The pseudo-code to calculate the Segment location is as follows:
final Segment<K,V> segmentFor(int hash) {
    return segments[(hash >>> segmentShift) & segmentMask];
}
ConcurrentHashMap operations: get, put, size
The get pseudo-code is as follows:
public V get(Object key) {
    // re-hash the key's hashCode; a shift-based algorithm is used for efficiency
    int hash = hash(key.hashCode());
    return segmentFor(hash).get(key, hash);
}
The get operation is efficient because it does not lock while reading; only if it reads a null value does it re-read under the lock. How does it manage without locking? Mainly by declaring the shared fields volatile, which makes writes visible across threads. Volatile fields can be read concurrently without reading stale values, but must only be written single-threaded (under the lock); by the happens-before rule, a volatile write is ordered before subsequent volatile reads.
The algorithm for locating a HashEntry is the same kind of masking as for locating a Segment, but it uses different bits of the hash.
The two formulas are as follows:
(hash >>> segmentShift) & segmentMask // locates the Segment (high-order bits)
int index = hash & (tab.length - 1);  // locates the HashEntry (low-order bits)
The aim is to spread elements over both the Segments and the HashEntry slots: using different bits prevents keys that land in the same Segment from also colliding in the same HashEntry slot.
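To make the two formulas concrete, here is a small sketch (the hash value, shift, mask, and table length are illustrative, chosen to match the 16-segment setup above) computing both indices for a single hash:

```java
public class IndexDemo {
    public static void main(String[] args) {
        int hash = 0xABCD1234;  // an already re-hashed value (illustrative)
        int segmentShift = 28;  // 16 segments -> sshift = 4 -> shift = 32 - 4
        int segmentMask = 15;
        int tabLength = 16;     // HashEntry table length inside one segment

        // segment index: taken from the HIGH bits of the hash
        int segmentIndex = (hash >>> segmentShift) & segmentMask;
        // entry index: taken from the LOW bits of the hash
        int entryIndex = hash & (tabLength - 1);

        System.out.println(segmentIndex + " " + entryIndex);
        // prints: 10 4
    }
}
```

The segment is chosen by bits 28-31 (0xA = 10) while the slot is chosen by bits 0-3 (0x4), so the two decisions are independent.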
Put operation:
put writes to shared variables, so the Segment lock is required for thread safety.
Step 1) Determine whether the HashEntry array needs to grow before inserting.
Note: HashMap grows after inserting, which can result in a wasted resize (the array grows and then nothing more is inserted).
ConcurrentHashMap instead checks and grows before the insert.
Growth: the HashEntry array is doubled and the existing elements are re-hashed into it before the insert; only that one Segment grows, not the whole map.
Size operation
The obviously safe way to count size would be to lock the put, remove, and clean methods of every segment while counting, but that is very inefficient.
Instead, ConcurrentHashMap keeps a modCount variable that is incremented by 1 on every put, remove, and clean operation. When computing size, it records the modCounts before and after summing the segment counts; if the snapshots are equal, the container was not modified during the count and the sum is valid (otherwise it retries, eventually falling back to locking).
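A hedged sketch of that counting strategy (the class, field, and constant names here are illustrative stand-ins, not the JDK's internals): sum the per-segment counts without locking, use modCount snapshots to detect concurrent modification, and lock every segment only if the snapshots keep changing.

```java
import java.util.concurrent.locks.ReentrantLock;

public class SizeSketch {
    // a stripped-down stand-in for a Segment: just a count and a modCount
    static class Segment extends ReentrantLock {
        volatile int count;
        int modCount;
    }

    static final int RETRIES_BEFORE_LOCK = 2;
    final Segment[] segments = { new Segment(), new Segment() };

    public int size() {
        long sum;
        // first, try a few times without locking
        for (int k = 0; k < RETRIES_BEFORE_LOCK; k++) {
            sum = 0;
            long mcBefore = 0, mcAfter = 0;
            for (Segment s : segments) { mcBefore += s.modCount; sum += s.count; }
            for (Segment s : segments) { mcAfter += s.modCount; }
            if (mcBefore == mcAfter) return (int) sum; // no writes happened in between
        }
        // still unstable: lock every segment and count under the locks
        for (Segment s : segments) s.lock();
        try {
            sum = 0;
            for (Segment s : segments) sum += s.count;
            return (int) sum;
        } finally {
            for (Segment s : segments) s.unlock();
        }
    }

    public static void main(String[] args) {
        SizeSketch m = new SizeSketch();
        m.segments[0].count = 3;
        m.segments[1].count = 4;
        System.out.println(m.size()); // prints: 7
    }
}
```

With no concurrent writer the first unlocked pass already sees matching modCount snapshots, so the locks are never taken.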
ConcurrentLinkedQueue: a non-blocking queue
Queues are implemented in two ways: blocking queues and non-blocking queues. Blocking queues use a blocking algorithm and are implemented with locks; non-blocking queues are implemented with CAS loops.
ConcurrentLinkedQueue is an unbounded thread-safe queue, ordered FIFO, that uses a wait-free (CAS-based) algorithm.
The simplified offer (enqueue) algorithm:
public boolean offer(E e) {
    Node<E> node = new Node<>(e);
    for (;;) {
        Node<E> tail = getTail();
        // link the new node after tail, then swing tail to the new node
        if (tail.casNext(null, node)) {
            casTail(tail, node); // harmless if this fails: another thread helped
            return true;
        }
    }
}
Dequeue: this is my own simplified pseudo-code and may not be exact; corrections are welcome.
public E poll() {
    for (;;) {
        Node<E> head = getHead();
        Node<E> current = head;
        // replace the head node with its successor
        if (casHead(current, current.getNext())) {
            // cut the reference to help GC
            current.setNext(null);
            return current.getItem();
        }
    }
}
Note that each node's next pointer can be implemented with an AtomicReference, which is what provides the CAS operation.
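Putting the two loops together, here is a minimal self-contained sketch of a CAS-based queue built on AtomicReference. It is a simplified Michael-Scott-style queue with a dummy sentinel node, not ConcurrentLinkedQueue's actual implementation; all names are illustrative.

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasQueue<E> {
    static final class Node<E> {
        final E item;
        final AtomicReference<Node<E>> next = new AtomicReference<>();
        Node(E item) { this.item = item; }
    }

    // a dummy sentinel node keeps head and tail non-null even when empty
    private final AtomicReference<Node<E>> head;
    private final AtomicReference<Node<E>> tail;

    public CasQueue() {
        Node<E> dummy = new Node<>(null);
        head = new AtomicReference<>(dummy);
        tail = new AtomicReference<>(dummy);
    }

    public boolean offer(E e) {
        Node<E> node = new Node<>(e);
        for (;;) {
            Node<E> t = tail.get();
            Node<E> next = t.next.get();
            if (next != null) {
                tail.compareAndSet(t, next); // tail is lagging; help advance it
            } else if (t.next.compareAndSet(null, node)) {
                tail.compareAndSet(t, node); // ok if this fails: someone helped
                return true;
            }
        }
    }

    public E poll() {
        for (;;) {
            Node<E> h = head.get();
            Node<E> first = h.next.get();
            if (first == null) return null;     // queue is empty
            if (head.compareAndSet(h, first)) { // swing head past the old node
                return first.item;
            }
        }
    }

    public static void main(String[] args) {
        CasQueue<Integer> q = new CasQueue<>();
        q.offer(1);
        q.offer(2);
        System.out.println(q.poll() + " " + q.poll() + " " + q.poll());
        // prints: 1 2 null
    }
}
```

Every mutation is a single compareAndSet on an AtomicReference, so no thread ever blocks; a failed CAS simply means another thread got there first, and the loop retries.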
Blocking queues
A blocking queue is a queue that supports two additional operations: blocking insertion and blocking removal.
1) Blocking insertion: when the queue is full, insert operations block until the queue is no longer full.
2) Blocking removal: when the queue is empty, removal operations block until the queue is non-empty.
Blocking queues are commonly used in producer-consumer scenarios: the producer is the thread that adds elements to the queue, the consumer is the thread that takes elements from it, and the blocking queue is the container that hands elements between them.
Four ways a blocking queue handles an operation that cannot proceed:
1) Throw an exception (add / remove)
2) Return a special value (offer / poll)
3) Block forever (put / take)
4) Time out and exit (offer / poll with a timeout)
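Three of the four behaviors can be observed directly on a full ArrayBlockingQueue of capacity 1 (the fourth, put, would block this thread forever, so it is only noted in a comment):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class FullQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<String> q = new ArrayBlockingQueue<>(1);
        q.add("x"); // the queue is now full

        // 2) return a special value: offer() reports failure with false
        System.out.println(q.offer("y"));

        // 4) timeout exit: offer(e, time, unit) gives up after the timeout
        System.out.println(q.offer("y", 50, TimeUnit.MILLISECONDS));

        // 1) throw an exception: add() fails with IllegalStateException
        try {
            q.add("y");
        } catch (IllegalStateException ex) {
            System.out.println("IllegalStateException");
        }

        // 3) block forever: q.put("y") would wait here until space appears
    }
}
```

The same four-way split exists on the removal side of an empty queue: remove throws, poll returns null, take blocks, and timed poll gives up after the timeout.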
Common blocking queues in Java
1) ArrayBlockingQueue: a bounded blocking queue backed by an array, ordered FIFO.
2) LinkedBlockingQueue: an optionally-bounded blocking queue backed by a linked list; when no capacity is given, it defaults to Integer.MAX_VALUE.
3) PriorityBlockingQueue: an unbounded blocking queue that orders elements by a Comparator or by their compareTo method.
4) DelayQueue: an unbounded blocking queue whose elements can only be taken after their delay expires; it is backed by a PriorityQueue and its elements must implement the Delayed interface.
DelayQueue is very useful for:
1) Cache design: store each cache entry's validity period in a DelayQueue and have a thread loop on the queue; when an element becomes available, its cache entry has expired.
2) Scheduled task scheduling: store the tasks to be executed that day, with their execution times, in a DelayQueue and execute each task as it becomes available.
How to implement the Delayed interface
1) Fix the expiry time when the object is created.
2) Implement the getDelay method to return the time remaining until execution.
3) Implement the compareTo method so that elements with the longest remaining delay sort last (soonest-to-expire first).
5) LinkedBlockingDeque: a doubly-linked, optionally-bounded blocking deque that can be used in work-stealing patterns.
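The three-step Delayed recipe above can be sketched as a runnable class (the item name and delay values are illustrative):

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayedItem implements Delayed {
    private final String name;
    private final long expireAtNanos; // 1) fixed when the object is created

    public DelayedItem(String name, long delayMillis) {
        this.name = name;
        this.expireAtNanos = System.nanoTime()
                + TimeUnit.MILLISECONDS.toNanos(delayMillis);
    }

    // 2) getDelay returns the remaining delay, converted to the requested unit
    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(expireAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    // 3) compareTo orders elements so the soonest-to-expire comes first
    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                            other.getDelay(TimeUnit.NANOSECONDS));
    }

    public String name() { return name; }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedItem> q = new DelayQueue<>();
        q.put(new DelayedItem("later", 100));
        q.put(new DelayedItem("sooner", 10));
        System.out.println(q.poll());        // null: nothing has expired yet
        System.out.println(q.take().name()); // blocks ~10 ms, then prints: sooner
    }
}
```

poll returns null while every element's delay is still positive, whereas take blocks until the head's delay expires; the compareTo ordering guarantees the head is always the soonest-to-expire element.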
Blocking queue implementation principle:
1) It uses the notification pattern, implemented mainly with Lock Conditions.
The pseudo code is as follows:
final Lock lock = new ReentrantLock();
private final Condition notEmpty = lock.newCondition(); // signaled when an element can be taken
private final Condition notFull = lock.newCondition();  // signaled when an element can be inserted
The insert method:
public void put(E e) throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        while (count == items.length) {
            notFull.await();
        }
        insert(e);
    } finally {
        lock.unlock();
    }
}
private void insert(E e) {
    items[putIndex] = e;
    putIndex = inc(putIndex);
    ++count;
    notEmpty.signal(); // a waiting take thread can now proceed
}
The take method:
public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        while (count == 0) {
            // the queue is empty; block this thread
            notEmpty.await();
        }
        return extract();
    } finally {
        lock.unlock();
    }
}
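Assembled into one self-contained class, the pattern above looks like this. It is a minimal bounded buffer in the style of the pseudo code, not ArrayBlockingQueue's actual source; index wrap-around is done with a modulo for brevity.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<E> {
    private final Object[] items;
    private int putIndex, takeIndex, count;

    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition(); // take threads wait here
    private final Condition notFull  = lock.newCondition(); // put threads wait here

    public BoundedBuffer(int capacity) { items = new Object[capacity]; }

    public void put(E e) throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == items.length) notFull.await(); // full: block the producer
            items[putIndex] = e;
            putIndex = (putIndex + 1) % items.length;
            ++count;
            notEmpty.signal(); // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public E take() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == 0) notEmpty.await(); // empty: block the consumer
            E e = (E) items[takeIndex];
            items[takeIndex] = null;
            takeIndex = (takeIndex + 1) % items.length;
            --count;
            notFull.signal(); // wake one waiting producer
            return e;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedBuffer<Integer> buf = new BoundedBuffer<>(2);
        buf.put(1);
        buf.put(2);
        System.out.println(buf.take() + " " + buf.take()); // prints: 1 2
    }
}
```

Note the while loops around await(): a woken thread must re-check its condition, because another thread may have consumed the slot or element between the signal and the wake-up.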
The Fork/Join framework
The Fork/Join framework, introduced in JDK 1.7, is a framework for executing tasks in parallel.
Tip: parallelism vs. concurrency
The essence of concurrency is that one physical CPU (or several) is multiplexed among several programs; concurrency forces multiple users to share limited physical resources in order to improve utilization.
Parallelism means two or more events or activities occurring at the same instant. In a multiprogramming environment, parallelism lets multiple programs execute simultaneously on different CPUs.
Principle: fork splits a large task into a number of small tasks; join merges the results of those small tasks to finally obtain the result of the large task.
Work-stealing algorithm
A thread takes tasks from another thread's queue and executes them. Scenario: a large task A is split into independent subtasks placed in different queues, with one worker thread per queue. When the tasks in some queues are finished while other queues still have unfinished work, an idle thread goes and executes tasks from a queue that still has them (for example, thread X finds the tasks in queue A are gone, so it executes tasks from queue B).
The advantage is that it makes full use of the threads for parallel computation and reduces contention on any single queue.
The disadvantage is that contention still exists in some cases, for example when a queue holds only one task; it also consumes extra system resources, such as creating multiple queues and threads.
Note: to reduce contention, a double-ended queue (deque) is usually used: the stealing thread takes tasks from the tail of the deque, while the owning thread takes them from the head.
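The deque discipline in the note above can be illustrated with LinkedBlockingDeque (this only shows the head/tail access pattern on one queue, not a full multi-threaded scheduler; the task names are illustrative):

```java
import java.util.concurrent.LinkedBlockingDeque;

public class StealDemo {
    public static void main(String[] args) {
        // each worker owns one deque; here queue A still has work left
        LinkedBlockingDeque<String> queueA = new LinkedBlockingDeque<>();
        queueA.addLast("task-1");
        queueA.addLast("task-2");

        // the owning thread takes from the HEAD of its own deque...
        String own = queueA.pollFirst();
        // ...while an idle thread steals from the TAIL, so the two rarely collide
        String stolen = queueA.pollLast();

        System.out.println(own + " " + stolen); // prints: task-1 task-2
    }
}
```

Because owner and thief operate on opposite ends, they only contend when the deque is down to its last element, which is exactly the residual-competition case mentioned above.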
Fork/Join has two main working steps:
1) Split the task
2) Execute the tasks and merge the results
RecursiveAction: for tasks that return no result.
RecursiveTask: for tasks that return a result.
A ForkJoinTask must be executed by a ForkJoinPool.
How to use: first extend RecursiveTask and override the compute method to split the task, fork the subtasks, and join/merge their results.
Usage in a main method:
public static void main(String[] args) throws Exception {
    // create the thread pool
    ForkJoinPool pool = new ForkJoinPool();
    // create the root task
    CountTask task = new CountTask();
    // submit the task and wait for the result
    Future<Integer> result = pool.submit(task);
    System.out.println(result.get());
}
Exception handling:
// check whether the task terminated with an exception
if (task.isCompletedAbnormally()) {
    System.out.println(task.getException());
}
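Here is a complete, runnable version of the CountTask sketched above, summing the integers in a range. The threshold and the range are illustrative choices; the fork/join/merge structure is the part that matters.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// sums the integers in [start, end] by recursive splitting
public class CountTask extends RecursiveTask<Integer> {
    private static final int THRESHOLD = 2; // below this size, compute directly
    private final int start, end;

    public CountTask(int start, int end) {
        this.start = start;
        this.end = end;
    }

    @Override
    protected Integer compute() {
        if (end - start <= THRESHOLD) {
            int sum = 0;
            for (int i = start; i <= end; i++) sum += i;
            return sum;
        }
        // split the task into two halves
        int mid = (start + end) / 2;
        CountTask left = new CountTask(start, mid);
        CountTask right = new CountTask(mid + 1, end);
        left.fork();  // schedule both halves asynchronously
        right.fork();
        // join merges the sub-results into the result of the large task
        return left.join() + right.join();
    }

    public static void main(String[] args) throws Exception {
        ForkJoinPool pool = new ForkJoinPool();
        CountTask task = new CountTask(1, 100);
        System.out.println(pool.submit(task).get()); // prints: 5050
        if (task.isCompletedAbnormally()) {
            System.out.println(task.getException());
        }
    }
}
```

Each compute call either solves the problem directly (below the threshold) or forks two subtasks and joins their results, which is exactly the split-then-merge pattern the framework is built around.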
Implementation principle of the Fork/Join framework
1) The fork principle: fork calls pushTask to enqueue the current task asynchronously and returns immediately:
public final ForkJoinTask<V> fork() {
    ((ForkJoinWorkerThread) Thread.currentThread()).pushTask(this);
    return this;
}
pushTask puts the current task into the worker's task array and then calls the ForkJoinPool's signalWork method to wake up or create a worker thread.
The pseudo code (simplified from the JDK 7 internals) is as follows:
final void pushTask(ForkJoinTask<?> t) {
    ForkJoinTask<?>[] q; int s, m;
    if ((q = queue) != null) {
        // compute the slot's memory offset
        long u = (((s = queueTop) & (m = q.length - 1)) << ASHIFT) + ABASE;
        // write the task straight into the array slot with an ordered store
        UNSAFE.putOrderedObject(q, u, t);
        queueTop = s + 1;
        if ((s -= queueBase) <= 2) {
            // wake up a worker thread
            pool.signalWork();
        } else if (s == m) {
            // the queue is full; grow it
            growQueue();
        }
    }
}
2) The join principle: join first checks the task's completion status; if the task has completed, it returns according to that status directly.
If the status is NORMAL, the result is returned; if EXCEPTIONAL, the recorded exception is rethrown.
If the task has not completed, it is taken from the task array and executed, after which the status is reported as above.
This article is from the "ITER Summary" blog; please keep this source: http://5855156.blog.51cto.com/5845156/1959758
Reading notes on chapter 6 of The Art of Java Concurrent Programming: ConcurrentHashMap and the other concurrent containers.