Java Concurrency in Practice, Chapter 5: Basic Building Blocks


Delegation is one of the most effective strategies for creating thread-safe classes: simply let existing thread-safe classes manage all of the state.

I. Synchronized Container Classes

1. Problems with Synchronized Containers

The synchronized container classes are thread-safe, and the compound operations the container itself provides are atomic. However, compound operations performed by the client (such as iteration, navigation, or check-then-act) require additional client-side locking to be safe.

Because the synchronized container classes commit to a synchronization policy that supports client-side locking, a client can build atomic compound operations, but it must use the same lock the container uses internally, namely the container object itself.
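As a minimal sketch of client-side locking (the class and method names here are illustrative, not from the book's listings), the compound check-then-act operations below synchronize on the list itself, which is the lock used internally by the wrapper returned from Collections.synchronizedList:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SafeLastElement {
    // Compound operation made atomic by locking on the list itself,
    // the same lock the synchronized wrapper uses internally.
    public static <E> E getLast(List<E> list) {
        synchronized (list) {
            int lastIndex = list.size() - 1;
            return list.get(lastIndex);
        }
    }

    public static <E> void deleteLast(List<E> list) {
        synchronized (list) {
            int lastIndex = list.size() - 1;
            list.remove(lastIndex);
        }
    }

    public static void main(String[] args) {
        List<String> list = Collections.synchronizedList(new ArrayList<String>());
        list.add("a");
        list.add("b");
        deleteLast(list);                  // removes "b"
        System.out.println(getLast(list)); // prints "a"
    }
}

Without the synchronized blocks, another thread could remove the last element between the size() check and the get()/remove() call, causing an IndexOutOfBoundsException.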

2. Iterators and ConcurrentModificationException

Fail-fast mechanism: a ConcurrentModificationException is thrown when the container is modified during iteration.

Workarounds: hold the container's lock for the duration of the iteration, or iterate over a copy of the container.

3. Hidden Iterators

Some operations perform hidden iteration: hashCode, equals, toString, containsAll, removeAll, retainAll, and so on.
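A minimal illustrative sketch of the problem (class name is hypothetical): the set is guarded by this object's intrinsic lock, but string concatenation triggers a hidden iteration (set.toString()) without holding that lock.

import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class HiddenIteratorExample {
    private final Set<Integer> set = new HashSet<Integer>();

    public synchronized void add(Integer i) {
        set.add(i);
    }

    public synchronized void remove(Integer i) {
        set.remove(i);
    }

    public void addTenThings() {
        Random r = new Random();
        for (int i = 0; i < 10; i++) {
            add(r.nextInt());
        }
        // Hidden iteration: "..." + set calls set.toString(), which walks the set
        // while not holding the lock; if another thread modifies the set at the
        // same time, this line can throw ConcurrentModificationException.
        System.out.println("DEBUG: added ten elements to " + set);
    }
}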

II. Concurrent Containers

Synchronized containers achieve thread safety by serializing all access to the container's state, but this severely reduces concurrency. Replacing synchronized containers with concurrent containers can dramatically improve scalability with little risk.

BlockingQueue extends Queue with blocking insertion and retrieval operations; ConcurrentHashMap replaces the synchronized hash-based Map (synchronized HashMap).

1. ConcurrentHashMap

ConcurrentHashMap does not synchronize every method on a single lock so that only one thread can use the container at a time; in other words, it does not implement exclusive access. Instead it uses a finer-grained locking mechanism, called lock striping (segment locks), to allow a far greater degree of shared access.

The iterators returned by ConcurrentHashMap do not throw ConcurrentModificationException, so there is no need to lock the container during iteration; these iterators are weakly consistent rather than fail-fast.

ConcurrentHashMap weakens the semantics of some operations, such as size() (which returns an approximation rather than an exact count) and isEmpty().

2. Additional Atomic Map Operations

ConcurrentHashMap implements the ConcurrentMap interface, which adds atomic compound operations such as put-if-absent, remove-if-equal, and replace-if-equal.
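A minimal sketch of these atomic operations (variable names are illustrative):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapOps {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();

        // put-if-absent: insert only when no mapping exists; returns the previous value or null
        map.putIfAbsent("count", 1);
        map.putIfAbsent("count", 99);   // no effect, "count" is already mapped

        // replace-if-equal: replace only when the current value equals the expected one
        map.replace("count", 1, 2);     // succeeds, value becomes 2

        // remove-if-equal: remove only when the key is currently mapped to the given value
        map.remove("count", 5);         // fails, value is 2, so the mapping is kept

        System.out.println(map);        // prints {count=2}
    }
}

Each of these calls performs its check-then-act sequence atomically inside the map, so no client-side locking is needed.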

3. CopyOnWriteArrayList

Key characteristic: copy-on-write, i.e. the underlying array is copied on every modification (which incurs overhead). As long as an effectively immutable object is properly published, no further synchronization is needed to access it.

Its iterators do not throw ConcurrentModificationException and require no locking, so read and iteration performance is better.

Copy-on-write containers should be used only when iteration is far more frequent than modification.
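A minimal sketch of a typical use case, a listener registry that is iterated (notified) far more often than it is modified (class and method names are illustrative):

import java.util.concurrent.CopyOnWriteArrayList;

public class EventSource {
    private final CopyOnWriteArrayList<Runnable> listeners = new CopyOnWriteArrayList<Runnable>();

    public void register(Runnable listener) {
        listeners.add(listener);    // copies the underlying array
    }

    public void fire() {
        // Iteration runs over an immutable snapshot: no locking, and no
        // ConcurrentModificationException even if listeners are added concurrently.
        for (Runnable listener : listeners) {
            listener.run();
        }
    }

    public static void main(String[] args) {
        EventSource source = new EventSource();
        source.register(() -> System.out.println("listener 1 notified"));
        source.register(() -> System.out.println("listener 2 notified"));
        source.fire();
    }
}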

III. Blocking Queues and the Producer-Consumer Pattern

Blocking queues provide blocking put and take methods, as well as timed offer and poll methods. (offer returns a failure status if the element cannot be added to the queue.)

Blocking queues can be bounded or unbounded.

When building highly reliable applications, bounded queues are a powerful resource-management tool: they prevent an excessive backlog of work items and make applications more robust under overload.

Implementations: LinkedBlockingQueue, ArrayBlockingQueue, PriorityBlockingQueue (elements can be ordered by their Comparable ordering or a supplied Comparator), and SynchronousQueue (which maintains a set of waiting threads rather than storage space for queue elements). A small producer-consumer sketch follows.
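A minimal producer-consumer sketch on a bounded blocking queue (names are illustrative): put() blocks when the queue is full and take() blocks when it is empty, so neither side needs explicit wait/notify code.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        final BlockingQueue<Integer> queue = new ArrayBlockingQueue<Integer>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);                    // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    Integer item = queue.take();     // blocks while the queue is empty
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}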

1. Serial Thread Confinement

For mutable objects, the producer-consumer pattern together with blocking queues facilitates serial thread confinement, handing off ownership of an object from producers to consumers.

A thread-confined object is owned exclusively by a single thread, but ownership can be transferred safely through the producer-consumer pattern: after the handoff only the receiving thread may access the object, and the publishing thread gives up ownership and never touches it again.

2. Deques and Work Stealing

Deque and BlockingDeque extend Queue and BlockingQueue to form double-ended queues that support efficient insertion and removal at both the head and the tail. (Implementations: ArrayDeque, LinkedBlockingDeque.)

In the work-stealing pattern, every consumer has its own deque; when a consumer's own deque is empty, it steals a task from the tail of another consumer's deque. Advantages: contention is greatly reduced, and all threads are kept busy. A rough sketch appears below.
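A rough sketch of a work-stealing worker, under the assumption that each worker drains its own deque from the head and steals from the tail of a randomly chosen peer (the class and field names here are hypothetical, not a standard API):

import java.util.List;
import java.util.concurrent.BlockingDeque;
import java.util.concurrent.ThreadLocalRandom;

public class WorkStealingWorker implements Runnable {
    private final BlockingDeque<Runnable> ownDeque;
    private final List<BlockingDeque<Runnable>> allDeques;

    public WorkStealingWorker(BlockingDeque<Runnable> ownDeque,
                              List<BlockingDeque<Runnable>> allDeques) {
        this.ownDeque = ownDeque;
        this.allDeques = allDeques;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Runnable task = ownDeque.pollFirst();   // take work from our own head
            if (task == null) {
                // Our deque is empty: steal from the tail of another worker's deque,
                // which minimizes contention with that worker's own head access.
                BlockingDeque<Runnable> victim =
                        allDeques.get(ThreadLocalRandom.current().nextInt(allDeques.size()));
                task = victim.pollLast();
            }
            if (task != null) {
                task.run();
            } else {
                Thread.yield();                     // nothing to do anywhere right now
            }
        }
    }
}

(java.util.concurrent.ForkJoinPool implements this idea for production use.)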

IV. Blocking and Interruptible Methods

Causes of blocking: waiting for an I/O operation to complete, waiting to acquire a lock, waiting to wake up from Thread.sleep, waiting for the result of a computation in another thread, and so on.

When a thread blocks, it is suspended and placed in one of the blocked states (BLOCKED, WAITING, or TIMED_WAITING), and it must wait for an event beyond its control before it can proceed.

Methods that throw InterruptedException are blocking methods.

Interruption is a cooperative mechanism: one thread cannot force another to stop what it is doing and perform something else.

Responding to interruption: either propagate the InterruptedException to the method's caller (possibly after performing some cleanup and then rethrowing), or restore the interrupt. Sometimes InterruptedException cannot be thrown, for example from a Runnable's run method; in that case, restore the interrupt status, as in the sketch below.
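A minimal sketch of restoring the interrupt status (class name is illustrative): a task that cannot propagate InterruptedException catches it and re-asserts the thread's interrupt flag so that code higher up the call stack can see that an interrupt occurred.

import java.util.concurrent.BlockingQueue;

public class TaskRunner implements Runnable {
    private final BlockingQueue<String> queue;

    public TaskRunner(BlockingQueue<String> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        try {
            String item = queue.take();          // blocking, interruptible call
            process(item);
        } catch (InterruptedException e) {
            // run() cannot throw InterruptedException, so restore the interrupt
            // status instead of swallowing it.
            Thread.currentThread().interrupt();
        }
    }

    private void process(String item) {
        System.out.println("processing " + item);
    }
}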

V. Synchronizers

1. Latches: a latch delays the progress of threads until other activities have completed.

A latch acts as a gate: until the latch reaches its terminal state the gate stays closed and no thread can pass; once the terminal state is reached, the gate opens and lets all threads through. After reaching the terminal state, a latch never changes state again.

CountDownLatch lets one or more threads wait for a set of events to occur. The latch state includes a counter initialized to a positive number, representing the number of events to wait for. The countDown method decrements the counter, indicating that an event has occurred, while the await method blocks until the counter reaches zero.

public long timeTasks(int nThreads, final Runnable task) throws InterruptedException {
    final CountDownLatch startGate = new CountDownLatch(1);      // start gate: releases all threads at once
    final CountDownLatch endGate = new CountDownLatch(nThreads); // end gate: closes when every thread finishes

    for (int i = 0; i < nThreads; i++) {
        Thread t = new Thread() {
            @Override
            public void run() {
                try {
                    startGate.await();              // wait until the start gate counts down to 0
                    try {
                        task.run();
                    } finally {
                        endGate.countDown();        // each thread that finishes decrements the end gate by 1
                    }
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };
        t.start();
    }

    long start = System.nanoTime();
    startGate.countDown();                          // open the start gate: all threads begin the task
    endGate.await();                                // wait for all threads to finish
    long end = System.nanoTime();
    return end - start;
}

2. FutureTask

FutureTask describes a computation implemented with a Callable, which is the result-bearing equivalent of Runnable, and it can be in one of three states: waiting to run, running, and completed (which covers normal completion, cancellation, and completion by exception). Once a FutureTask enters the completed state, it stays there forever.

Future.get returns the result of the computation, blocking if the FutureTask has not yet completed. FutureTask conveys the result from the thread executing the computation to the thread retrieving it, and the FutureTask specification guarantees that this handoff constitutes a safe publication of the result.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class Preloader {
    // Start the expensive computation early so the result is ready (or in progress) when needed.
    private final FutureTask<Integer> future = new FutureTask<Integer>(new Callable<Integer>() {
        public Integer call() throws Exception {
            return 969 * 99 * 99;
        }
    });

    private final Thread thread = new Thread(future);

    public void start() {
        thread.start();
    }

    public Integer get() throws Exception {
        try {
            return future.get();                 // blocks until the computation completes
        } catch (ExecutionException e) {
            Throwable cause = e.getCause();
            throw launderThrowable(cause);
        }
    }

    private static RuntimeException launderThrowable(Throwable cause) {
        if (cause instanceof RuntimeException)
            return (RuntimeException) cause;
        else if (cause instanceof Error)
            throw (Error) cause;
        else
            throw new IllegalStateException("Not unchecked", cause);
    }

    public static void main(String[] args) throws Exception {
        Preloader p = new Preloader();
        p.start();
        long start = System.currentTimeMillis();
        System.out.println(p.get());
        System.out.println(System.currentTimeMillis() - start);
    }
}

3. Semaphores

Counting semaphores are used to control the number of operations that can access a particular resource, or perform a given action, at the same time. They can also be used to implement resource pools or to impose a bound on a container.

Semaphore manages a set of virtual permits whose initial count is specified in the constructor (a semaphore with one permit acts as a mutex). acquire requests a permit, blocking until one is available (or until interruption or timeout); release returns a permit to the semaphore.

Using a semaphore to implement a bounded blocking container:

import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.Semaphore;

public class BoundedList<T> {

    private final List<T> list;
    private final Semaphore semaphore;

    public BoundedList(int bound) {
        list = Collections.synchronizedList(new LinkedList<T>());
        semaphore = new Semaphore(bound);        // one permit per available slot
    }

    public boolean add(T obj) throws InterruptedException {
        semaphore.acquire();                     // block until a slot is available
        boolean addedFlag = false;
        try {
            addedFlag = list.add(obj);
        } finally {
            if (!addedFlag) {
                semaphore.release();             // the element was not added, give the permit back
            }
        }
        return addedFlag;
    }

    public boolean remove(Object obj) {
        boolean removedFlag = list.remove(obj);
        if (removedFlag) {
            semaphore.release();                 // a slot was freed, return the permit
        }
        return removedFlag;
    }
}

4. Barriers

Barriers are similar to latches: both block a group of threads until some event has occurred.

The key difference is that a latch waits for events, while a barrier waits for other threads. A latch is also a one-shot object: once it reaches the terminal state it cannot be reset, whereas a barrier (such as CyclicBarrier) can be reused. A minimal barrier sketch follows.
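A minimal CyclicBarrier sketch (names are illustrative): each of the N worker threads finishes a step and then waits at the barrier; once all N have arrived, the barrier action runs and every thread is released to continue.

import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
    public static void main(String[] args) {
        final int nThreads = 3;
        final CyclicBarrier barrier = new CyclicBarrier(nThreads,
                () -> System.out.println("all workers reached the barrier"));

        for (int i = 0; i < nThreads; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    System.out.println("worker " + id + " finished its step");
                    barrier.await();    // blocks until all nThreads threads have called await()
                    System.out.println("worker " + id + " continues");
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}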

VI. Building an Efficient and Scalable Result Cache

First attempt: use a HashMap and make the compute method synchronized to guarantee atomicity.

–> Performance is poor: only one thread can perform a computation at a time. Switching to ConcurrentHashMap without the synchronized method improves concurrency, but several threads may end up computing the same value.

–> Next, make the lookup blocking: use a ConcurrentHashMap whose values are FutureTasks. Future.get blocks until the result is available, which reduces duplicated computation, but the check-then-act sequence is still not atomic.

–> Close that window by using ConcurrentHashMap's atomic putIfAbsent().

–> Remaining improvements: deal with cache pollution (remove the Future if the computation is cancelled or fails), cache expiration, cache eviction, and so on.

import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Computable and LaunderThrowable are assumed to be defined elsewhere, as in the original listing.
public class Memoizer<A, V> implements Computable<A, V> {
    private final ConcurrentMap<A, Future<V>> cache
            = new ConcurrentHashMap<A, Future<V>>();
    private final Computable<A, V> c;

    public Memoizer(Computable<A, V> c) {
        this.c = c;
    }

    public V compute(final A arg) throws InterruptedException {
        while (true) {
            Future<V> f = cache.get(arg);
            if (f == null) {
                Callable<V> eval = new Callable<V>() {
                    public V call() throws InterruptedException {
                        return c.compute(arg);
                    }
                };
                FutureTask<V> ft = new FutureTask<V>(eval);
                f = cache.putIfAbsent(arg, ft);   // atomic: only one thread installs the task for arg
                if (f == null) {
                    f = ft;
                    ft.run();                     // run the computation in the calling thread
                }
            }
            try {
                return f.get();                   // block until the value is available
            } catch (CancellationException e) {
                cache.remove(arg, f);             // remove the cancelled task so it can be retried
            } catch (ExecutionException e) {
                throw LaunderThrowable.launderThrowable(e.getCause());
            }
        }
    }
}
