Java High concurrency design pattern

Tags: volatile

This article explains several common concurrency design patterns in Java.

Singleton

Singleton is the most common design pattern. It is typically used for global object management, such as reading and writing XML configuration.

Singletons generally come in two styles: lazy initialization and eager initialization.

Lazy initialization: add synchronized to the accessor method

public static synchronized Singleton getInstance() {
    if (single == null) {
        single = new Singleton();
    }
    return single;
}

This approach is not recommended: every call to getInstance acquires the lock, even after the instance exists, so performance is poor.

Lazy initialization: double-checked locking + volatile

private static volatile Singleton singleton = null;

public static Singleton getInstance() {
    if (singleton == null) {
        synchronized (Singleton.class) {
            if (singleton == null) {
                singleton = new Singleton();
            }
        }
    }
    return singleton;
}

This is an optimization over synchronizing the whole method: only the first initialization acquires the lock, and subsequent calls to getInstance are not synchronized at all. The only downside is that the code is a little more cumbersome.

As for why the volatile keyword is needed: it is related to instruction reordering in the JVM. See the earlier CSDN post on the Java memory model and instruction reordering.

Lazy initialization: static inner class (the holder idiom)

public class Singleton {

    private static class LazyHolder {
        private static final Singleton INSTANCE = new Singleton();
    }

    private Singleton() {}

    public static Singleton getInstance() {
        return LazyHolder.INSTANCE;
    }
}

This approach solves both the synchronization problem and the verbosity problem, and is the recommended way to write a lazy singleton.

The drawback is that the instance cannot be re-initialized in response to an event.

Eager initialization

public class Singleton1 {

    private Singleton1() {}

    private static final Singleton1 single = new Singleton1();

    public static Singleton1 getInstance() {
        return single;
    }
}

The drawback is that the object is initialized as soon as the class is loaded, whether or not it is ever used.

Future pattern

The core idea of this pattern is asynchronous invocation, a bit like an asynchronous Ajax request.

When a method is called, it may take a long time to complete, but the caller may not need the result immediately. So the call returns a contract (the Future) right away, the real work runs in another thread, and later the caller uses that contract to fetch the method's result.

The JDK has built-in support for the Future pattern through the Future interface and the FutureTask class.

Implementation via FutureTask

Note the two time-consuming operations in the example below. If doOtherThing takes 2s, the whole function takes about 2s. If doOtherThing takes 0.2s, the total time is dominated by RealData.costTime, and the function finishes in about 1s.

import java.util.concurrent.*;

public class FutureDemo1 {

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        FutureTask<String> future = new FutureTask<>(new Callable<String>() {
            @Override
            public String call() throws Exception {
                return new RealData().costTime();
            }
        });
        ExecutorService service = Executors.newCachedThreadPool();
        service.submit(future);
        System.out.println("RealData method call complete");
        // Simulate other time-consuming work in the main function
        doOtherThing();
        // Get the result of the RealData method
        System.out.println(future.get());
    }

    private static void doOtherThing() throws InterruptedException {
        Thread.sleep(2000L);
    }
}

class RealData {

    public String costTime() {
        try {
            // Simulate RealData's time-consuming operation
            Thread.sleep(1000L);
            return "result";
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return "exception";
    }
}

Implementation via Future

Unlike the FutureTask version above, here RealData2 itself needs to implement the Callable interface.

import java.util.concurrent.*;

public class FutureDemo2 {

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService service = Executors.newCachedThreadPool();
        Future<String> future = service.submit(new RealData2());
        System.out.println("RealData2 method call complete");
        // Simulate other time-consuming work in the main function
        doOtherThing();
        // Get the result of the RealData2 method
        System.out.println(future.get());
    }

    private static void doOtherThing() throws InterruptedException {
        Thread.sleep(2000L);
    }
}

class RealData2 implements Callable<String> {

    public String costTime() {
        try {
            // Simulate RealData2's time-consuming operation
            Thread.sleep(1000L);
            return "result";
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return "exception";
    }

    @Override
    public String call() throws Exception {
        return costTime();
    }
}

In addition, Future itself provides some simple control methods. The API is as follows.

// Cancel the task
boolean cancel(boolean mayInterruptIfRunning);

// Whether the task has been cancelled
boolean isCancelled();

// Whether the task has completed
boolean isDone();

// Get the result
V get() throws InterruptedException, ExecutionException;

// Get the result, with a timeout
V get(long timeout, TimeUnit unit)
    throws InterruptedException, ExecutionException, TimeoutException;
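As a quick sketch of these control methods (the task bodies and pool setup here are illustrative assumptions, not from the original article):

```java
import java.util.concurrent.*;

public class FutureControlDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService service = Executors.newCachedThreadPool();

        // A slow task we cancel before it can finish
        Future<String> slow = service.submit(() -> {
            Thread.sleep(10_000L);
            return "never";
        });
        slow.cancel(true);                      // interrupt the running task
        System.out.println(slow.isCancelled()); // true

        // A fast task we wait for with a timeout
        Future<Integer> fast = service.submit(() -> 21 * 2);
        Integer result = fast.get(1, TimeUnit.SECONDS);
        System.out.println(fast.isDone());      // true
        System.out.println(result);             // 42

        service.shutdown();
    }
}
```

Note that get with a timeout throws TimeoutException if the result is not ready in time, which lets the caller avoid blocking forever.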

Producer-consumer pattern

The producer-consumer pattern is a classic multithreaded design pattern that provides a clean way for multiple threads to cooperate.

It typically involves two kinds of thread: some number of producer threads and some number of consumer threads. The producer threads submit user requests, while the consumer threads process the tasks the producers submit. The two sides communicate through a shared memory buffer.

PCData is the data model to be processed: the producer builds PCData objects and puts them into the buffer queue, and the consumer takes them from the queue and performs the computation.
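The article never shows PCData itself. A minimal version consistent with the fragments below (an int payload plus the getData accessor the consumer calls; the exact fields are an assumption) might look like:

```java
// Hypothetical data model carried from producer to consumer.
public class PCData {
    private final int data;

    public PCData(int data) {
        this.data = data;
    }

    public int getData() {
        return data;
    }

    @Override
    public String toString() {
        return "data: " + data;
    }
}
```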

Producer Core Code

while (isRunning) {
    Thread.sleep(r.nextInt(SLEEP_TIME));
    // Construct the task data
    PCData data = new PCData(count.incrementAndGet());
    System.out.println(data + " is put into queue");
    // Put the data into the buffer queue
    if (!queue.offer(data, 2, TimeUnit.SECONDS)) {
        System.out.println("Failed to put data: " + data);
    }
}

Consumer Core Code

while (true) {
    // Take a task from the queue (blocks until one is available)
    PCData data = queue.take();
    if (data != null) {
        // Got data; perform the computation
        int re = data.getData() * 10;
        System.out.println("After cal, value is: " + re);
        Thread.sleep(r.nextInt(SLEEP_TIME));
    }
}

The producer-consumer pattern effectively decouples the two sides and improves the structure of the system, reducing the mutual dependencies and performance coupling between producer and consumer threads.

In general, a BlockingQueue is used as the data buffer; it relies on locks and blocking waits to synchronize the data exchange. If the buffer queue is performance-critical, you can use ConcurrentLinkedQueue instead, which is based on lock-free CAS operations.
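The producer and consumer fragments above are not runnable on their own. A minimal self-contained sketch wiring them together through an ArrayBlockingQueue (the item count, buffer size, and class names are illustrative assumptions) might look like:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class PCDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);
        AtomicInteger count = new AtomicInteger();
        ExecutorService pool = Executors.newCachedThreadPool();

        // Producer: put 5 items into the shared buffer
        pool.submit(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    queue.put(count.incrementAndGet());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        // Consumer: take items and "process" them (multiply by 10)
        pool.submit(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    int data = queue.take(); // blocks until data is available
                    System.out.println("After cal, value is: " + data * 10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

put and take block when the buffer is full or empty respectively, which is what synchronizes the two threads without any explicit locking in user code.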

Divide and conquer

Strictly speaking, divide and conquer is not so much a pattern as an idea: split a large task into several small tasks that execute in parallel, improving system throughput.

We mainly look at two scenarios: the Master-Worker pattern and the Fork/Join thread pool.

Master-Worker pattern

The core idea of this pattern is that the system works with two kinds of role: a Master and several Workers. The Master receives and assigns tasks, and the Workers process them. When an individual Worker finishes, it returns its result to the Master, which aggregates and summarizes all of the results.

Suppose the scenario is to compute 100 tasks and sum their results, with the Master holding 10 workers.

Master Code

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class MasterDemo {

    // Queue of submitted tasks
    private ConcurrentLinkedQueue<TaskDemo> workQueue = new ConcurrentLinkedQueue<>();

    // All workers
    private HashMap<String, Thread> workers = new HashMap<>();

    // Results of the tasks each worker executes in parallel
    private ConcurrentHashMap<String, Object> resultMap = new ConcurrentHashMap<>();

    public MasterDemo(WorkerDemo worker, int workerCount) {
        // Each worker needs references to the task queue (to fetch tasks)
        // and to the result map (to submit results)
        worker.setResultMap(resultMap);
        worker.setWorkQueue(workQueue);
        for (int i = 0; i < workerCount; i++) {
            workers.put("Child node: " + i, new Thread(worker));
        }
    }

    // Submit a task
    public void submit(TaskDemo task) {
        workQueue.add(task);
    }

    // Start all subtasks
    public void execute() {
        for (Map.Entry<String, Thread> entry : workers.entrySet()) {
            entry.getValue().start();
        }
    }

    // Determine whether all tasks have finished
    public boolean isComplete() {
        for (Map.Entry<String, Thread> entry : workers.entrySet()) {
            if (entry.getValue().getState() != Thread.State.TERMINATED) {
                return false;
            }
        }
        return true;
    }

    // Get the final aggregated result
    public int getResult() {
        int result = 0;
        for (Map.Entry<String, Object> entry : resultMap.entrySet()) {
            result += Integer.parseInt(entry.getValue().toString());
        }
        return result;
    }
}

Worker Code

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class WorkerDemo implements Runnable {

    private ConcurrentLinkedQueue<TaskDemo> workQueue;
    private ConcurrentHashMap<String, Object> resultMap;

    @Override
    public void run() {
        while (true) {
            TaskDemo input = this.workQueue.poll();
            // All tasks have been executed
            if (input == null) {
                break;
            }
            // Simulate processing the task and producing a result
            int result = input.getPrice();
            this.resultMap.put(input.getId() + "", result);
            System.out.println("Task completed, current thread: " + Thread.currentThread().getName());
        }
    }

    public ConcurrentLinkedQueue<TaskDemo> getWorkQueue() {
        return workQueue;
    }

    public void setWorkQueue(ConcurrentLinkedQueue<TaskDemo> workQueue) {
        this.workQueue = workQueue;
    }

    public ConcurrentHashMap<String, Object> getResultMap() {
        return resultMap;
    }

    public void setResultMap(ConcurrentHashMap<String, Object> resultMap) {
        this.resultMap = resultMap;
    }
}

public class TaskDemo {

    private int id;
    private String name;
    private int price;

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getPrice() {
        return price;
    }

    public void setPrice(int price) {
        this.price = price;
    }
}

Main function test

MasterDemo master = new MasterDemo(new WorkerDemo(), 10);
for (int i = 0; i < 100; i++) {
    TaskDemo task = new TaskDemo();
    task.setId(i);
    task.setName("task" + i);
    task.setPrice(new Random().nextInt(10000));
    master.submit(task);
}
master.execute();
while (true) {
    if (master.isComplete()) {
        System.out.println("The result of execution is: " + master.getResult());
        break;
    }
}

Fork/Join thread pool

The Fork/Join thread pool, introduced in JDK 7, is a framework for executing tasks in parallel. Its core idea is to split a task into subtasks; a subtask may itself still be large and need further splitting, until the tasks are small enough.

The split subtasks are placed in double-ended queues, and several worker threads take tasks from these deques to execute. The results of the subtasks are put into a queue, where another thread fetches them and merges the results.

Suppose we need to compute the cumulative sum from 0 to 20000000L. CountTask extends RecursiveTask, so it can carry a return value.

Each time a large task is decomposed, it is simply split into 100 equal-sized subtasks, which are submitted with fork(). THRESHOLD sets the decomposition threshold: if the range a subtask must sum is larger than THRESHOLD, it is split again; if it is small enough, it sums the range directly and returns the result. Finally the task waits for all of its subtasks to finish and adds up their results.

import java.util.ArrayList;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinTask;
import java.util.concurrent.RecursiveTask;

public class CountTask extends RecursiveTask<Long> {

    // Threshold for task decomposition
    private static final int THRESHOLD = 10000;

    private long start;
    private long end;

    public CountTask(long start, long end) {
        this.start = start;
        this.end = end;
    }

    @Override
    public Long compute() {
        long sum = 0;
        boolean canCompute = (end - start) < THRESHOLD;
        if (canCompute) {
            for (long i = start; i <= end; i++) {
                sum += i;
            }
        } else {
            // Split into 100 small tasks
            long step = (start + end) / 100;
            ArrayList<CountTask> subTasks = new ArrayList<>();
            long pos = start;
            for (int i = 0; i < 100; i++) {
                long lastOne = pos + step;
                if (lastOne > end) {
                    lastOne = end;
                }
                CountTask subTask = new CountTask(pos, lastOne);
                pos += step + 1;
                // Push the subtask into the thread pool
                subTasks.add(subTask);
                subTask.fork();
            }
            for (CountTask task : subTasks) {
                // Join the subtask results
                sum += task.join();
            }
        }
        return sum;
    }

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        ForkJoinPool pool = new ForkJoinPool();
        // Cumulative sum 0..20000000L
        CountTask task = new CountTask(0, 20000000L);
        ForkJoinTask<Long> result = pool.submit(task);
        System.out.println("sum result: " + result.get());
    }
}

The Fork/Join thread pool manages idle threads with a lock-free stack. A worker thread that temporarily cannot get a task may be suspended; the suspended thread is pushed onto a stack maintained by the pool, and when tasks become available later, threads are woken from that stack.
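As an aside not covered by the original article: since Java 8, the same cumulative sum can be written as a parallel stream, which runs on the common Fork/Join pool under the hood and spares you writing a RecursiveTask by hand:

```java
import java.util.stream.LongStream;

public class ParallelSumDemo {
    public static void main(String[] args) {
        // Sum 0..20000000 on the common ForkJoinPool via a parallel stream
        long sum = LongStream.rangeClosed(0, 20_000_000L).parallel().sum();
        System.out.println("sum result: " + sum); // 200000010000000
    }
}
```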



Author: Ouyanghaiyang
Link: https://www.jianshu.com/p/b5e6100a4051
Source: Jianshu
Copyright belongs to the author. For any form of reprint, please contact the author for authorization and provide attribution.
