The previous article said that, at run time, program = code + data. It follows that concurrent programming has two possible strategies: code concurrency and data concurrency.
Code concurrency
The premise of code concurrency is that the computation can be segmented: divided into small pieces that do not depend on each other. An abstract example:
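The snippets below use `Code`, `Data`, and `SplittedCode` without defining them. A minimal sketch of what those abstractions might look like (these definitions are my assumption, not from the original article), plus a hypothetical `RangeSum` showing code that splits into independent pieces:

```java
// Hypothetical minimal versions of the abstractions the snippets assume.
interface Data { }                          // the shared input the code operates on

interface Code {
    void run(Data data);                    // one unit of computation
}

// "Splittable" code: a computation that can be cut into independent pieces.
interface SplittedCode extends Code {
    SplittedCode[] split();
}

// The article's own Executor abstraction (not java.util.concurrent.Executor).
interface Executor {
    void exec(Code code, Data data);
}

// Example: summing a range of integers splits cleanly into two half-range sums
// that do not depend on each other. RangeSum is illustrative, not from the article.
class RangeSum implements SplittedCode {
    static final java.util.concurrent.atomic.AtomicLong TOTAL =
            new java.util.concurrent.atomic.AtomicLong();
    final int from, to;                     // sums the integers in [from, to)

    RangeSum(int from, int to) { this.from = from; this.to = to; }

    @Override public void run(Data data) {
        long s = 0;
        for (int i = from; i < to; i++) s += i;
        TOTAL.addAndGet(s);                 // combine partial results
    }

    @Override public SplittedCode[] split() {
        int mid = (from + to) / 2;
        return new SplittedCode[] { new RangeSum(from, mid), new RangeSum(mid, to) };
    }
}
```

Because the two halves share nothing except the final accumulator, they can run in any order, or in parallel.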
```java
class WorkerTask implements Runnable {
    private Data data;
    private SplittedCode code;
    private CountDownLatch latch;

    public WorkerTask(Data data, SplittedCode code, CountDownLatch latch) {
        this.data = data;
        this.code = code;
        this.latch = latch;
    }

    @Override
    public void run() {
        try {
            code.run(data);
        } finally {
            latch.countDown();
        }
    }
}
```
```java
class LeaderTask implements Runnable {
    private Data data;
    private SplittedCode code;

    public LeaderTask(Data data, SplittedCode code) {
        this.data = data;
        this.code = code;
    }

    @Override
    public void run() {
        SplittedCode[] codes = code.split();
        CountDownLatch latch = new CountDownLatch(codes.length);
        for (SplittedCode c : codes) {
            LWThreadPoolProvider.workerPool().submit(new WorkerTask(data, c, latch));
        }
        try {
            latch.await();
        } catch (InterruptedException e) {
            e.printStackTrace(); // TODO
        }
    }
}
```
```java
public class LWExecutor implements Executor {
    @Override
    public void exec(Code code, Data data) {
        if (code instanceof SplittedCode) {
            LWThreadPoolProvider.leaderPool().submit(
                    new LeaderTask(data, (SplittedCode) code));
        }
    }
}
```
A two-level thread pool is used: the overall calculation is submitted to the LeaderPool, where a LeaderTask splits it into small calculations that are submitted to the WorkerPool. The two thread pools also use different `RejectedExecutionHandler`s.
```java
public class LWThreadPoolProvider {
    private static ExecutorService LEADER_POOL;
    private static ExecutorService WORKER_POOL;

    // synchronized so lazy initialization cannot race and create two pools
    public static synchronized ExecutorService leaderPool() {
        if (LEADER_POOL == null) {
            LEADER_POOL = new ThreadPoolExecutor(
                    getConfig().getLeaderPoolCoreSize(),
                    getConfig().getLeaderPoolMaxSize(),
                    getConfig().getLeaderThreadKeepAliveSeconds(),
                    TimeUnit.SECONDS,
                    new LinkedBlockingDeque<Runnable>(getConfig().getLeaderTaskQueueSize()),
                    new ThreadPoolExecutor.AbortPolicy());
        }
        return LEADER_POOL;
    }

    public static synchronized ExecutorService workerPool() {
        if (WORKER_POOL == null) {
            WORKER_POOL = new ThreadPoolExecutor(
                    getConfig().getWorkerPoolCoreSize(),
                    getConfig().getWorkerPoolMaxSize(),
                    getConfig().getWorkerThreadKeepAliveSeconds(),
                    TimeUnit.SECONDS,
                    new LinkedBlockingDeque<Runnable>(getConfig().getWorkerTaskQueueSize()),
                    new ThreadPoolExecutor.CallerRunsPolicy());
        }
        return WORKER_POOL;
    }
}
```
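The choice of rejection policies matters: `AbortPolicy` on the LeaderPool makes an overloaded submission fail fast with an exception, while `CallerRunsPolicy` on the WorkerPool makes the overflowing small task run on the submitting (leader) thread, which throttles the producer instead of losing work. A self-contained demo of the difference, using deliberately tiny pools (`tinyPool`, `slow`, and the class name are mine, not the article's):

```java
import java.util.concurrent.*;

public class RejectionDemo {
    // One worker thread and a queue of capacity 1, so a third task always overflows.
    static ThreadPoolExecutor tinyPool(RejectedExecutionHandler handler) {
        return new ThreadPoolExecutor(1, 1, 0L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(1), handler);
    }

    static Runnable slow() {
        return () -> { try { Thread.sleep(200); } catch (InterruptedException e) { } };
    }

    // AbortPolicy: the submit that overflows the queue throws.
    static boolean abortRejects() {
        ThreadPoolExecutor pool = tinyPool(new ThreadPoolExecutor.AbortPolicy());
        boolean rejected = false;
        try {
            pool.execute(slow()); // runs on the single worker thread
            pool.execute(slow()); // sits in the queue
            pool.execute(slow()); // queue full, no spare thread -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        pool.shutdownNow();
        return rejected;
    }

    // CallerRunsPolicy: the overflowing task runs synchronously on the
    // submitting thread, so nothing is lost and the producer slows down.
    static boolean callerRuns() {
        ThreadPoolExecutor pool = tinyPool(new ThreadPoolExecutor.CallerRunsPolicy());
        String caller = Thread.currentThread().getName();
        final String[] ranOn = new String[1];
        pool.execute(slow());
        pool.execute(slow());
        pool.execute(() -> ranOn[0] = Thread.currentThread().getName());
        pool.shutdownNow();
        return caller.equals(ranOn[0]);
    }

    public static void main(String[] args) {
        System.out.println("abort rejects: " + abortRejects());
        System.out.println("caller runs on submitter: " + callerRuns());
    }
}
```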
Data concurrency
Similarly, data concurrency splits the data instead of the code:
```java
class PagedTask implements Runnable {
    private Code code;
    private PagedData data;

    public PagedTask(int taskIndex, int pageSize, Code code, PagedData data) {
        this.code = code;
        // each task works on its own page of the data
        this.data = data.subData(taskIndex * pageSize, (taskIndex + 1) * pageSize);
    }

    @Override
    public void run() {
        code.run(data);
    }
}
```
```java
public class PagedExecutor implements Executor {
    private int taskNum;
    private int pageSize;
    private ExecutorService executor;

    public PagedExecutor(int taskNum, int pageSize, int threads) {
        this.taskNum = taskNum;
        this.pageSize = pageSize;
        executor = Executors.newFixedThreadPool(threads);
    }

    @Override
    public void exec(Code code, Data data) {
        if (data instanceof PagedData) {
            for (int taskIndex = 0; taskIndex < taskNum; taskIndex++) {
                PagedTask task = new PagedTask(taskIndex, pageSize, code, (PagedData) data);
                executor.submit(task);
            }
        }
    }
}
```
The executor splits the data and assigns one piece to each task; every task knows its own `taskIndex`, and all the tasks execute concurrently.
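The paging idea can be shown end to end with a concrete computation. This sketch (class and method names are mine) sums an array by pages: each task owns the slice `[taskIndex*pageSize, (taskIndex+1)*pageSize)`, the pages run concurrently on a fixed pool, and the per-page sums are combined in a shared accumulator:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class PagedSum {
    static long parallelSum(long[] data, int pageSize) throws InterruptedException {
        int taskNum = (data.length + pageSize - 1) / pageSize; // ceiling division
        ExecutorService executor = Executors.newFixedThreadPool(4);
        AtomicLong total = new AtomicLong();
        CountDownLatch latch = new CountDownLatch(taskNum);
        for (int taskIndex = 0; taskIndex < taskNum; taskIndex++) {
            final int from = taskIndex * pageSize;
            final int to = Math.min(from + pageSize, data.length); // last page may be short
            executor.submit(() -> {
                long pageSum = 0;
                for (int i = from; i < to; i++) pageSum += data[i];
                total.addAndGet(pageSum); // combine per-page results
                latch.countDown();
            });
        }
        latch.await();                    // wait for every page, as LeaderTask does
        executor.shutdown();
        return total.get();
    }

    public static void main(String[] args) throws InterruptedException {
        long[] data = new long[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1; // 1..1000
        System.out.println(parallelSum(data, 64)); // prints 500500
    }
}
```

Note that this works only because the pages are independent: no page reads another page's elements, so the tasks need no ordering among themselves.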
Both strategies are fairly simple; comments and criticism are welcome ^_^.
The concurrency I understand: two strategies