Reposted from: http://www.cnblogs.com/panfeng412/p/java-program-tuning-reading-notes-of-concurrent-program-design-pattern.html
This post summarizes a few common parallel programming patterns. Some of the text comes from the book "Java Program Performance Optimization", and some of it is my own summary; if anything is wrong, please point it out for discussion.
Future pattern
In a word, the handling of a client request is changed from synchronous to asynchronous: the client is freed to do other work while the server processes the request, and it fetches the result only at the end.
The advantage is that the client does not have to wait through the whole call, so idle time slices can be put to use and the response speed of the system improves.
The JDK has a built-in FutureTask that is easy to use; it also supports cancelling a task or setting a timeout when waiting for its result.
1) Implement the Callable interface to carry out the specific server-side business logic:
public class RealData implements Callable<String> {
    public String call() throws Exception { ... }
}
2) Define a FutureTask, submit it for execution, and get the returned result:
// 1. Create the FutureTask
FutureTask<String> future = new FutureTask<String>(new RealData("AAA"));
// 2. Submit the FutureTask
ExecutorService executor = Executors.newFixedThreadPool(1);
executor.submit(future);
// 3. Do other things...
// 4. Get the result, waiting until the call() method has finished
System.out.println("Result: " + future.get());
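Putting the two steps together, a minimal runnable sketch might look like this (the class name RealData comes from the snippet above; the simulated one-second computation and the FutureDemo class are illustrative additions):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

// Server-side business logic wrapped in a Callable.
class RealData implements Callable<String> {
    private final String data;
    RealData(String data) { this.data = data; }

    public String call() throws Exception {
        Thread.sleep(1000);                  // simulate a slow computation
        return "processed " + data;
    }
}

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        FutureTask<String> future = new FutureTask<String>(new RealData("AAA"));
        ExecutorService executor = Executors.newFixedThreadPool(1);
        executor.submit(future);             // start the computation asynchronously
        // ... do other work here while the server side is busy ...
        System.out.println("Result: " + future.get()); // blocks only if not yet finished
        executor.shutdown();
    }
}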
Master-Worker pattern
The pattern is implemented with two kinds of threads: a master thread that receives tasks and distributes them (splitting each task into subtasks), and worker threads that each process only a subset of the subtasks; together the workers handle the whole task.
The advantage is that a large task can be split into several small ones that different workers process in parallel, which increases the throughput of the system. In addition, the master returns as soon as it has received and distributed the task, so from the client's point of view the whole process is asynchronous.
The general idea of the implementation is as follows (a minimal sketch follows the list):
1) The master maintains a task queue for receiving tasks, a map of all worker threads, and a result map holding the result of each subtask; since the result map is accessed by multiple threads at the same time, it is usually implemented with the JDK's ConcurrentHashMap;
2) Each worker implements Runnable or extends Thread, pulls split subtasks from the master's queue, processes them, and writes the results into the result map so that the master can read them;
3) The main entry function submits the client request (splitting it into subtasks), collects the results of the individual workers through the master, merges them, and finally returns the merged result to the client.
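A minimal sketch of this structure, assuming the large task is computing the sum of the squares of 1 to 100 (the class names, the Integer task type and the squaring work are illustrative, not from the original post):

import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Master: holds the task queue, the worker threads and the shared result map.
class Master {
    private final Queue<Integer> taskQueue = new ConcurrentLinkedQueue<Integer>();
    private final Map<Integer, Integer> resultMap = new ConcurrentHashMap<Integer, Integer>();
    private final Thread[] workers;

    Master(int workerCount) {
        workers = new Thread[workerCount];
        for (int i = 0; i < workerCount; i++) {
            workers[i] = new Thread(new Worker(taskQueue, resultMap));
        }
    }

    void submit(int task)              { taskQueue.add(task); }
    void execute()                     { for (Thread t : workers) t.start(); }
    Map<Integer, Integer> getResults() { return resultMap; }

    boolean isComplete() {
        for (Thread t : workers) {
            if (t.isAlive()) return false;
        }
        return true;
    }
}

// Worker: pulls subtasks from the shared queue and writes results back.
class Worker implements Runnable {
    private final Queue<Integer> taskQueue;
    private final Map<Integer, Integer> resultMap;

    Worker(Queue<Integer> taskQueue, Map<Integer, Integer> resultMap) {
        this.taskQueue = taskQueue;
        this.resultMap = resultMap;
    }

    public void run() {
        Integer task;
        while ((task = taskQueue.poll()) != null) {
            resultMap.put(task, task * task);   // the subtask: square the number
        }
    }
}

public class MasterWorkerDemo {
    public static void main(String[] args) throws InterruptedException {
        Master master = new Master(4);
        for (int i = 1; i <= 100; i++) master.submit(i);  // split the big task into subtasks
        master.execute();
        while (!master.isComplete()) Thread.sleep(10);    // wait for all workers to finish
        int sum = 0;
        for (int square : master.getResults().values()) sum += square;  // merge the results
        System.out.println("Sum of squares 1..100 = " + sum);
    }
}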
Guarded Suspension pattern
The core idea of the Guarded Suspension pattern is to serve a request only when the server thread is ready to process it; until then the request waits in a buffer.
The advantage is that no client request is lost and the server does not crash under a burst of simultaneous requests; the pattern effectively smooths the instantaneous load on the system and helps keep it stable.
In practice the intermediate request queue does most of this buffering work, and the arrangement "ClientThread -> Request Queue -> ServerThread" is very common. We usually combine it with other patterns, for example (a minimal sketch of the request queue follows the list):
1) If there are multiple ClientThreads and multiple ServerThreads, it becomes the classic Producer-Consumer pattern;
2) If the ServerThread side is split into one master and multiple workers, it is the Master-Worker pattern described above;
3) If the request needs to return a result, it is combined with FutureTask: the client immediately receives a FutureData with the request, and the ServerThread later sets the RealData on that FutureData.
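A minimal sketch of the guarded request queue, using wait/notify on a plain linked list (the String request type and the class name RequestQueue are illustrative):

import java.util.LinkedList;
import java.util.Queue;

// A request queue that suspends the server thread until a request is available.
class RequestQueue {
    private final Queue<String> queue = new LinkedList<String>();

    public synchronized String getRequest() {
        while (queue.isEmpty()) {       // guard condition: serve only when a request is ready
            try {
                wait();                 // suspend the server thread
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
        return queue.poll();
    }

    public synchronized void putRequest(String request) {
        queue.add(request);
        notifyAll();                    // wake up any waiting server threads
    }
}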
Immutable pattern
In a concurrent, multi-threaded program, when several threads read and write the same object, synchronization is required to keep the object's data consistent and correct, and that synchronization is where a lot of performance is lost. To improve the performance of a concurrent program, we can instead create an object that never changes after construction; this is the Immutable pattern. It is used extensively in Java, for example in String, Boolean, Short, Integer, Long, Byte and so on.
The advantage is that concurrent access control is handled by avoiding the problem rather than solving it; the disadvantage is that the pattern only applies when the object's internal state and data do not change after it is created.
Implementing the Immutable pattern in Java is very simple; following object-oriented principles, the class only needs to satisfy the following points (a minimal sketch follows the list):
1) Make all fields of the object private and final;
2) Mark the class final so that it cannot be subclassed;
3) Remove all setXxx methods from the object;
4) Provide a constructor that takes all the fields, and use it to create the object.
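A minimal sketch of a class that follows these four rules (the Product class and its fields are illustrative):

// final: the class cannot be subclassed
public final class Product {
    private final String id;                    // all fields private and final
    private final double price;

    public Product(String id, double price) {   // single constructor sets all state
        this.id = id;
        this.price = price;
    }

    public String getId()    { return id; }     // getters only, no setters
    public double getPrice() { return price; }
}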
Producer-Consumer pattern
Producer threads submit tasks to a memory buffer, and consumer threads take tasks out of the buffer and process them.
The advantage is that producer and consumer threads are decoupled, which improves the overall structure of the system and mitigates the impact of performance bottlenecks on overall system performance.
In Java, the memory buffer is usually a LinkedBlockingQueue, a linked-list implementation of the blocking BlockingQueue interface. Because it uses separate locks for the head and the tail of the queue, it offers better throughput than ArrayBlockingQueue and is well suited to the Producer-Consumer pattern. The general idea of the implementation is as follows (a minimal sketch follows the list):
1) Create a producer class whose run method submits tasks;
2) Create a consumer class whose run method processes tasks;
3) In the main function, create the buffer, several producers and several consumers, build a thread pool, and start these threads.
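A minimal sketch along these lines, assuming the tasks are plain integers (the class names and the work done by the consumer are illustrative):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Producer: submits tasks (here, plain integers) into the shared buffer.
class Producer implements Runnable {
    private final BlockingQueue<Integer> queue;
    Producer(BlockingQueue<Integer> queue) { this.queue = queue; }

    public void run() {
        try {
            for (int i = 0; i < 10; i++) {
                queue.put(i);                                   // blocks if the buffer is full
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

// Consumer: takes tasks from the buffer and processes them.
class Consumer implements Runnable {
    private final BlockingQueue<Integer> queue;
    Consumer(BlockingQueue<Integer> queue) { this.queue = queue; }

    public void run() {
        try {
            while (true) {
                Integer task = queue.poll(1, TimeUnit.SECONDS); // stop after an idle second
                if (task == null) break;
                System.out.println(Thread.currentThread().getName() + " processed " + task);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> buffer = new LinkedBlockingQueue<Integer>(100); // the memory buffer
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int i = 0; i < 2; i++) pool.submit(new Producer(buffer));         // several producers
        for (int i = 0; i < 3; i++) pool.submit(new Consumer(buffer));         // several consumers
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}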
Summary of Java parallel programming patterns