Java Multithread Design Patterns (3)


Reading notes on Java Multithread Design Patterns, Part 3

Contents:
1. Thread-Per-Message pattern
2. Worker Thread pattern
3. Future pattern

=========================== Thread-per-message pattern ============================

Thread-Per-Message means "one thread per message." A message here can be thought of as a command or a request: for each command or request that arrives, a new thread is allocated, and that thread carries out the work. This is the Thread-Per-Message pattern.

The Thread-Per-Message pattern is applicable when:
1. The order in which operations are executed does not matter.
2. No return value is needed.
3. A server must stay responsive: handing work off to a new thread lets the invoking operation return quickly, reducing latency.
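As a minimal sketch of the pattern (the class and method names here are illustrative, not from the book), a handler can spawn a fresh thread per message and return immediately:

```java
public class ThreadPerMessageDemo {
    // Start a new thread for each message and return at once;
    // the Thread is returned only so a caller can join it if desired.
    static Thread handle(final String message) {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                // The actual (possibly slow) work happens here
                System.out.println("Handling: " + message);
            }
        });
        worker.start();
        return worker;
    }

    public static void main(String[] args) throws InterruptedException {
        handle("request-1");        // returns without waiting for the work
        handle("request-2").join(); // join only to let the demo finish cleanly
    }
}
```

Note that the caller never sees a result from the worker; that limitation is exactly what the Future pattern below addresses.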

Processes and threads
1. The biggest difference between a process and a thread is whether memory is shared.
Each process normally has its own memory space, and one process cannot read or overwrite the memory of another without authorization. Because their memory spaces are isolated, processes need not worry about being corrupted by one another. Threads, by contrast, share memory.
2. Another difference lies in the cost of a context switch.
Switching between processes requires saving and restoring a large amount of state, so it takes time. The context a thread must manage is much smaller, so a thread context switch is much faster than a process context switch.

=========================== Worker Thread pattern ==========================

The thread pool mechanism removes the cost of creating a new thread for every request. Worker threads are started in advance, and the Producer-Consumer pattern is used to pass instances representing the work to them. The worker threads then carry out the execution, so a new thread need not be started each time.
Implementation:

Channel (the core of the pattern)
public class Channel {
    // Maximum number of requests allowed in the queue
    private static final int MAX_REQUEST = 100;
    // Request buffer
    private final Request[] requestQueue;
    private int tail;  // index where the next putRequest will store
    private int head;  // index of the next takeRequest
    private int count; // number of queued requests
    // Worker thread pool
    private final WorkerThread[] threadPool;

    public Channel(int threads) {
        // Initialize the request buffer
        this.requestQueue = new Request[MAX_REQUEST];
        this.head = 0;
        this.tail = 0;
        this.count = 0;
        // Initialize the worker threads
        threadPool = new WorkerThread[threads];
        for (int i = 0; i < threadPool.length; i++) {
            threadPool[i] = new WorkerThread("Worker-" + i, this);
        }
    }

    // Start the workers
    public void startWorkers() {
        for (int i = 0; i < threadPool.length; i++) {
            threadPool[i].start();
        }
    }

    // Add a request (producer side)
    public synchronized void putRequest(Request request) {
        // Wait while the buffer is full
        while (count >= requestQueue.length) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }
        requestQueue[tail] = request;
        // Advance tail to the next slot, wrapping around
        tail = (tail + 1) % requestQueue.length;
        count++;
        notifyAll();
    }

    // Take a request for processing (consumer side)
    public synchronized Request takeRequest() {
        // Wait while the buffer is empty
        while (count <= 0) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }
        Request request = requestQueue[head];
        // Advance head to the next slot, wrapping around
        head = (head + 1) % requestQueue.length;
        count--;
        notifyAll();
        return request;
    }
}

Request object
import java.util.Random;

public class Request {
    private final String name;  // client that issued the request
    private final int number;   // request number
    private static final Random random = new Random();

    public Request(String name, int number) {
        this.name = name;
        this.number = number;
    }

    public void execute() {
        System.out.println(Thread.currentThread().getName() + " executes " + this);
        try {
            Thread.sleep(random.nextInt(1000));
        } catch (InterruptedException e) {
        }
    }

    public String toString() {
        return "[ Request from " + name + " No." + number + " ]";
    }
}
Worker thread
public class WorkerThread extends Thread {
    private final Channel channel;

    public WorkerThread(String name, Channel channel) {
        super(name);
        this.channel = channel;
    }

    public void run() {
        // Loop forever: take the next request and execute it
        while (true) {
            Request request = channel.takeRequest();
            request.execute();
        }
    }
}
Client thread (puts requests into the channel)
import java.util.Random;

public class ClientThread extends Thread {
    private final Channel channel;
    private static final Random random = new Random();

    public ClientThread(String name, Channel channel) {
        super(name);
        this.channel = channel;
    }

    public void run() {
        try {
            for (int i = 0; true; i++) {
                Request request = new Request(getName(), i);
                channel.putRequest(request);
                Thread.sleep(random.nextInt(1000));
            }
        } catch (InterruptedException e) {
        }
    }
}
Test class
public class Main {
    public static void main(String[] args) {
        Channel channel = new Channel(5); // number of worker threads
        channel.startWorkers();
        new ClientThread("Alice", channel).start();
        new ClientThread("Bobby", channel).start();
        new ClientThread("Chris", channel).start();
    }
}
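As an aside (this is not part of the book's example), the standard library has since packaged this pattern ready-made: java.util.concurrent.ExecutorService combines the worker-thread pool and the request queue in one object. A rough equivalent of the Main class above might look like this:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorMain {
    public static void main(String[] args) throws InterruptedException {
        // A fixed pool of 5 worker threads backed by an internal work queue
        ExecutorService pool = Executors.newFixedThreadPool(5);
        for (int i = 0; i < 3; i++) {
            final int number = i; // capture the loop variable for the task
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println(Thread.currentThread().getName()
                            + " executes request No." + number);
                }
            });
        }
        pool.shutdown(); // stop accepting new work; queued tasks still run
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Unlike the hand-rolled Channel, the executor also supports graceful shutdown, which the infinite worker loops above do not.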

============================ Future pattern ============================

Future means "future," as in a futures contract. Suppose a method takes a long time to execute. Instead of waiting for the result, the caller immediately receives a kind of claim ticket, and handing out the ticket takes no time: that ticket is the Future participant. The thread holding the Future participant collects the real result later, just as one picks up a cake with a claim check. If the result is already available, it can be fetched right away; if it is not ready yet, the caller waits until it appears.
The Thread-Per-Message pattern hands time-consuming work to other threads to improve the responsiveness of a program, but it does not help when we need the result of that work. Synchronous execution would take time and hurt responsiveness, while asynchronous execution cannot deliver the result immediately. The Future pattern resolves this tension. First we create a Future participant with the same API as the real result. When processing starts, the Future participant is returned right away; only when the other thread finishes is the real result set into it. The client participant then obtains the result through the Future participant. With this pattern, responsiveness is preserved and the desired result is still obtained.
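A minimal hand-rolled sketch of the pattern (the names FutureData and Host are mine, and the "slow work" is simulated) combines it with guarded suspension: getResult blocks until a worker thread has called setResult.

```java
// FutureData: the "claim ticket". setResult is called by the worker thread;
// getResult blocks the caller until the value is ready.
public class FutureData {
    private String result;
    private boolean ready = false;

    public synchronized void setResult(String result) {
        if (ready) return;  // ignore a second set
        this.result = result;
        this.ready = true;
        notifyAll();        // wake up any threads waiting in getResult
    }

    public synchronized String getResult() {
        while (!ready) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }
        return result;
    }
}

// Host: starts the slow work in another thread and returns the ticket at once
class Host {
    public FutureData request(final int count, final char c) {
        final FutureData future = new FutureData();
        new Thread(new Runnable() {
            public void run() {
                // Simulated slow work: build a string of count copies of c
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < count; i++) {
                    sb.append(c);
                }
                future.setResult(sb.toString());
            }
        }).start();
        return future;  // returned immediately, before the work finishes
    }
}
```

The standard library offers the same idea as java.util.concurrent.Future and FutureTask, where get() plays the role of getResult above.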
