Evaluation and implementation of the Active Object pattern
The Active Object pattern implements asynchronous programming by separating the invocation of a method from its execution. This helps increase concurrency and thus improve the throughput of a system.
Another benefit of the Active Object pattern is that it separates the submission of a task (a MethodRequest, created by calling an asynchronous method) from the execution policy of that task. The execution policy is encapsulated inside the Scheduler implementation class, so it is not "visible" to the outside, and changing it does not affect other code, which reduces coupling in the system. The execution policy of a task can reflect decisions such as the following:
- In what order are tasks executed, e.g., FIFO, LIFO, or a priority order based on information contained in the tasks?
- How many tasks can be executed concurrently?
- How many tasks can be queued for execution?
- If a task is rejected because the system is overloaded, which task should be selected as the victim, and how should the application be notified?
- What needs to be done before and after a task executes?
This means, for example, that the order in which tasks are executed can differ from the order in which they were submitted, and that tasks may be executed by a single thread or by multiple threads, as the sketch below illustrates.
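As one illustration of such an execution policy, the following is a minimal sketch (not part of this article's case code; all class and field names are made up for the example) of priority-ordered task execution using a ThreadPoolExecutor backed by a PriorityBlockingQueue:

import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PriorityPolicySketch {
    // A task that carries its own priority information.
    static class PrioritizedTask implements Runnable, Comparable<PrioritizedTask> {
        final int priority;
        final String name;

        PrioritizedTask(int priority, String name) {
            this.priority = priority;
            this.name = name;
        }

        @Override
        public int compareTo(PrioritizedTask other) {
            // Higher priority values are taken from the queue first.
            return Integer.compare(other.priority, this.priority);
        }

        @Override
        public void run() {
            System.out.println("Running " + name);
        }
    }

    public static void main(String[] args) {
        // A single worker thread whose work queue orders queued tasks by priority,
        // so the execution order may differ from the submission order.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
                new PriorityBlockingQueue<Runnable>());
        executor.execute(new PrioritizedTask(1, "low-priority task"));
        executor.execute(new PrioritizedTask(5, "medium-priority task"));
        executor.execute(new PrioritizedTask(10, "high-priority task"));
        executor.shutdown();
    }
}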
Of course, there is always a price behind the benefits, and the Active Object pattern has its cost in implementing asynchronous programming. The pattern involves six or more participants, and its execution includes a number of intermediate steps: creating MethodRequest objects, moving MethodRequest objects in and out of the buffer, scheduling the execution of MethodRequest objects, and thread context switches. These steps cost both space and time. Therefore, the Active Object pattern is suited to decomposing time-consuming tasks (such as tasks involving I/O operations): separating the initiation of a task from its execution reduces unnecessary waiting.
Although the pattern has many participants, as the implementation code for this case demonstrates, most of them can be implemented with classes provided by the JDK itself, which saves coding time, as shown in Table 1.
Table 1. Implementing some Active Object participants with existing JDK classes
| Participant name | JDK class that can be reused | Notes |
| --- | --- | --- |
| Scheduler | An implementation class of the java.util.concurrent.ExecutorService interface from the Java Executor framework, such as java.util.concurrent.ThreadPoolExecutor | The submit(Callable<T> task) method defined by the ExecutorService interface is equivalent to the enqueue method in Figure 2. |
| ActivationQueue | java.util.concurrent.LinkedBlockingQueue | If the Scheduler uses java.util.concurrent.ThreadPoolExecutor, a java.util.concurrent.LinkedBlockingQueue instance is passed as a parameter to the ThreadPoolExecutor constructor. |
| MethodRequest | An anonymous implementation class of the java.util.concurrent.Callable interface | The advantage of the Callable interface over the Runnable interface is that its call method has a return value, which makes it convenient to pass the result to the Future instance. |
| Future | java.util.concurrent.Future | The return type of the submit(Callable<T> task) method defined by the ExecutorService interface is java.util.concurrent.Future. |
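As a minimal sketch of the mapping in Table 1 (this is not the case code from the earlier listings; the class name, pool sizes, and strings below are illustrative only), a ThreadPoolExecutor can act as the Scheduler, its LinkedBlockingQueue as the ActivationQueue, an anonymous Callable as the MethodRequest, and the returned Future as the Future participant:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class JdkParticipantsSketch {
    public static void main(String[] args) throws Exception {
        // ThreadPoolExecutor plays the Scheduler; its LinkedBlockingQueue is the ActivationQueue.
        ExecutorService scheduler = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());

        // An anonymous Callable plays the MethodRequest.
        Future<String> future = scheduler.submit(new Callable<String>() {
            @Override
            public String call() throws Exception {
                return "result"; // the return value is conveyed to the Future
            }
        });

        System.out.println(future.get()); // the Future participant holds the result
        scheduler.shutdown();
    }
}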
Error isolation
Error isolation means that a problem in processing one task does not affect the processing of other tasks. Each MethodRequest instance can be regarded as a task, so the Scheduler implementation class needs to take care of error isolation when executing MethodRequests. One benefit of using the JDK's out-of-the-box classes (such as ThreadPoolExecutor) to implement the Scheduler is that these classes already implement error isolation. If you write the Scheduler yourself and execute all tasks one at a time with a single Active Object worker thread, you need to pay particular attention to exception handling in the thread's run method, ensuring that the entire thread is not terminated by a runtime exception thrown by an individual task, as shown in the example code in Listing 6.
Listing 6. Error isolation in a hand-written Scheduler (sample code)
import java.util.concurrent.Callable;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;
import java.util.concurrent.LinkedBlockingQueue;

public class CustomScheduler implements Runnable {
    private final LinkedBlockingQueue<Runnable> activationQueue =
            new LinkedBlockingQueue<Runnable>();

    @Override
    public void run() {
        dispatch();
    }

    public <T> Future<T> enqueue(Callable<T> methodRequest) {
        final FutureTask<T> task = new FutureTask<T>(methodRequest) {
            @Override
            public void run() {
                try {
                    super.run();
                } catch (Throwable t) {
                    // Capture anything that may be thrown, preventing a failed
                    // task from terminating the thread it runs on.
                    this.setException(t);
                }
            }
        };
        try {
            activationQueue.put(task);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return task;
    }

    public void dispatch() {
        while (true) {
            Runnable methodRequest;
            try {
                methodRequest = activationQueue.take();
                // The overridden run method above prevents the execution of an
                // individual task from terminating this thread.
                methodRequest.run();
            } catch (InterruptedException e) {
                // Handle the exception
            }
        }
    }
}
Buffer monitoring
If the ActivationQueue is a bounded buffer, monitoring the current size of the buffer is meaningful both for operations and for testing. From a testing point of view, monitoring the buffer helps determine a reasonable value for the buffer capacity. The code shown in Listing 3 monitors the size of the buffer by periodically calling ThreadPoolExecutor's getQueue method from a timed task. When monitoring a buffer you usually only need an approximate value, so avoid unnecessary locking in the monitoring code.
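The following is a minimal sketch of such periodic monitoring, assuming a ThreadPoolExecutor-based Scheduler; the pool configuration, queue capacity, and sampling period are illustrative, not the values used in Listing 3:

import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueMonitorSketch {
    public static void main(String[] args) {
        // A hypothetical Scheduler with a bounded ActivationQueue.
        final ThreadPoolExecutor scheduler = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(200));

        // A timed task that periodically samples the buffer size.
        ScheduledExecutorService monitor = Executors.newSingleThreadScheduledExecutor();
        monitor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                // An approximate reading is sufficient; no extra locking is used.
                System.out.println("ActivationQueue size: " + scheduler.getQueue().size());
            }
        }, 0, 5, TimeUnit.SECONDS);
    }
}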
Buffer saturation handling policy
When the rate at which tasks are submitted is greater than the rate at which they are executed, the buffer may gradually fill up, and newly submitted tasks are then rejected. Whether you write your own code or use existing JDK classes to implement the Scheduler, you need a handling policy that determines which task becomes the "victim" when the buffer is full and a task submission fails. The advantage of using ThreadPoolExecutor to implement the Scheduler is that it already provides several implementations of buffer saturation handling policies that application code can use directly. As the code in Listing 3 shows, this case chooses discarding the oldest task as its handling policy. The java.util.concurrent.RejectedExecutionHandler interface is ThreadPoolExecutor's abstraction of the buffer saturation handling policy; the concrete implementations provided by the JDK are shown in Table 2.
Table 2. Buffer saturation handling policy implementation classes provided by the JDK
| Implementation class | Handling policy |
| --- | --- |
| ThreadPoolExecutor.AbortPolicy | Throws an exception directly. |
| ThreadPoolExecutor.DiscardPolicy | Discards the currently rejected task (without throwing any exception). |
| ThreadPoolExecutor.DiscardOldestPolicy | Discards the oldest task in the buffer and tries to submit the rejected task again. |
| ThreadPoolExecutor.CallerRunsPolicy | Runs the rejected task in the thread that submitted it. |
Of course, for ThreadPoolExecutor, a full work queue does not necessarily mean that newly submitted tasks are rejected. If its maximum pool size is larger than its core pool size, then once the work queue is full, newly submitted tasks are executed by new threads created beyond the core threads, and tasks are only rejected after the number of worker threads reaches the maximum pool size.
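To make this concrete, here is a hedged sketch of configuring a ThreadPoolExecutor-based Scheduler with a bounded queue and the discard-oldest policy chosen in this case; the specific sizes, capacity, and timeout are illustrative, not those of Listing 3:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SaturationPolicySketch {
    public static ThreadPoolExecutor newScheduler() {
        return new ThreadPoolExecutor(
                1,                                     // core pool size
                4,                                     // maximum pool size: once the queue is full,
                                                       // extra threads are created up to this limit
                60, TimeUnit.SECONDS,                  // keepAliveTime for threads beyond the core
                new ArrayBlockingQueue<Runnable>(100), // bounded ActivationQueue
                new ThreadPoolExecutor.DiscardOldestPolicy()); // saturation handling policy
    }
}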
Scheduler idle worker thread cleanup
If the Scheduler uses multiple worker threads to execute tasks, such as a thread pool backed by ThreadPoolExecutor, you may need to clean up idle threads to conserve resources. The code in Listing 3 directly uses ThreadPoolExecutor's built-in support for this: when the instance is initialized, the 3rd and 4th parameters of its constructor (long keepAliveTime, TimeUnit unit) tell ThreadPoolExecutor that threads beyond the core worker threads that have been idle for the specified time should be cleaned up.
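As a small sketch of this configuration (the pool sizes and timeout are illustrative, not those of Listing 3), note also that allowCoreThreadTimeOut can be used if even core threads should be reclaimed when idle:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class IdleThreadCleanupSketch {
    public static ThreadPoolExecutor newScheduler() {
        ThreadPoolExecutor scheduler = new ThreadPoolExecutor(
                2, 8,
                30, TimeUnit.SECONDS,     // 3rd and 4th parameters: idle threads beyond the
                                          // core pool are cleaned up after 30 seconds
                new LinkedBlockingQueue<Runnable>());
        // Optionally allow core threads to time out as well.
        scheduler.allowCoreThreadTimeOut(true);
        return scheduler;
    }
}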
Reusable Active Object pattern implementation
Although the use of off-the-shelf JDK classes can greatly simplify the implementation of the Active Object pattern, if you need to use the pattern frequently in different scenarios, a more reusable implementation saves coding time and makes the code easier to understand. Listing 7 shows the implementation code for the Proxy participant of a reusable Active Object pattern based on Java dynamic proxies.
Listing 7. Implementation of the Proxy participant of a reusable Active Object pattern
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public abstract class ActiveObjectProxy {

    private static class DispatchInvocationHandler implements InvocationHandler {
        private final Object delegate;
        private final ExecutorService scheduler;

        public DispatchInvocationHandler(Object delegate, ExecutorService executorService) {
            this.delegate = delegate;
            this.scheduler = executorService;
        }

        private String makeDelegateMethodName(final Method method, final Object[] arg) {
            String name = method.getName();
            name = "do" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
            return name;
        }

        @Override
        public Object invoke(final Object proxy, final Method method, final Object[] args)
                throws Throwable {
            Object returnValue = null;
            final Object delegate = this.delegate;
            final Method delegateMethod;

            // If the intercepted method is an asynchronous method, forward it to the
            // corresponding doXxx method.
            if (Future.class.isAssignableFrom(method.getReturnType())) {
                delegateMethod = delegate.getClass().getMethod(
                        makeDelegateMethodName(method, args), method.getParameterTypes());
                final ExecutorService scheduler = this.scheduler;
                Callable<Object> methodRequest = new Callable<Object>() {
                    @Override
                    public Object call() throws Exception {
                        Object rv = null;
                        try {
                            rv = delegateMethod.invoke(delegate, args);
                        } catch (IllegalArgumentException e) {
                            throw new Exception(e);
                        } catch (IllegalAccessException e) {
                            throw new Exception(e);
                        } catch (InvocationTargetException e) {
                            throw new Exception(e);
                        }
                        return rv;
                    }
                };
                Future<Object> future = scheduler.submit(methodRequest);
                returnValue = future;
            } else {
                // If the intercepted method is not an asynchronous method, forward it directly.
                delegateMethod = delegate.getClass().getMethod(method.getName(),
                        method.getParameterTypes());
                returnValue = delegateMethod.invoke(delegate, args);
            }
            return returnValue;
        }
    }

    /**
     * Generates a Proxy participant instance of the Active Object that implements the
     * specified interface. Calls to the asynchronous methods defined by interf are
     * delegated to the corresponding doXxx methods of servant.
     *
     * @param interf    the interface to be implemented by the Active Object
     * @param servant   the Servant participant instance of the Active Object
     * @param scheduler the Scheduler participant instance of the Active Object
     * @return the Proxy participant instance of the Active Object
     */
    public static <T> T newInstance(Class<T> interf, Object servant, ExecutorService scheduler) {
        @SuppressWarnings("unchecked")
        T f = (T) Proxy.newProxyInstance(interf.getClassLoader(),
                new Class[] { interf },
                new DispatchInvocationHandler(servant, scheduler));
        return f;
    }
}
The code in Listing 7 implements ActiveObjectProxy, a reusable Proxy participant for the Active Object pattern. ActiveObjectProxy uses a Java dynamic proxy to generate a proxy object for the specified interface. A call to an asynchronous method of the proxy object (that is, a method whose return type is java.util.concurrent.Future) is intercepted by DispatchInvocationHandler, the InvocationHandler implemented by ActiveObjectProxy, and forwarded to the corresponding doXxx method of the Servant specified in ActiveObjectProxy's newInstance method.
The code shown in Listing 8 demonstrates how to implement the Active Object pattern quickly by using ActiveObjectProxy.
Listing 8. Quick implementation of the Active Object pattern based on the reusable API
public static void main(String[] args) throws InterruptedException, ExecutionException {
    SampleActiveObject sao = ActiveObjectProxy.newInstance(SampleActiveObject.class,
            new SampleActiveObjectImpl(), Executors.newCachedThreadPool());
    Future<String> ft = sao.process(1);
    Thread.sleep(500);
    System.out.println(ft.get());
}
As the code in Listing 8 shows, with the reusable Active Object Proxy implementation, an application developer only needs to define the interface that the Active Object exposes to the outside (the 1st parameter of ActiveObjectProxy.newInstance), provide an implementation class for that interface (the 2nd parameter of ActiveObjectProxy.newInstance), and specify a java.util.concurrent.ExecutorService instance (the 3rd parameter of ActiveObjectProxy.newInstance) in order to implement the Active Object pattern.
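The SampleActiveObject interface and SampleActiveObjectImpl servant used in Listing 8 are not shown in this article. A hypothetical sketch consistent with ActiveObjectProxy's conventions (a Future-typed asynchronous method on the interface, and a public doXxx method with the same parameter list on the servant) might look like the following; the exact signatures are assumptions, not the article's actual source:

// SampleActiveObject.java (hypothetical): the interface exposed by the Active Object.
// The asynchronous method returns a Future, so ActiveObjectProxy dispatches it
// through the Scheduler.
import java.util.concurrent.Future;

public interface SampleActiveObject {
    Future<String> process(int input);
}

// SampleActiveObjectImpl.java (hypothetical): the Servant. It does not implement the
// interface; instead it provides the "do" + method-name variant that
// DispatchInvocationHandler.makeDelegateMethodName looks up, and it returns the
// raw result rather than a Future.
public class SampleActiveObjectImpl {
    public String doProcess(int input) {
        return "Processed: " + input;
    }
}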
Summary
This article has described the intent and architecture of the Active Object pattern and provided a practical case that demonstrates implementing the pattern in Java code. On that basis, it evaluated the pattern and shared points that deserve attention when using it in practice.
Reference Resources
- Source code for this article, available online: https://github.com/Viscent/JavaConcurrencyPattern/
- Wikipedia entry on the Active Object pattern: http://en.wikipedia.org/wiki/Active_object
- Douglas C. Schmidt's definition of the Active Object pattern: http://www.laputan.org/pub/sag/act-obj.pdf
- Schmidt, Douglas, et al. Pattern-Oriented Software Architecture, Volume 2: Patterns for Concurrent and Networked Objects. Wiley, 2000.
- Java theory and practice: Decorating with dynamic proxies: http://www.ibm.com/developerworks/java/library/j-jtp08305/index.html