"Java Concurrency Programming" 19: Concurrent new features-executor framework and thread pool (including code) __ algorithm


Please indicate the source when reprinting: http://blog.csdn.net/ns_code/article/details/17465497


Introduction to the Executor framework

Java 5 introduced a new set of APIs for starting, scheduling, and managing threads. The Executor framework, added in Java 5, uses a thread-pool mechanism in the java.util.concurrent package to control the startup, execution, and shutdown of threads, which simplifies concurrent programming. Therefore, since Java 5, starting a thread through an Executor is better than calling a Thread's start() method directly. Besides being easier to manage and more efficient (thread pooling saves the cost of creating threads), there is a key point: it helps avoid the "this escape" problem. If a thread is started inside a constructor, another task may begin executing before the constructor finishes and may therefore see a half-initialized object.


The Executor framework includes the thread pool, Executor, Executors, ExecutorService, CompletionService, Future, Callable, and so on.


The Executor interface defines a single method, execute(Runnable command), which receives a Runnable instance and uses it to execute a task; the task is an instance of a class that implements the Runnable interface. The ExecutorService interface extends Executor and provides a richer API for multithreading: for example, ExecutorService provides methods for shutting itself down and for producing Futures that track the progress of one or more asynchronous tasks. You can call ExecutorService's shutdown() method to shut it down smoothly: once invoked, the ExecutorService stops accepting new tasks and waits for the already-submitted tasks to finish (submitted tasks fall into two categories, those already executing and those not yet started), and the ExecutorService closes once all submitted tasks have completed. Therefore, we generally use this interface to create and manage multiple threads.


The ExecutorService lifecycle has three states: running, shutdown, and terminated. After it is created it is in the running state. When the shutdown() method is invoked it enters the shutdown state, which means the ExecutorService no longer accepts new tasks but keeps executing the tasks that have already been submitted; when all of those tasks have finished it reaches the terminated state. If shutdown() is never called, the ExecutorService stays in the running state, continuously receiving and executing new tasks; a server generally does not need to shut it down and keeps it running.
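
To make these three states concrete, here is a minimal sketch (the class name LifecycleDemo and the 5-second wait are illustrative assumptions, not from the original article): shutdown() moves the pool from running to shutdown, and awaitTermination() together with isTerminated() observes the terminated state.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LifecycleDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService service = Executors.newFixedThreadPool(2);
        service.execute(new Runnable() {
            public void run() {
                System.out.println("task running on " + Thread.currentThread().getName());
            }
        });
        service.shutdown();  // running -> shutdown: no new tasks are accepted
        // wait up to 5 seconds for the already-submitted tasks to finish
        if (service.awaitTermination(5, TimeUnit.SECONDS)) {
            System.out.println("terminated: " + service.isTerminated());  // prints "terminated: true"
        }
    }
}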



Executors provides a series of factory methods for creating thread pools; the returned thread pools all implement the ExecutorService interface.

public static ExecutorService newFixedThreadPool(int nThreads)

Creates a thread pool with a fixed number of threads.

public static ExecutorService newCachedThreadPool()

Creates a cacheable thread pool. Calling execute() reuses previously constructed threads if any are available; if no existing thread is available, a new thread is created and added to the pool. Threads that have not been used for 60 seconds are terminated and removed from the cache.

public static ExecutorService newSingleThreadExecutor()

Creates a single-threaded executor.

public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize)

Creates a thread pool that supports delayed and periodic task execution; in most cases it can be used instead of the Timer class.
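
As a small, hedged illustration of this last factory method (the class name ScheduledDemo, the delay values, and the sleep are arbitrary choices made for this sketch, not from the original article), a scheduled pool can run a periodic task much as java.util.Timer would:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledDemo {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        // run the task after an initial delay of 1 second, then every 2 seconds
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.out.println("tick on " + Thread.currentThread().getName());
            }
        }, 1, 2, TimeUnit.SECONDS);
        Thread.sleep(7000);   // let a few ticks happen, then stop the pool
        scheduler.shutdown();
    }
}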


All four of these factory methods use the default ThreadFactory in Executors to create their threads. Below is a comparison of the four methods.




newCachedThreadPool()

- Cached pool: first checks whether there are previously created threads in the pool; if so, they are reused. If not, a new thread is created and added to the pool.
- A cached pool is typically used to execute short-lived asynchronous tasks, so it is not used much in connection-oriented daemon servers. For short-lived asynchronous tasks, however, it is the first choice of Executor.
- A thread can only be reused while its idle time stays within the timeout; the default timeout is 60s. A thread idle longer than that is terminated and removed from the pool.
- Note that with a CachedThreadPool you do not need to worry about ending its threads: once they are inactive longer than the timeout, they terminate automatically.



newFixedThreadPool(int)

- newFixedThreadPool is similar to newCachedThreadPool in that threads can be reused, but it cannot create new threads at will.
- Its distinguishing feature: at any point in time there is at most a fixed number of active threads. If a new task arrives at that point, it can only wait in a queue until one of the current threads finishes and becomes free.
- Unlike the cached pool, the fixed pool has no idle-timeout mechanism (or if it has one, the documentation does not mention it, so it must be very long, somewhat like relying on TCP or UDP idle mechanisms at a higher layer). The fixed pool is therefore mostly used for stable, regular concurrent work, mostly on servers.
- Looking at the source of these methods, the cached pool and the fixed pool call the same underlying pool implementation, only with different parameters:
  fixed pool: a fixed thread count and 0 seconds of idle time (no idle timeout)
  cached pool: a thread count from 0 to Integer.MAX_VALUE (apparently without considering the host's resource capacity) and 60 seconds of idle time

newScheduledThreadPool(int)

- A scheduling thread pool.
- Threads in this pool can be scheduled to execute after a delay, or to execute periodically.

newSingleThreadExecutor()

- Single-threaded: at any time there is only one thread in the pool.
- It uses the same underlying pool as the cached and fixed pools, but with the thread count fixed at 1 (both the core and maximum size are 1) and 0 seconds of idle time (no idle timeout).


In general, a CachedThreadPool creates as many threads as are needed during program execution and then stops creating new threads as it reuses the old ones, so it is a reasonable first choice of Executor. Only if this approach causes problems, for example when you need a large number of long-lived connected threads, should you consider a FixedThreadPool. (This paragraph is excerpted from the fourth edition of Thinking in Java.)

Executor executes Runnable tasks

An ExecutorService instance is obtained from one of the four static factory methods of Executors above, and then the instance's execute(Runnable command) method is invoked. Once a Runnable task is passed to the execute() method, it is automatically executed on some thread. Here is sample code in which an Executor executes Runnable tasks:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TestCachedThreadPool {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newCachedThreadPool();
        // ExecutorService executorService = Executors.newFixedThreadPool(5);
        // ExecutorService executorService = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 5; i++) {
            executorService.execute(new TestRunnable());
            System.out.println("************* a" + i + " *************");
        }
        executorService.shutdown();
    }
}

class TestRunnable implements Runnable {
    public void run() {
        System.out.println(Thread.currentThread().getName() + " thread was invoked.");
    }
}
The results of one execution are as follows:


As you can see from the results, pool-1-thread-1 and pool-1-thread-2 are each invoked twice, and which thread runs which task is random. execute() first picks an existing idle thread in the pool to run the task; if the pool has no idle thread, it creates a new thread to run the task.


Executor executes Callable tasks

Since Java 5, tasks fall into two categories: classes that implement the Runnable interface and classes that implement the Callable interface. Both can be executed by an ExecutorService, but a Runnable task returns no value while a Callable task does. In addition, Callable's call() method can only be executed through ExecutorService's submit(Callable<T> task) method, which returns a Future<T> representing the task awaiting completion.


The Callable interface is similar to Runnable; both are designed for classes whose instances may be executed by another thread. However, Runnable does not return a result and cannot throw a checked exception, whereas Callable returns a result and may throw an exception when that result is retrieved. Callable's call() method is analogous to Runnable's run() method; the difference is that call() has a return value and run() does not.


When a Callable object is passed to ExecutorService's submit method, the call() method is automatically executed on a thread and a Future object representing the execution result is returned. Similarly, when a Runnable object is passed to ExecutorService's submit method, the run() method is automatically executed on a thread and a Future object is returned, but calling get() on that Future returns null.
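
A minimal sketch of this difference (the class name SubmitDemo and the values used are illustrative assumptions, not from the original article): submit(Callable) returns a Future whose get() yields the computed value, while submit(Runnable) returns a Future whose get() yields null once the task has finished.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService service = Executors.newCachedThreadPool();
        Future<Integer> withResult = service.submit(new Callable<Integer>() {
            public Integer call() {
                return 42;                      // value returned through the Future
            }
        });
        Future<?> noResult = service.submit(new Runnable() {
            public void run() { /* no return value */ }
        });
        System.out.println(withResult.get());   // prints 42
        System.out.println(noResult.get());     // prints null once the task completes
        service.shutdown();
    }
}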


Here is sample code in which an Executor executes a Callable task:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class CallableDemo {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newCachedThreadPool();
        List<Future<String>> resultList = new ArrayList<Future<String>>();

        // Create 10 tasks and execute them
        for (int i = 0; i < 10; i++) {
            // ExecutorService executes the Callable task and saves the result in a Future variable
            Future<String> future = executorService.submit(new TaskWithResult(i));
            // store the result of the task execution in the list
            resultList.add(future);
        }

        // Traverse the results of the tasks
        for (Future<String> fs : resultList) {
            try {
                while (!fs.isDone());         // if the Future has not completed, loop and wait until it has
                System.out.println(fs.get()); // print the result of each thread (task)
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            } finally {
                // start an orderly shutdown: previously submitted tasks are executed, but no new tasks are accepted
                executorService.shutdown();
            }
        }
    }
}

class TaskWithResult implements Callable<String> {
    private int id;

    public TaskWithResult(int id) {
        this.id = id;
    }

    /**
     * The actual work of the task. Once the task is passed to ExecutorService's submit method,
     * this method is automatically executed on a thread.
     */
    public String call() throws Exception {
        System.out.println("call() method is automatically invoked... " + Thread.currentThread().getName());
        // the returned result will be obtained by Future's get method
        return "call() method is automatically invoked, and the result returned by the task is: " + id + " " + Thread.currentThread().getName();
    }
}
The results of one execution are as follows:


As can be seen from the results, submit also prefers an idle thread to execute the task and creates a new thread only if none is available. Also note that if the Future has not yet completed, the get() method blocks and waits until the Future completes; the isDone() method can be used to check whether the Future has completed.
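
A side note on the while (!fs.isDone()); busy-wait in the example above: since get() already blocks, a hedged alternative (the class name GetWithTimeoutDemo and the 2-second timeout are arbitrary choices for this sketch) is to use the timed form of get(), which bounds the wait and lets the caller cancel a task that takes too long.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class GetWithTimeoutDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService service = Executors.newSingleThreadExecutor();
        Future<String> fs = service.submit(new Callable<String>() {
            public String call() throws Exception {
                Thread.sleep(500);   // simulate some work
                return "done";
            }
        });
        try {
            // get() blocks until the result is available; the timed form bounds the wait
            System.out.println(fs.get(2, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            fs.cancel(true);         // give up on the task if it takes too long
        } finally {
            service.shutdown();
        }
    }
}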



Custom thread pools

A custom thread pool can be created with the ThreadPoolExecutor class, which has several constructors; it is easy to implement a custom thread pool with this class. First, the sample program:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolTest {
    public static void main(String[] args) {
        // create the waiting queue
        BlockingQueue<Runnable> bqueue = new ArrayBlockingQueue<Runnable>(20);
        // create a thread pool that keeps 3 threads in the pool and allows at most 5 threads
        ThreadPoolExecutor pool = new ThreadPoolExecutor(3, 5, 50, TimeUnit.MILLISECONDS, bqueue);
        // create seven tasks
        Runnable t1 = new MyThread();
        Runnable t2 = new MyThread();
        Runnable t3 = new MyThread();
        Runnable t4 = new MyThread();
        Runnable t5 = new MyThread();
        Runnable t6 = new MyThread();
        Runnable t7 = new MyThread();
        // each task is executed on a thread in the pool
        pool.execute(t1);
        pool.execute(t2);
        pool.execute(t3);
        pool.execute(t4);
        pool.execute(t5);
        pool.execute(t6);
        pool.execute(t7);
        // close the thread pool
        pool.shutdown();
    }
}

class MyThread implements Runnable {
    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " is executing...");
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
The results of the operation are as follows:

As you can see from the results, the seven tasks are executed on three threads in the thread pool. Below is a brief description of each parameter in the ThreadPoolExecutor constructor used above.

public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue)

corePoolSize: the number of core threads kept in the thread pool, including idle threads.

maximumPoolSize: the maximum number of threads allowed in the pool.

keepAliveTime: the maximum time that an idle thread can stay in the thread pool.

unit: the time unit of keepAliveTime.

workQueue: the queue that holds tasks before they are executed; it holds only the Runnable tasks submitted by the execute method.

From the ThreadPoolExecutor source we can see that when an attempt is made to add a Runnable task to the thread pool through the execute method, it is handled in the following order:

1. If the number of threads in the pool is less than corePoolSize, a new thread is created to execute the newly added task, even if there are idle threads in the pool;

2. If the number of threads in the pool is greater than or equal to corePoolSize but the buffer queue workQueue is not full, the new task is placed in workQueue and waits for execution in FIFO order (once threads in the pool become idle, tasks in the buffer queue are handed to the idle threads in turn);

3. If the number of threads in the pool is greater than or equal to corePoolSize and the buffer queue workQueue is full, but the number of threads in the pool is less than maximumPoolSize, a new thread is created to handle the added task;

4. If the number of threads in the pool equals maximumPoolSize, the task is handed to a RejectedExecutionHandler, which provides 4 predefined ways to handle the overflow (the 5-parameter constructor shown above delegates to a constructor whose last parameter is of type RejectedExecutionHandler). These are not elaborated here; to understand them, you can read the source code.

To sum up: when there is a new task to process, the pool first checks whether the number of threads has reached corePoolSize, then whether the buffer queue workQueue is full, and finally whether the number of threads has reached maximumPoolSize.

In addition, when the number of threads in the pool is greater than corePoolSize, a thread whose idle time exceeds keepAliveTime is removed from the pool, so the number of threads in the pool can be adjusted dynamically.
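
To make the RejectedExecutionHandler concrete, here is a hedged sketch (the pool sizes, queue capacity, and class name RejectionDemo are arbitrary choices, not from the original article). It uses the constructor that additionally accepts a handler and picks one of the four predefined policies, CallerRunsPolicy, so that tasks which overflow both the queue and maximumPoolSize run in the submitting thread instead of being rejected with an exception.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10),
                new ThreadPoolExecutor.CallerRunsPolicy()); // one of the 4 predefined handlers
        for (int i = 0; i < 20; i++) {
            final int id = i;
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println("task " + id + " on " + Thread.currentThread().getName());
                }
            });
        }
        pool.shutdown();
    }
}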

Now let's look at the Executors source code. The newCachedThreadPool factory method uses the ThreadPoolExecutor constructor without a RejectedExecutionHandler parameter (that is, without the extra parameter that specifies how tasks are handled once the number of threads exceeds maximumPoolSize). It is structured as follows:

    public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }
It sets corePoolSize to 0 and maximumPoolSize to Integer.MAX_VALUE, and a thread that has been idle for more than 60 seconds is removed from the thread pool. Because the core thread count is 0, every time a task is added the pool looks for an idle thread; if there is none, a new thread is created (because of the SynchronousQueue<Runnable>, discussed below) to execute the task and is added to the pool. Since the maximum allowed number of threads is Integer.MAX_VALUE, this thread pool can in theory keep growing.

Next, look at the newFixedThreadPool factory method without the RejectedExecutionHandler parameter:

    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }
It sets both corePoolSize and maximumPoolSize to nThreads, so the size of the thread pool is fixed and will not grow dynamically. In addition, keepAliveTime is set to 0, which means that any thread beyond the core size would be removed from the pool as soon as it becomes idle. As for LinkedBlockingQueue, it is discussed below.

Here are some strategies for queuing:

1. Direct handoff. The buffer queue uses a SynchronousQueue, which hands tasks directly to threads without holding them. If no thread is immediately available to run the task (that is, all threads in the pool are working), the attempt to add the task to the buffer queue fails, so a new thread is constructed to handle the newly added task and is added to the pool. Direct handoffs generally require an unbounded maximumPoolSize (Integer.MAX_VALUE) to avoid rejecting newly submitted tasks. This is the strategy newCachedThreadPool uses.

2. Unbounded queues. Using an unbounded queue (typically a LinkedBlockingQueue without a predefined capacity, which in theory can queue an unlimited number of tasks) causes new tasks to wait in the queue whenever all corePoolSize threads are busy, so no more than corePoolSize threads are ever created and the value of maximumPoolSize has no effect. This is the strategy newFixedThreadPool uses. Both strategies are sketched in the example after this list.
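
The sketch below is only an illustration of the two strategies (the pool sizes and the class name QueueStrategyDemo are arbitrary assumptions, not from the original article): the queue passed to ThreadPoolExecutor selects the behavior, with SynchronousQueue giving the direct-handoff behavior of newCachedThreadPool and an unbounded LinkedBlockingQueue giving the fixed-size behavior of newFixedThreadPool.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueStrategyDemo {
    public static void main(String[] args) {
        // Direct handoff: no buffering, so the maximum pool size must be large
        // enough (here unbounded) to absorb bursts, as in newCachedThreadPool.
        ThreadPoolExecutor handoffPool = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());

        // Unbounded queue: tasks wait in the queue when all core threads are busy,
        // so the pool never grows beyond corePoolSize, as in newFixedThreadPool.
        ThreadPoolExecutor queuedPool = new ThreadPoolExecutor(
                4, 4, 0, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());

        handoffPool.shutdown();
        queuedPool.shutdown();
    }
}

In practice, choosing the queue is therefore just as important as choosing the pool sizes themselves.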
