Java Multithreading -- The JDK Concurrency Package (2)


Thread Pools

With a thread pool, "creating a thread" becomes "obtaining an idle thread from the pool", and "shutting down a thread" becomes "returning the thread to the pool".

The JDK provides the Executor framework, whose main members include Executor, ExecutorService, AbstractExecutorService, ThreadPoolExecutor, and Executors, all located in the java.util.concurrent package. The relationship between them is as follows:

Executor is the top-level interface; the ExecutorService interface extends it; AbstractExecutorService implements ExecutorService; and ThreadPoolExecutor extends AbstractExecutorService. If <—— denotes inheritance and <-- denotes implementing an interface, their relationship can be expressed as follows:

Executor (interface) <—— ExecutorService (interface) <-- AbstractExecutorService (abstract class) <—— ThreadPoolExecutor (class)

Executors is a separate class that can be seen as a "thread pool factory". It has many static methods, such as:

    • newFixedThreadPool(int nThreads)
    • newSingleThreadExecutor()
    • newCachedThreadPool()
    • newSingleThreadScheduledExecutor()
    • newScheduledThreadPool(int corePoolSize)

newFixedThreadPool returns a thread pool with a fixed number of threads. When a new task is submitted, it is executed immediately if there is an idle thread in the pool; otherwise it goes into the task queue and waits until a thread becomes idle.

newSingleThreadExecutor returns a thread pool with a single thread; the processing policy is the same as above. In effect, it is the previous method with the parameter fixed at 1.

newCachedThreadPool returns a thread pool that adjusts the number of threads to the actual load. When a task is submitted, an idle thread is reused if one is available; if all threads in the pool are busy, a new thread is created to handle the task and is returned to the pool once the task finishes.

newScheduledThreadPool returns a ScheduledExecutorService object, which can run scheduled tasks, such as starting after a delay or executing a task periodically. You can specify the number of threads.

newSingleThreadScheduledExecutor provides the same functionality, but with a pool size of 1.

ScheduledExecutorService has three methods for executing tasks on a schedule:

    • schedule(Runnable command, long delay, TimeUnit unit); executes a task once after the given delay.
    • scheduleAtFixedRate(Runnable command, long initialDelay, long period, TimeUnit unit); the first task starts at initialDelay, and each subsequent task is scheduled to start period after the previous task's start time, so the scheduling frequency is fixed.
    • scheduleWithFixedDelay(Runnable command, long initialDelay, long delay, TimeUnit unit); the first task starts at initialDelay, and each subsequent task starts delay after the previous task ends, so the gap between the end of one task and the start of the next is fixed at delay.

Even if a single task's execution time exceeds the period, scheduleAtFixedRate does not let executions pile up. For example, if a task takes 8s while the period is 2s, the second execution would be due before the first one finishes; to avoid stacking, the effective period stretches to 8s. With scheduleWithFixedDelay, the actual interval between two task starts becomes 10s: 8s of execution plus the 2s delay.
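The simplest of the three, schedule(), can be sketched like this (class and method names are my own; the delay value is illustrative). The helper measures how long actually passed before the one-shot task ran:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
    // Schedules a one-shot task and returns how many milliseconds actually
    // elapsed before it ran; the task never starts before the requested delay.
    public static long runAfterDelay(long delayMillis) throws Exception {
        ScheduledExecutorService ses = Executors.newScheduledThreadPool(1);
        long start = System.nanoTime();
        ScheduledFuture<Long> f = ses.schedule(
                () -> (System.nanoTime() - start) / 1_000_000L, // elapsed ms at run time
                delayMillis, TimeUnit.MILLISECONDS);
        long elapsed = f.get(); // blocks until the scheduled task has run
        ses.shutdown();
        return elapsed;
    }
}
```

The same pool object also offers scheduleAtFixedRate and scheduleWithFixedDelay for the periodic variants described above.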

Internal implementation of the thread pool
    • newFixedThreadPool(int nThreads)
    • newSingleThreadExecutor()
    • newCachedThreadPool()

All three of these internally create and return a ThreadPoolExecutor as the thread pool, so let's focus on how it is constructed.

public ThreadPoolExecutor(
    int corePoolSize,
    int maximumPoolSize,
    long keepAliveTime,
    TimeUnit unit,
    BlockingQueue<Runnable> workQueue,
    ThreadFactory threadFactory,
    RejectedExecutionHandler handler)
    • corePoolSize: the number of core threads in the pool;
    • maximumPoolSize: the maximum number of threads in the pool;
    • keepAliveTime: how long extra idle threads survive when the number of threads exceeds corePoolSize;
    • unit: the time unit of keepAliveTime;
    • workQueue: the task queue, holding tasks that have been submitted but not yet started (waiting for an idle thread);
    • threadFactory: the thread factory, customizable; usually the default;
    • handler: the rejection policy, i.e., what to do with a task when the pool cannot keep up.
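Putting the seven parameters together, a minimal sketch of calling this constructor directly (the sizes and queue capacity are illustrative choices, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    // Wires up every constructor parameter explicitly.
    public static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                                     // corePoolSize
                4,                                     // maximumPoolSize
                60L, TimeUnit.SECONDS,                 // keepAliveTime and its unit
                new ArrayBlockingQueue<>(10),          // bounded task queue
                Executors.defaultThreadFactory(),      // threadFactory (the default)
                new ThreadPoolExecutor.AbortPolicy()); // handler: reject by throwing
    }
}
```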

workQueue is an object implementing the BlockingQueue interface and holds Runnable objects. Depending on the desired behavior, the following BlockingQueue implementations can be used with ThreadPoolExecutor:

    • Direct handoff queue: corresponds to a SynchronousQueue object. It has no capacity: every insert waits for a corresponding remove, and every remove waits for a corresponding insert. With this queue, submitted tasks are not actually stored; each task is handed directly to a thread for execution. If no thread is idle, a new thread is created; if the number of threads has already reached the maximum, the rejection policy is executed.
    • Bounded task queue: implemented with ArrayBlockingQueue. When a task is submitted, the pool checks its current number of threads: if it is below corePoolSize, a new thread is created first; if it is at or above corePoolSize, the task is added to the wait queue. If the wait queue is full, a new thread is created, unless the thread count has reached maximumPoolSize, in which case the rejection policy is applied. So a bounded task queue creates extra threads only when the queue is full, and the actual number of threads usually stabilizes at corePoolSize.
    • Unbounded task queue: implemented with LinkedBlockingQueue. Compared with the ArrayBlockingQueue above, the difference is that the task queue has no size limit: once the number of threads reaches corePoolSize, new tasks simply go into the queue.
    • Priority task queue: implemented with PriorityBlockingQueue. The preceding queues process tasks in FIFO order; this one executes tasks according to the priority order of the tasks themselves.
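The key difference between the first two queue types can be shown without a pool at all (class and method names are my own): a SynchronousQueue accepts nothing unless a consumer is already waiting, while an ArrayBlockingQueue accepts items up to its capacity.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class QueueDemo {
    // A SynchronousQueue holds nothing: offer() fails unless a taker is waiting.
    public static boolean offerWithNoTaker() {
        SynchronousQueue<Runnable> q = new SynchronousQueue<>();
        return q.offer(() -> {});
    }

    // An ArrayBlockingQueue is bounded: offers beyond capacity are refused.
    public static int fillBounded(int capacity, int attempts) {
        ArrayBlockingQueue<Integer> q = new ArrayBlockingQueue<>(capacity);
        int accepted = 0;
        for (int i = 0; i < attempts; i++) if (q.offer(i)) accepted++;
        return accepted;
    }
}
```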

newFixedThreadPool sets corePoolSize and maximumPoolSize to the same value, so a fixed-size pool never adds threads beyond corePoolSize. It therefore uses a LinkedBlockingQueue: when a new task arrives and all threads are busy, the task simply goes into the wait queue.

newSingleThreadExecutor is the special case of newFixedThreadPool with corePoolSize and maximumPoolSize both set to 1.

newCachedThreadPool uses a corePoolSize of 0, a maximumPoolSize of Integer.MAX_VALUE, and a SynchronousQueue (direct handoff) as the task queue. When a new task is submitted, an idle thread takes it directly if one exists; otherwise, since this is a direct handoff queue, a new thread is created to execute it. Because corePoolSize is 0, threads are reclaimed after being idle for 60s (specified in the constructor).

Rejection Policy

What policy applies when a task cannot be accepted, i.e., the queue is full and the number of threads has reached maximumPoolSize?

    • AbortPolicy: discards the task and throws an exception;
    • CallerRunsPolicy: the task is rejected by the pool and executed by the thread that called execute;
    • DiscardOldestPolicy: discards the oldest pending task, i.e., the one that would be executed next, then retries the submission;
    • DiscardPolicy: silently discards the rejected task; in code, it simply does nothing.

Let's see how CallerRunsPolicy rejects a task:

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        r.run();
    }
}

And here is what DiscardOldestPolicy does:

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        e.getQueue().poll(); // the oldest request sits at the head of the queue
        e.execute(r);
    }
}
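Beyond the built-in policies, the handler parameter accepts any RejectedExecutionHandler. A small sketch (pool sizes, queue capacity, and the task count are my own choices) that saturates a one-thread, one-slot pool and counts how many tasks get rejected:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RejectDemo {
    // One worker thread, a queue of capacity 1, and a custom handler that
    // counts rejections (a counting variant of DiscardPolicy).
    public static int countRejected(int tasks) throws Exception {
        AtomicInteger rejected = new AtomicInteger();
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                (r, e) -> rejected.incrementAndGet()); // custom RejectedExecutionHandler
        for (int i = 0; i < tasks; i++) {
            // The first task blocks the worker; the second fills the queue;
            // everything after that is handed to the rejection handler.
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected.get();
    }
}
```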
Thread Creation - Thread Factory

ThreadFactory has only one method, Thread newThread(Runnable r); the threads in the pool are created by it.
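A common use of a custom factory is to name pool threads for easier debugging. A minimal sketch (the class name, naming scheme, and helper method are my own):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedFactory implements ThreadFactory {
    private final AtomicInteger n = new AtomicInteger();
    private final String prefix;

    public NamedFactory(String prefix) { this.prefix = prefix; }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + "-" + n.incrementAndGet());
        t.setDaemon(true); // pool threads won't block JVM exit
        return t;
    }

    // Helper: run one task and report the worker thread's name.
    public static String nameOfWorker(String prefix) throws Exception {
        ExecutorService es = Executors.newSingleThreadExecutor(new NamedFactory(prefix));
        String name = es.submit(() -> Thread.currentThread().getName()).get();
        es.shutdown();
        return name;
    }
}
```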

Fork/Join Framework

Fork means to branch or split: a big task can be broken down into small tasks. Join means to wait: you must wait for the forked subtasks to finish and produce their partial results; only then can those partial results be merged into the final result.

For example, to compute the sum of 1 to 10000, you can split the work into 10 branches, each summing 1000 numbers to get a partial sum; once all 10 partial sums are computed, merge them to get the final result.

Typically one physical thread has to handle multiple logical tasks, so each thread has its own task queue. If thread A finishes all of its tasks while B still has many pending, A will "help" B by taking tasks from the tail of B's queue, while B itself takes tasks from the head. Like two pointers moving in opposite directions, this avoids contention between A and B over the same data.

The JDK provides ForkJoinPool, which has this method: public <T> ForkJoinTask<T> submit(ForkJoinTask<T> task)

ForkJoinTask supports the fork() and join() methods. It has two important subclasses: RecursiveAction, which returns no value, and RecursiveTask, which returns a value. Both have a compute() method where the main work is done. For RecursiveAction the signature returns void; for RecursiveTask<T>, which has a result, it returns T.
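The sum-of-1-to-10000 example above maps naturally onto a RecursiveTask. A sketch (the class name and threshold are my own choices):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final long THRESHOLD = 1000; // ranges this small are summed directly
    private final long from, to; // inclusive range

    public SumTask(long from, long to) { this.from = from; this.to = to; }

    @Override
    protected Long compute() {
        if (to - from < THRESHOLD) { // base case: sum the range in a loop
            long s = 0;
            for (long i = from; i <= to; i++) s += i;
            return s;
        }
        long mid = (from + to) / 2;
        SumTask left = new SumTask(from, mid);
        SumTask right = new SumTask(mid + 1, to);
        left.fork();                          // schedule the left half asynchronously
        return right.compute() + left.join(); // compute the right half here, then wait for the left
    }

    public static long sum(long from, long to) {
        return ForkJoinPool.commonPool().invoke(new SumTask(from, to));
    }
}
```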

JDK Concurrent Containers
    • ConcurrentHashMap: an efficient concurrent HashMap; can be regarded as a thread-safe HashMap;
    • CopyOnWriteArrayList: read-read and read-write do not block; only write-write is synchronized. Performance is very good in read-heavy, write-light scenarios;
    • ConcurrentLinkedQueue: an efficient concurrent queue implemented as a linked list using CAS (Compare And Swap) operations; can be regarded as a thread-safe LinkedList;
    • BlockingQueue: an interface; the array-based ArrayBlockingQueue and the linked-list-based LinkedBlockingQueue implement it;
    • ConcurrentSkipListMap: a Map implemented with the skip list data structure.
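As a quick taste of why these containers matter, the sketch below (class name and the thread/iteration counts are my own) has several threads increment the same key of a ConcurrentHashMap; because merge() is atomic, no update is lost:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ContainerDemo {
    // Many threads increment the same key concurrently; merge() applies
    // the remapping function atomically, so the final count is exact.
    public static int countedHits(int threads, int perThread) throws Exception {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        ExecutorService es = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            es.execute(() -> {
                for (int i = 0; i < perThread; i++) map.merge("hits", 1, Integer::sum);
            });
        }
        es.shutdown();
        es.awaitTermination(10, TimeUnit.SECONDS);
        return map.get("hits");
    }
}
```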
CopyOnWriteArrayList principle

The principle of CopyOnWriteArrayList: reads proceed normally, while write-write must synchronize, so a lock is taken before writing. Then, so that reads are not blocked by writes, a write operation first copies the original array, appends the new value at the end of the new array, and, once the addition succeeds, replaces the old array with the new one.

The add method of this class in the JDK is implemented like this:

public boolean add(E e) {
    final ReentrantLock lock = this.lock; // guarantees write-write synchronization
    lock.lock();
    try {
        Object[] elements = getArray();
        int len = elements.length;
        // key! before writing, copy the original array
        Object[] newElements = Arrays.copyOf(elements, len + 1);
        // append at the end of the new array
        newElements[len] = e;
        // the new array replaces the old one
        setArray(newElements);
        return true;
    } finally {
        lock.unlock();
    }
}

And the definition of the array is this:

private transient volatile Object[] array;

Note the volatile keyword: when the writer thread replaces the array, the reader threads can "perceive" the change immediately.
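The copy-on-write behavior is observable from outside: an iterator works on the array snapshot that existed when it was created, so writes made during iteration neither disturb it nor throw ConcurrentModificationException. A sketch (class and method names are my own):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    // The iterator sees the snapshot taken at creation time: even though we
    // add elements while iterating, it still visits exactly the original three.
    public static int snapshotSizeDuringWrite() {
        List<Integer> list = new CopyOnWriteArrayList<>(new Integer[]{1, 2, 3});
        int seen = 0;
        for (Integer ignored : list) { // iterates over the old array snapshot
            list.add(99);              // each add copies the array; the iterator is unaffected
            seen++;
        }
        return seen;
    }
}
```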

BlockingQueue principle

BlockingQueue transfers data efficiently in a concurrent environment. It is essentially a queue: data enters at the tail and leaves at the head. All queues have offer() and poll(); nothing special there. What sets BlockingQueue apart is its put() and take() methods; these two are what implement the blocking.

In ArrayBlockingQueue: when the queue is empty, take() waits until it is non-empty; when the queue is full, put() waits until a slot frees up. How is this achieved? Look at the code:

/** Main lock guarding all access */
final ReentrantLock lock;
/** Condition for waiting takes */
private final Condition notEmpty;
/** Condition for waiting puts */
private final Condition notFull;

First, reads and writes share the same lock, so at any moment only one of them can be executing. Then come the conditions: notFull waits for the queue to be non-full so that put can proceed; notEmpty waits for it to be non-empty so that take can proceed.

public void put(E e) throws InterruptedException {
    checkNotNull(e);
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        // key! if the queue is full, wait
        while (count == items.length)
            notFull.await();
        enqueue(e);
    } finally {
        lock.unlock();
    }
}

private void enqueue(E x) {
    final Object[] items = this.items;
    items[putIndex] = x;
    if (++putIndex == items.length)
        putIndex = 0;
    count++;
    // key! data was inserted, so the queue is no longer empty;
    // wake a thread waiting on notEmpty (notifying it that it may now take)
    notEmpty.signal();
}

public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        // key! if the queue is empty, wait
        while (count == 0)
            notEmpty.await();
        return dequeue();
    } finally {
        lock.unlock();
    }
}

private E dequeue() {
    final Object[] items = this.items;
    @SuppressWarnings("unchecked")
    E x = (E) items[takeIndex];
    items[takeIndex] = null;
    if (++takeIndex == items.length)
        takeIndex = 0;
    count--;
    if (itrs != null)
        itrs.elementDequeued();
    // key! an element left the queue; wake a thread waiting on notFull
    // (it may now put)
    notFull.signal();
    return x;
}
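The blocking put()/take() pair is exactly what makes producer-consumer code trivial. A sketch (class and method names are my own; the capacity-1 queue is deliberately tiny to force blocking on both sides):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PcDemo {
    // One producer put()s, one consumer take()s; both block as needed, so the
    // consumer receives every item in order even through a capacity-1 queue.
    public static long sumThroughQueue(int items) throws Exception {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(1);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= items; i++) q.put(i); // blocks while the slot is full
            } catch (InterruptedException ignored) {}
        });
        producer.start();
        long sum = 0;
        for (int i = 0; i < items; i++) sum += q.take(); // blocks while the queue is empty
        producer.join();
        return sum;
    }
}
```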

LinkedBlockingQueue is similar in principle to ArrayBlockingQueue, but it uses one lock for reads and another for writes, so reads and writes can proceed at the same time.

/** Lock held by take, poll, etc */
private final ReentrantLock takeLock = new ReentrantLock();
/** Wait queue for waiting takes */
private final Condition notEmpty = takeLock.newCondition();
/** Lock held by put, offer, etc */
private final ReentrantLock putLock = new ReentrantLock();
/** Wait queue for waiting puts */
private final Condition notFull = putLock.newCondition();
Skip Lists

ConcurrentSkipListMap is implemented with a skip list, a data structure that supports fast lookup with O(log n) time complexity.

Pictured loosely, a skip list looks like a "right-triangle pyramid": each level is a linked list; the bottom list contains all the data in the map; each level is a subset of the level below, and the higher the level, the fewer the nodes. Levels are linked together through nodes holding the same value, so each node has a right pointer to the next node on its own level and a down pointer (represented in the actual data structure by an Index) to the node with the same value one level below. In addition, the elements of every linked list in a skip list are sorted.

To find an element, start searching from the top level. If it is found there, done; otherwise, once the next value would exceed the target (or the end of the level's list is reached), "jump" down to the next level's list and keep moving forward. The search moves only down and to the right, a bit like descending a staircase.
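The sorted property is visible directly through the Map API. A sketch (class and method names are my own): whatever order keys go in, iteration comes back sorted.

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipDemo {
    // Keys are returned in sorted order regardless of insertion order;
    // lookups run in O(log n) expected time.
    public static String sortedKeys() {
        ConcurrentSkipListMap<Integer, String> map = new ConcurrentSkipListMap<>();
        map.put(3, "c");
        map.put(1, "a");
        map.put(2, "b");
        StringBuilder sb = new StringBuilder();
        for (Integer k : map.keySet()) sb.append(k); // ascending key order
        return sb.toString();
    }
}
```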

by @sunhaiyu

2018.4.26
