Coupling between the executor and its tasks
1. Unless the thread pool is very large, a task should not wait for another task submitted to the same executor; doing so easily leads to thread-starvation deadlock.
2. Tasks run concurrently, so thread safety must be considered when designing a task. A task written on the assumption that it will only ever run in a single-threaded executor is coupled to that execution policy.
3. Long-running tasks can hurt the responsiveness of the other tasks. When a task must wait on something, give it a bounded timeout rather than waiting indefinitely.
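A minimal sketch of the first rule (class and method names are illustrative): an outer task in a single-threaded executor waits on an inner task submitted to the same executor. The inner task can never start because the pool's only thread is busy, so an untimed `get()` would block forever; the timed `get()` from rule 3 turns the deadlock into a recoverable timeout.

```java
import java.util.concurrent.*;

public class StarvationDemo {
    // Demonstrates thread-starvation deadlock in a single-threaded pool,
    // defused by a bounded wait instead of an indefinite one.
    static String demonstrate() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> outer = pool.submit(() -> {
            // The inner task is queued but can never run: the only thread is busy here.
            Future<String> inner = pool.submit(() -> "inner result");
            try {
                return inner.get(500, TimeUnit.MILLISECONDS); // bounded wait
            } catch (TimeoutException e) {
                return "timed out waiting for dependent task";
            }
        });
        try {
            return outer.get();
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demonstrate());
    }
}
```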
Determine the size of the thread pool
The following definitions are given:

    N_cpu     = number of CPUs
    U_cpu     = target CPU utilization, 0 <= U_cpu <= 1
    W/C       = ratio of wait time to compute time

To keep the CPUs at the desired utilization, the thread pool size should be set to:

    N_threads = N_cpu * U_cpu * (1 + W/C)

For compute-intensive tasks (W/C near 0), the optimal size is approximately N_cpu + 1.
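As a sketch (the method name and the example figures are mine, not the book's), the sizing formula can be computed directly, with the CPU count obtained from the runtime:

```java
public class PoolSizing {
    // N_threads = N_cpu * U_cpu * (1 + W/C)
    static int optimalPoolSize(int nCpu, double targetUtilization, double waitToComputeRatio) {
        return (int) (nCpu * targetUtilization * (1 + waitToComputeRatio));
    }

    public static void main(String[] args) {
        int nCpu = Runtime.getRuntime().availableProcessors();
        // Example: tasks that spend half as much time waiting as computing,
        // with a 100% CPU utilization target.
        System.out.println("suggested pool size: " + optimalPoolSize(nCpu, 1.0, 0.5));
    }
}
```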
Custom thread pool
The pools created through the Executors factory methods (newFixedThreadPool, newCachedThreadPool, and so on) are in fact preconfigured ThreadPoolExecutor instances. If none of them meets your needs, you can configure one yourself:
```java
public ThreadPoolExecutor(int corePoolSize,       // base size: the pool size when no tasks are running
                          int maximumPoolSize,    // upper bound on the number of threads
                          long keepAliveTime,     // max idle time; surplus threads are reclaimed after it
                          TimeUnit unit,          // unit for keepAliveTime
                          BlockingQueue<Runnable> workQueue) { // queue holding waiting tasks
    // ...
}
```
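For instance, Executors.newFixedThreadPool(n) is just such a preconfigured instance; per the JDK source it amounts to the following (the wrapper method name is mine):

```java
import java.util.concurrent.*;

public class FixedPoolEquivalent {
    // What Executors.newFixedThreadPool(n) builds internally:
    static ExecutorService fixedPool(int n) {
        return new ThreadPoolExecutor(
                n, n,                                 // core size == max size: a fixed pool
                0L, TimeUnit.MILLISECONDS,            // no surplus threads, so no idle timeout needed
                new LinkedBlockingQueue<Runnable>()); // unbounded work queue
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = fixedPool(2);
        System.out.println(pool.submit(() -> 21 * 2).get()); // prints 42
        pool.shutdown();
    }
}
```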
The task queue: buffering tasks
As mentioned earlier, the advantage of a thread pool is that it bounds the number of threads, so the server is not dragged down by creating too many of them. When concurrent requests increase, we no longer blindly create threads; each request simply becomes a task cached in the work queue. The new problem is that when concurrent requests keep growing faster than the worker threads can digest them, tasks accumulate in the queue, and an unbounded backlog also consumes server resources.
When the server is genuinely overloaded, we have to shed some requests to keep it from being overwhelmed. Letting go when you must let go matters; the most important thing is for the server to protect itself when it truly cannot keep up.
When creating a thread pool, you can specify the task queue yourself:
Unbounded queue: no capacity is set when the queue is created, so tasks can accumulate without limit;
Bounded queue: a capacity is passed to the constructor;
Synchronous handoff: SynchronousQueue. Strictly speaking it is not really a queue, just a mechanism for handing data between threads. When a task arrives, it is handed directly to a thread in the pool; if no thread is free, a new one is created, and if no thread can be created, the task is rejected. The essence of a synchronous handoff is that it never buffers tasks: either the pool is large enough, or the task is rejected outright. It suits scenarios where the pool may grow without bound or where task surges will not occur.
You can use a PriorityBlockingQueue as the work queue to have tasks executed in priority order.
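The three queue choices above can be sketched as follows (the pool parameters are arbitrary examples, and the helper names are mine; the handoff configuration mirrors what newCachedThreadPool uses):

```java
import java.util.concurrent.*;

public class QueueChoices {
    // Unbounded queue: capacity defaults to Integer.MAX_VALUE; tasks pile up without limit.
    static ExecutorService withUnboundedQueue() {
        return new ThreadPoolExecutor(2, 2, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
    }

    // Bounded queue: at most 100 waiting tasks; beyond that the saturation policy applies.
    static ExecutorService withBoundedQueue() {
        return new ThreadPoolExecutor(2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100));
    }

    // Synchronous handoff: no buffering; a task is either handed to a thread or rejected.
    static ExecutorService withHandoff() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());
    }
}
```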
Saturation policies: what to do when the task queue is full
The thread pool provides several built-in handling policies:
Abort: the default; terminates by throwing RejectedExecutionException to the caller;
Discard: the task is silently discarded. Note "silently": the client will never know that the task it submitted was never run;
DiscardOldest: discards the task that would otherwise execute next. If a priority queue is used, the discarded task will be the highest-priority one, so DiscardOldest should not be combined with a priority queue.
CallerRuns: runs the task directly on the client thread; whichever thread submitted the task to the pool executes it itself. The benefit is that the client shares the pool's load, and more importantly the client is kept busy running the task and pauses submitting new ones, giving the pool a chance to catch up. Of course, while the client is helping the pool, its own work goes undone: if the client code is a web request handler, it stops accepting new requests, which back up in the TCP accept queue; if the system still cannot recover and the overload keeps growing, the pressure eventually propagates out to the TCP callers. The subtext of this design is "we have done our best": the system does not crash in a hard landing, and if the request pressure eases, it can still pull through.
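A runnable sketch of that pushback (pool sizes, task count, and names are arbitrary choices of mine): a one-thread pool with a one-slot queue saturates quickly, and CallerRunsPolicy makes the submitting thread execute the overflow itself, throttling submission.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class CallerRunsDemo {
    // Returns how many of the 10 submitted tasks ended up running on the submitting thread.
    static int tasksRunOnCaller() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1),         // tiny bounded queue forces saturation
                new ThreadPoolExecutor.CallerRunsPolicy());  // overflow is pushed back to the caller

        AtomicInteger ranOnCaller = new AtomicInteger();
        String caller = Thread.currentThread().getName();
        for (int i = 0; i < 10; i++) {
            pool.execute(() -> {
                if (Thread.currentThread().getName().equals(caller)) {
                    ranOnCaller.incrementAndGet();  // this task was executed by the submitter
                }
                try { Thread.sleep(10); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return ranOnCaller.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("tasks run on the caller thread: " + tasksRunOnCaller());
    }
}
```

While a caller-run task executes, the submission loop is paused, which is exactly the throttling effect described above.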
Thread Factory: Customizing Threads
You can implement ThreadFactory yourself to customize the threads the pool creates, attaching whatever extra information you need (names, daemon status, priority, an UncaughtExceptionHandler).
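One common sketch (class name and prefix are illustrative): a factory that gives pool threads recognizable names, which makes thread dumps and logs much easier to read.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger counter = new AtomicInteger(1);

    public NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + "-" + counter.getAndIncrement());
        t.setDaemon(false);
        // A custom UncaughtExceptionHandler or priority could also be set here.
        return t;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2, new NamedThreadFactory("worker"));
        // The task reports the name of the pool thread that ran it, e.g. "worker-1".
        System.out.println(pool.submit(() -> Thread.currentThread().getName()).get());
        pool.shutdown();
    }
}
```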
Extending ThreadPoolExecutor
ThreadPoolExecutor exposes several lifecycle hook methods: beforeExecute, afterExecute, and terminated. These can be overridden to add statistics gathering and monitoring.
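A sketch in the spirit of the book's timing example (class name and reporting format are mine): beforeExecute records a per-thread start time, afterExecute accumulates elapsed time, and terminated reports the average once the pool shuts down.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class TimingThreadPool extends ThreadPoolExecutor {
    private final ThreadLocal<Long> startTime = new ThreadLocal<>();
    private final AtomicLong numTasks = new AtomicLong();
    private final AtomicLong totalTime = new AtomicLong();

    public TimingThreadPool(int size) {
        super(size, size, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        startTime.set(System.nanoTime()); // runs on the worker thread, just before the task
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        try {
            long elapsed = System.nanoTime() - startTime.get();
            numTasks.incrementAndGet();
            totalTime.addAndGet(elapsed);
        } finally {
            super.afterExecute(r, t);
        }
    }

    @Override
    protected void terminated() {
        try {
            System.out.printf("avg task time: %d ns%n",
                    totalTime.get() / Math.max(1, numTasks.get()));
        } finally {
            super.terminated();
        }
    }

    public long taskCount() {
        return numTasks.get();
    }
}
```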
Parallelizing recursive algorithms
What kind of code can be parallelized?
For example, a for loop that does some work in each iteration, where the iterations are independent of one another: those iterations can actually be done in parallel (a plain for loop executes them serially).
This gives us a new way to think about code we have written many times over: much of it has a more interesting parallel implementation.
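As an illustrative sketch (method names and the sum-of-squares workload are mine): the same independent-iteration loop written serially and then parallelized by turning each iteration into a task submitted to an executor.

```java
import java.util.*;
import java.util.concurrent.*;

public class ParallelLoop {
    // Serial version: iterations run one after another.
    static long sumOfSquaresSerial(List<Integer> xs) {
        long sum = 0;
        for (int x : xs) sum += (long) x * x; // each iteration is independent
        return sum;
    }

    // Parallel version: each independent iteration becomes a task.
    static long sumOfSquaresParallel(List<Integer> xs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Future<Long>> results = new ArrayList<>();
        for (int x : xs) {
            results.add(pool.submit(() -> (long) x * x)); // one task per iteration
        }
        long sum = 0;
        for (Future<Long> f : results) sum += f.get(); // collect in original order
        pool.shutdown();
        return sum;
    }

    public static void main(String[] args) throws Exception {
        List<Integer> xs = Arrays.asList(1, 2, 3, 4);
        // Both versions compute the same result: 1 + 4 + 9 + 16 = 30.
        System.out.println(sumOfSquaresSerial(xs) + " " + sumOfSquaresParallel(xs));
    }
}
```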
Java Concurrency in Practice reading notes (5): Using thread pools