Common concurrency scenarios: the thread pool
The thread pool is the most common concurrency scenario, and it is clear that using a thread pool properly can effectively improve throughput.
The most common, and also fairly complex, scenario is the thread pool of a web container. The web container uses a thread pool to process HTTP requests synchronously or asynchronously, which also allows HTTP connections to be reused effectively and reduces the cost of acquiring resources. We generally consider HTTP requests expensive and resource-intensive, so the thread pool plays a very important role here.
The principle and usage of the thread pool are discussed in detail in the thread pool chapter, which also notes that the thread pool's configuration and parameters have an enormous effect on performance. Even so, the power of the thread pool does not grow without limit: it is constrained by resources (machine performance, network bandwidth, and so on), by the services it depends on, and by the responsiveness of clients. Once the thread pool hits a bottleneck, both performance and throughput drop significantly.
Upgrading the machine or adding more threads does not necessarily increase throughput. Under high concurrency the machine's load rises sharply, and both the stability of the machine and the reliability of the service decline.
Nevertheless, the thread pool remains an effective way to improve throughput: with appropriate parameters it makes full use of the available resources and raises resource utilization.
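As a concrete illustration of "appropriate parameters", here is a minimal sketch of configuring a `ThreadPoolExecutor` explicitly rather than relying on the `Executors` factory defaults; the specific numbers (core size, queue capacity, and so on) are illustrative assumptions, not recommendations from the original text:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    // Every parameter spelled out: 2 core threads, at most 4 threads,
    // extra threads die after 60s idle, a bounded queue of 10 tasks, and
    // CallerRunsPolicy so a full pool applies back-pressure to the caller
    // instead of throwing RejectedExecutionException.
    static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(10),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = newPool();
        AtomicInteger done = new AtomicInteger();
        // Submit more tasks than pool + queue can hold; the rejection
        // policy runs the overflow on the submitting thread.
        for (int i = 0; i < 20; i++) {
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("completed=" + done.get()); // all 20 tasks ran
    }
}
```

The bounded queue plus rejection policy is exactly where the "constrained by resources" trade-off above becomes visible: an unbounded queue hides overload until the machine falls over, while a bounded one surfaces it immediately.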
Task Queue
Besides the thread pool, the task queue is another good concurrency tool, suited to messier concurrency scenarios. The JDK ships a number of queue implementations that are easy to use, improve productivity, and can be combined to fit different situations. Even inside the thread pool, a task queue is used to hold the backlog of tasks and balance resource consumption.
A thread-safe task queue effectively smooths the machine's load, absorbing the instability caused by traffic peaks and fluctuations and improving service reliability. Draining work through a task queue also makes it easier to gather statistics on and analyze the state of the service.
Task queues can also pass data between threads, helping tasks be processed in parallel. The classic "producer-consumer" model, for example, effectively improves the parallel processing capability of multiple threads, and is particularly effective in services with large IO latencies. One of my favorite cases: one thread pushes large amounts of data into a fixed-size task queue, pausing once the queue is full, while several other threads take data from the queue and consume it. This turns serial "produce-then-consume" into parallel "produce-and-consume". In practice it has proven to save an enormous amount of processing time.
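The case above can be sketched with a bounded `BlockingQueue`: `put()` blocks when the queue is full (the producer "pauses"), and a poison-pill sentinel tells the consumers to stop. The item count and consumer count are arbitrary assumptions for the demo:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.LongAdder;

public class ProducerConsumer {
    static final int POISON = -1; // sentinel telling a consumer to stop

    // One producer fills a fixed-size queue; several consumers drain it
    // in parallel. Returns the sum of all consumed items.
    static long run(int items, int consumers) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);
        LongAdder sum = new LongAdder();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= items; i++) queue.put(i); // blocks when full
                for (int i = 0; i < consumers; i++) queue.put(POISON);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread[] workers = new Thread[consumers];
        for (int c = 0; c < consumers; c++) {
            workers[c] = new Thread(() -> {
                try {
                    for (int v; (v = queue.take()) != POISON; ) sum.add(v);
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
            workers[c].start();
        }
        producer.start();
        producer.join();
        for (Thread w : workers) w.join();
        return sum.sum();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(1000, 3)); // 1+2+...+1000 = 500500
    }
}
```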
Asynchronous processing
The thread pool is itself a form of asynchronous processing, and beyond it, asynchrony is widely used to improve the processing speed of a service. A typical AOP example is logging via aspects: if we want to ship logs to a remote collector, we obviously don't want log collection to slow down the service itself, so the collection step is handled asynchronously.
Today a large number of open-source components prefer asynchronous processing to improve IO efficiency; for operations that do not need a synchronous return value, switching to asynchronous processing can effectively improve throughput.
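The log-collection example above can be sketched as follows. This is a minimal, assumed design, not the original author's implementation: a single-threaded executor stands in for the asynchronous pipeline, and an in-memory list stands in for the remote collector.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncLogger {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    // Stand-in for a remote log collector; a real one would do network IO.
    private final List<String> shipped = new CopyOnWriteArrayList<>();

    // The caller only enqueues the entry and returns immediately;
    // the slow "shipping" work happens on the background thread.
    public void log(String entry) {
        worker.execute(() -> shipped.add(entry));
    }

    // Drain the queue and stop the worker; returns what was shipped.
    public List<String> close() throws InterruptedException {
        worker.shutdown();
        worker.awaitTermination(5, TimeUnit.SECONDS);
        return shipped;
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncLogger logger = new AsyncLogger();
        for (int i = 0; i < 5; i++) logger.log("event-" + i);
        System.out.println(logger.close().size()); // prints 5
    }
}
```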
Of course, asynchrony is not always satisfactory; it brings its own problems: the extra complexity of an asynchronous design, how to recover when a worker thread dies, what the failure-handling strategy is, what to do when messages are produced faster than they are consumed, and how to drain the asynchronous processing logic cleanly when the program shuts down. All of this increases the complexity of the system.
Even though a large number of services and operations are handled asynchronously, a safety mechanism is clearly needed to guarantee the logical correctness of the asynchronous path. Asynchronous processing is the right choice when the asynchronous task is not especially critical, that is, when the primary business must not crash because of a logic error in a subordinate one.
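The shutdown question raised above has a well-known answer in the JDK: the two-phase shutdown idiom recommended by the `ExecutorService` Javadoc. The timeouts here are illustrative assumptions:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown {
    // Phase 1: stop accepting new tasks and let in-flight tasks finish.
    // Phase 2: after a bounded wait, force-cancel whatever remains.
    static void shutdownAndAwait(ExecutorService pool) {
        pool.shutdown(); // no new tasks accepted
        try {
            if (!pool.awaitTermination(2, TimeUnit.SECONDS)) {
                List<Runnable> dropped = pool.shutdownNow(); // cancel queued tasks
                System.out.println("dropped " + dropped.size() + " queued tasks");
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt(); // preserve interrupt status
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 4; i++) pool.execute(() -> { });
        shutdownAndAwait(pool);
        System.out.println(pool.isShutdown()); // prints true
    }
}
```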
Synchronization
Concurrent operations also need to maintain data consistency, which more or less involves synchronization. Using atomic operations correctly, and choosing properly between exclusive locks and read-write locks, is a real challenge.
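The two choices mentioned above can be sketched side by side: a lock-free counter built on an atomic class, and a read-mostly value guarded by a read-write lock. The thread and iteration counts are arbitrary demo values:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SyncChoices {
    // Atomic operation: a CAS-based counter, no mutual exclusion needed.
    static final AtomicLong hits = new AtomicLong();

    // Read-write lock: many concurrent readers, writers get exclusive access.
    static final ReadWriteLock lock = new ReentrantReadWriteLock();
    static long config = 0;

    static long readConfig() {
        lock.readLock().lock();
        try { return config; } finally { lock.readLock().unlock(); }
    }

    static void writeConfig(long v) {
        lock.writeLock().lock();
        try { config = v; } finally { lock.writeLock().unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) hits.incrementAndGet();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(hits.get()); // 40000: no lost updates
    }
}
```

A plain `long` incremented by four threads would lose updates; the atomic version does not, and for a simple counter it is cheaper than taking a lock.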
Coordination and communication between threads, especially state synchronization, are harder still. Look at the implementation of the thread pool, ThreadPoolExecutor: just tracking the execution state of its threads introduces many synchronized operations. As the number of threads grows, the cost of synchronization rises, and deadlocks become possible.
However, multithreaded synchronization within a single JVM is relatively easy to control, and the JDK provides a number of tools to facilitate it, such as Lock, Condition, CountDownLatch, CyclicBarrier, Semaphore, and Exchanger.
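As a small taste of those tools, here is a minimal sketch using two `CountDownLatch` instances: one as a starting gun that releases all workers at once, and one that lets the coordinator wait for all of them to finish. The worker count is an arbitrary demo value:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    // Returns how many of the n workers completed.
    static int runWorkers(int n) throws InterruptedException {
        CountDownLatch start = new CountDownLatch(1); // starting gun
        CountDownLatch done = new CountDownLatch(n);  // completion barrier
        AtomicInteger finished = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                try {
                    start.await(); // block until all workers are released together
                    finished.incrementAndGet();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown();
                }
            }).start();
        }
        start.countDown(); // release every worker simultaneously
        done.await();      // wait until all workers have finished
        return finished.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorkers(5)); // prints 5
    }
}
```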
Distributed locks
Distributed concurrency problems are much harder to handle; by the CAP theorem there is essentially no flawless solution. Coordinating distributed resources with a distributed lock is a good choice: Google's Chubby distributed lock service (which BigTable relies on), ZooKeeper's distributed locks, or even a simple pseudo-distributed lock built on Memcache's add operation or Redis's SETNX operation can all solve this class of problem.
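The SETNX-style pseudo-lock mentioned above rests on one primitive: "set this key only if it is not already set". As a local sketch of those semantics (not a real Redis or Memcache client; an in-memory map's `putIfAbsent` stands in for the shared store, so this demonstrates the idea but is not actually distributed):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SetnxLock {
    // Stand-in for the shared store: putIfAbsent has the same
    // "set only if absent" semantics as Redis SETNX or Memcache add.
    private final ConcurrentMap<String, String> store = new ConcurrentHashMap<>();

    // Try to take the lock; only the first caller for a key succeeds.
    public boolean tryLock(String lockKey, String owner) {
        return store.putIfAbsent(lockKey, owner) == null;
    }

    // Release only if we still own the lock, so one node cannot
    // accidentally release a lock held by another.
    public boolean unlock(String lockKey, String owner) {
        return store.remove(lockKey, owner);
    }

    public static void main(String[] args) {
        SetnxLock lock = new SetnxLock();
        System.out.println(lock.tryLock("job:42", "node-A")); // true: acquired
        System.out.println(lock.tryLock("job:42", "node-B")); // false: already held
        lock.unlock("job:42", "node-A");
        System.out.println(lock.tryLock("job:42", "node-B")); // true: after release
    }
}
```

A production version would also need an expiry on the key, since a holder that crashes without unlocking would otherwise block everyone forever; this is one of the failure modes that make distributed locks genuinely hard.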
In layman's Java Concurrency (38): Concurrency Summary Part 2, common concurrency scenarios