The traffic control gate: LimitLatch, the socket connection count limiter
As a web server, Tomcat must process and respond to requests from every client. But request traffic has peaks, and a single machine has physical limits, so to keep the server from being overwhelmed we need protective measures. Note that "traffic" here mainly means the number of socket connections: we control traffic by controlling how many connections are open. One effective approach is traffic control, which works like a gate at the entrance. The size of the gate determines how much traffic flows through; once the maximum is reached, the gate closes and stops accepting connections until a channel frees up.
How might we build such a traffic controller? The concurrency framework AQS (AbstractQueuedSynchronizer) offers a way: control admission through a synchronizer's state. (If you have forgotten how AQS works, refer back to the earlier multithreading section.) The idea is to start a count at zero, increment it each time a socket arrives, and decrement it each time a socket is closed. Once the count would exceed the limit, the AQS mechanism stops accepting sockets and waits until a socket finishes processing and is released. We split the work into two parts: build a controller that supports counting, then embed the controller into the processing flow.
① The controller. It follows the pattern AQS recommends for custom synchronizers, except that instead of using AQS's built-in state variable it introduces a separate count variable of type AtomicLong; the essence is the same, so don't let that distract you. The controller achieves its effect through the countUpOrAwait and countDown methods.
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class LimitLatch {

    private class Sync extends AbstractQueuedSynchronizer {
        public Sync() {}

        @Override
        protected int tryAcquireShared(int ignored) {
            long newCount = count.incrementAndGet();
            if (newCount > limit) {
                count.decrementAndGet();  // over the limit: roll back the increment
                return -1;                // negative => the caller queues and waits
            } else {
                return 1;                 // admitted
            }
        }

        @Override
        protected boolean tryReleaseShared(int arg) {
            count.decrementAndGet();      // one socket closed
            return true;
        }
    }

    private final Sync sync;
    private final AtomicLong count;
    private volatile long limit;

    public LimitLatch(long limit) {
        this.limit = limit;
        this.count = new AtomicLong(0);
        this.sync = new Sync();
    }

    public long getCount() {
        return count.get();
    }

    public void countUpOrAwait() throws InterruptedException {
        sync.acquireSharedInterruptibly(1);
    }

    public long countDown() {
        sync.releaseShared(0);
        long result = getCount();
        return result;
    }
}
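To see the gate in action, here is a small sketch that exercises the latch with a limit of 2: two acquisitions succeed immediately, a third blocks until countDown releases a slot. The class name LimitLatchDemo is ours, and the latch is inlined as a compact copy of the class above so the sketch compiles on its own.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class LimitLatchDemo {

    // Compact copy of the LimitLatch above, inlined so this sketch runs standalone.
    static class LimitLatch {
        private final AtomicLong count = new AtomicLong(0);
        private final long limit;
        private final Sync sync = new Sync();

        LimitLatch(long limit) { this.limit = limit; }

        private class Sync extends AbstractQueuedSynchronizer {
            @Override
            protected int tryAcquireShared(int ignored) {
                if (count.incrementAndGet() > limit) {
                    count.decrementAndGet();   // over the limit: roll back
                    return -1;                 // negative => caller queues and waits
                }
                return 1;
            }
            @Override
            protected boolean tryReleaseShared(int arg) {
                count.decrementAndGet();
                return true;
            }
        }

        void countUpOrAwait() throws InterruptedException {
            sync.acquireSharedInterruptibly(1);
        }
        long countDown() {
            sync.releaseShared(0);
            return count.get();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LimitLatch latch = new LimitLatch(2);
        latch.countUpOrAwait();          // connection 1 admitted, count = 1
        latch.countUpOrAwait();          // connection 2 admitted, count = 2 (at limit)

        Thread third = new Thread(() -> {
            try {
                latch.countUpOrAwait();  // third connection: blocks at the gate
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        third.start();
        Thread.sleep(300);               // give the thread time to park inside AQS
        System.out.println("third waiting: " + (third.getState() == Thread.State.WAITING));

        latch.countDown();               // one connection closes: the gate opens
        third.join(2000);
        System.out.println("third admitted: " + !third.isAlive());
    }
}
```

Note how the blocked thread is woken not by polling but by AQS unparking it from inside releaseShared, which is exactly what makes the gate cheap while it waits.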
② Embedding the controller into the process. The pseudocode is below. The counter is incremented before a socket is accepted, and the socket's data processing is handed off to another thread. Processing takes time; if another 1,000 request sockets arrive during that window, the 1,001st request will block, and it is woken up only when a worker thread in the pool finishes processing a socket and executes countDown. The default size in Tomcat is 1,000.
LimitLatch limitLatch = new LimitLatch(1000);
// create a ServerSocket
limitLatch.countUpOrAwait();               // blocking may occur here
Socket socket = serverSocket.accept();
// hand the socket to an idle worker thread from the pool;
// after processing, close the socket and execute limitLatch.countDown()
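The pseudocode above can be turned into a runnable sketch. This is not Tomcat's actual Acceptor code; to keep it self-contained, a java.util.concurrent.Semaphore stands in for LimitLatch (acquire plays the role of countUpOrAwait, release the role of countDown), and the class name GatedServerSketch, the thread pool size, and the loopback self-test are all illustrative assumptions.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class GatedServerSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for LimitLatch: a Semaphore with 1,000 permits gives the
        // same admit-before-accept / release-after-close behavior.
        Semaphore gate = new Semaphore(1000);
        ServerSocket serverSocket = new ServerSocket(0);   // bind any free port
        ExecutorService pool = Executors.newFixedThreadPool(4);

        Thread acceptor = new Thread(() -> {
            try {
                while (true) {
                    gate.acquire();                        // countUpOrAwait(): may block here
                    Socket socket = serverSocket.accept();
                    pool.execute(() -> {
                        try (Socket s = socket;
                             OutputStream out = s.getOutputStream()) {
                            out.write("ok\n".getBytes());  // "process" the request
                        } catch (IOException ignored) {
                            // client went away; nothing to do
                        } finally {
                            gate.release();                // countDown() after closing
                        }
                    });
                }
            } catch (Exception e) {
                // server socket closed: leave the accept loop
            }
        });
        acceptor.setDaemon(true);
        acceptor.start();

        // Drive one client connection through the gate to show the flow.
        try (Socket client = new Socket("localhost", serverSocket.getLocalPort())) {
            int b = client.getInputStream().read();
            System.out.println("server replied: " + (b == 'o'));
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.SECONDS);
        serverSocket.close();
    }
}
```

The key design point survives the substitution: the permit is taken in the acceptor thread before accept() is called, and released in the worker's finally block, so a slow handler can never leak a slot and the acceptor naturally parks once all slots are in use.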