As a web server, Tomcat must respond to every client request, but request traffic arrives in peaks and a single machine has physical limits, so we need some protection to keep the server from being overwhelmed. The "traffic" here deserves a brief explanation: it really refers to socket connections, so traffic is controlled by controlling the number of socket connections. One effective approach is flow control, which works like a gate placed at the mouth of a stream: the size of the gate determines the size of the flow, and once the maximum flow is reached the gate closes and stops accepting new connections until a channel becomes free again.
No idea how to build such a flow controller? Consider the AQS framework from the concurrency package: the effect can be achieved by controlling the synchronizer's state (if you have forgotten the AQS-related material, please go back to the earlier multithreading chapter). The concrete idea is to initialize a counter, add 1 each time a socket arrives and subtract 1 each time a socket is closed; once an arrival would push the counter past the limit, the AQS machinery stops accepting sockets until one of them is released. We split the idea into two parts: first create a controller that supports counting, then embed that controller into the processing flow.
① The controller. It follows the recommended AQS practice of defining a custom synchronizer, but instead of using the state variable that AQS itself provides it introduces an AtomicLong count variable for counting; the essence is the same, so do not get too hung up on that difference. The controller achieves its effect mainly through two methods, countUpOrAwait and countDown.
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class LimitLatch {

    private class Sync extends AbstractQueuedSynchronizer {

        public Sync() {}

        @Override
        protected int tryAcquireShared(int ignored) {
            // Optimistically count the new connection, then roll back if over the limit
            long newCount = count.incrementAndGet();
            if (newCount > limit) {
                count.decrementAndGet();
                return -1;   // over the limit: the caller is queued and blocks
            } else {
                return 1;    // under the limit: the caller proceeds
            }
        }

        @Override
        protected boolean tryReleaseShared(int arg) {
            count.decrementAndGet();
            return true;     // a slot has been freed, so wake a waiting thread
        }
    }

    private final Sync sync;
    private final AtomicLong count;
    private volatile long limit;

    public LimitLatch(long limit) {
        this.limit = limit;
        this.count = new AtomicLong(0);
        this.sync = new Sync();
    }

    public long getCount() {
        return count.get();
    }

    public void countUpOrAwait() throws InterruptedException {
        sync.acquireSharedInterruptibly(1);
    }

    public long countDown() {
        sync.releaseShared(0);
        long result = getCount();
        return result;
    }
}
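To see the latch in action, here is a minimal, hypothetical test (not part of Tomcat; the class name LimitLatchDemo and the limit of 2 are my own choices) in which the third countUpOrAwait() call blocks until countDown() frees a slot:

public class LimitLatchDemo {
    public static void main(String[] args) throws InterruptedException {
        LimitLatch latch = new LimitLatch(2);

        latch.countUpOrAwait();   // 1st connection: passes
        latch.countUpOrAwait();   // 2nd connection: passes, latch is now full

        // A third caller blocks until someone calls countDown()
        Thread third = new Thread(() -> {
            try {
                latch.countUpOrAwait();
                System.out.println("3rd connection admitted");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        third.start();

        Thread.sleep(500);        // give the third thread time to block
        System.out.println("releasing one slot");
        latch.countDown();        // frees a slot and wakes the blocked thread
        third.join();
    }
}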
② Embedding the controller into the processing flow, as in the pseudo-code below. The counter is incremented before a socket is accepted, and the socket's data processing is handed off to another thread, which takes some time. If 1000 sockets are already in flight during that window, the next (1001st) request blocks at countUpOrAwait; it is woken up once a worker thread in the pool finishes processing one of the sockets and performs the countDown operation. The pseudo-code uses a limit of 1000 (Tomcat exposes this limit through the connector's maxConnections attribute).
LimitLatch limitLatch = new LimitLatch(1000);
// create the ServerSocket
limitLatch.countUpOrAwait();            // may block here
Socket socket = serverSocket.accept();
// hand the socket to an idle worker thread from the pool; when processing
// finishes and the socket is closed, the worker executes limitLatch.countDown()
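For illustration, a runnable sketch of that loop might look as follows. It uses the LimitLatch class above together with a plain ServerSocket and a fixed thread pool, which is a simplification of Tomcat's actual Acceptor and worker-thread machinery, not a copy of it; the port 8080 and the pool size of 10 are arbitrary assumptions.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LimitedServer {
    public static void main(String[] args) throws IOException {
        LimitLatch limitLatch = new LimitLatch(1000);
        ServerSocket serverSocket = new ServerSocket(8080);
        ExecutorService workers = Executors.newFixedThreadPool(10);

        while (true) {
            try {
                limitLatch.countUpOrAwait();      // blocks once 1000 sockets are in flight
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            Socket socket = serverSocket.accept(); // error handling of accept() elided
            workers.execute(() -> {
                try {
                    // read the request and write a response on the socket here
                } finally {
                    try { socket.close(); } catch (IOException ignored) {}
                    limitLatch.countDown();        // frees a slot, may wake the blocked acceptor
                }
            });
        }
    }
}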
Flow-control gate: LimitLatch, the socket connection limiter.