Review
In the previous article, we split the ServerSocket-based server into four parts: Connector, Processor, Request, and Response, implementing the main functions of a simple server. The Connector is responsible for creating the ServerSocket and accepting sockets, the Processor is responsible for parsing requests, and Request and Response wrap the socket's input and output streams respectively. This article further optimizes the service container by studying the default connector in Tomcat 4.
Notes on "In-Depth Analysis of Tomcat", part 3: the basic container model
Source code for this article on git: analysis of the default connector
The Tomcat connector consists of two parts: the connector and the processor. In the previous project, the connector and processor had a 1:1 relationship, which means only one request can actually be handled at a time. Under this approach, the total time to handle a request is necessarily time(Connector) + time(Processor).
Overall parsing time
The first conceivable solution is to multithread the connector. Since the connector is responsible for accepting socket connections on the ServerSocket, we can create multiple connector threads, each mapped to its own port, which requires port mapping. Weibo used this technique early on under the name MPSS (multi-port single server: one server providing a single service on multiple ports); today's virtualization technology is a development of the same idea.
Multi-connector parsing
The second scheme is to multithread the processor. It is not hard to see that the processor's parsing work consumes most of the time in the whole cycle.
Processor parsing is divided into:
1. Request head processing, including the request method, URL, and HTTP protocol version;
2. Dispatching to the corresponding handler (static resource / servlet);
3. Request parameter parsing;
4. Static resource loading / dynamic handling, and request body parsing;
5. Response packaging.
Although we adopted the lazy-load idea and split parsing into three stages (steps 1, 3, and 4 above), the processor's IO still accounts for a large share of the total time compared with socket accept and the other parsing steps. Multithreading the processor therefore saves a lot of parsing time.
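The five steps above can be sketched as a minimal, self-contained pipeline. This operates on a raw request string instead of a real socket, and all class and method names are illustrative, not the actual Tomcat 4 API:

```java
// A minimal sketch of the five processing steps, assuming a raw HTTP/1.1
// request string stands in for the socket's input stream.
public class ProcessorPipelineSketch {

    static String process(String rawRequest) {
        // 1. Request head: method, URI, protocol version
        String requestLine = rawRequest.split("\r\n", 2)[0];
        String[] head = requestLine.split(" ");
        String method = head[0], uri = head[1], protocol = head[2];

        // 2. Dispatch: servlet vs. static resource
        boolean servlet = uri.startsWith("/servlet/");

        // 3./4. Parameters and the request body would be parsed lazily here,
        //       only when the chosen handler actually asks for them.

        // 5. Response packaging
        String body = servlet ? "servlet: " + uri : "static: " + uri;
        return protocol + " 200 OK\r\n\r\n" + body;
    }

    public static void main(String[] args) {
        System.out.println(process("GET /servlet/Hello HTTP/1.1\r\nHost: x\r\n\r\n"));
    }
}
```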
Multi-processor parsing
| Scheme | Advantages | Disadvantages |
| --- | --- | --- |
| Multi-connector | Simple to implement; good robustness; general multi-port performance | Low resource utilization; requires additional port-mapping configuration |
| Multi-processor | High resource utilization; flexible and robust | Higher implementation cost; single-port performance bottleneck; optimization and debugging cost |
Connector Model Design
Tomcat mainly uses the second scheme. To adopt it, the processor instances must first be managed, which involves two pieces of work: (1) By design the processor is stateless, so multiple processor threads can parse concurrently, and an instance is not released when parsing finishes, allowing reuse. To manage multiple processors, the natural solution is a pool model. (2) A processor in the pool is in one of two states, running or waiting: a waiting processor can accept a request, a running processor cannot, and when a run completes the processor returns its result and goes back to waiting.
In the Tomcat configuration file, the &lt;Connector&gt; tag has used maxThreads and minSpareThreads for pool management since Tomcat 6, giving unified scheduling of the processor module. In the older Tomcat 4, the processor pool was controlled with minProcessors and maxProcessors.
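For illustration, a minimal server.xml fragment showing both attribute sets (port numbers and values are examples only):

```xml
<!-- Tomcat 6/7 style: thread pool bounds on the <Connector> element -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200" minSpareThreads="10" />

<!-- Tomcat 4 style: processor pool bounds -->
<Connector port="8080"
           minProcessors="5" maxProcessors="20" />
```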
Connector->minSpareThreads
The minimum number of threads always kept running. This includes both active and idle threads. If not specified, the default of 10 is used. If an executor is associated with this connector, this attribute is ignored as the connector will execute tasks using the executor rather than an internal thread pool.
Connector->maxThreads
The maximum number of request processing threads to be created by this Connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200. If an executor is associated with this connector, this attribute is ignored as the connector will execute tasks using the executor rather than an internal thread pool. Note that if an executor is configured, any value set for this attribute will be recorded correctly but it will be reported (e.g. via JMX) as -1 to make clear that it is not used.
From the Tomcat 7 connector configuration documentation.
The Tomcat default connector uses java.util.Stack for pool management, covering instantiation, recycling, and retrieval.
```java
// Connector.java
protected int minProcessors = 5;    // minimum number of processors
protected int maxProcessors = 20;   // maximum number of processors
protected int curProcessors = 0;    // current number of processors
protected Stack<HttpProcessor> processors = new Stack<>();
```
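A simplified, self-contained sketch of this pool pattern follows: processors are created up to maxProcessors, parked on the Stack when idle, and reused. HttpProcessor is stubbed out here (the real Tomcat 4 class also starts a background thread), so treat this as an illustration of the pooling logic, not the actual implementation:

```java
import java.util.Stack;

// Simplified sketch of the Tomcat 4 processor pool pattern.
public class ConnectorPoolSketch {

    static class HttpProcessor { }           // stand-in for the real processor

    protected int minProcessors = 5;
    protected int maxProcessors = 20;
    protected int curProcessors = 0;
    protected final Stack<HttpProcessor> processors = new Stack<>();

    /** Pre-create the minimum number of processors at startup. */
    public void initialize() {
        while (curProcessors < minProcessors) {
            processors.push(newProcessor());
        }
    }

    /** Take an idle processor, growing the pool up to maxProcessors. */
    public HttpProcessor createProcessor() {
        synchronized (processors) {
            if (!processors.empty()) {
                return processors.pop();
            }
            if (maxProcessors < 0 || curProcessors < maxProcessors) {
                return newProcessor();
            }
            return null;                     // pool exhausted: caller must wait or drop
        }
    }

    /** Return a processor to the pool after it finishes a request. */
    public void recycle(HttpProcessor processor) {
        processors.push(processor);
    }

    private HttpProcessor newProcessor() {
        curProcessors++;
        return new HttpProcessor();
    }
}
```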
Connector Task delegation Model
To let the processor parse asynchronously, the processor implements the Runnable interface. The implementation is as follows:
```java
// Thread synchronization object
private Object threadSync = new Object();

/**
 * Multi-threaded implementation:
 * if available is true, a new request has arrived;
 * if available is false, there is no new request yet.
 */
@Override
public void run() {
    while (!stopped) {
        Socket socket = await();
        if (socket == null)
            continue;
        process(socket);
        // return this processor to the connector's pool
        httpConnector.recycle(this);
    }
    // wake up the waiting shutdown logic
    synchronized (threadSync) {
        threadSync.notifyAll();
    }
}
```
The connector and processor use a task delegation model to control parsing. In the task delegation model, a task is in one of two states: running or waiting.
```java
/**
 * Delegate work: called by the connector thread to hand a socket
 * to this processor.
 *
 * @param socket the accepted client socket
 */
public synchronized void assign(Socket socket) {
    // if the previous socket has not been picked up yet, wait
    while (available) {
        try {
            wait();
        } catch (InterruptedException e) {
        }
    }
    // store the new socket and notify the processor thread
    this.socket = socket;
    available = true;
    notifyAll();
}
```
```java
/**
 * Wait for work: called by the processor thread to take the next socket.
 *
 * @return the socket assigned by the connector
 */
private synchronized Socket await() {
    // if no new socket has been assigned, wait
    while (!available) {
        try {
            wait();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    // take the socket and notify the connector that we are free again
    Socket socket = this.socket;
    available = false;
    notifyAll();
    return socket;
}
```
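The assign/await handshake can be exercised end to end with a minimal, self-contained demo. Strings stand in for sockets here, and the class name is illustrative; the wait/notify logic mirrors the pattern described above:

```java
// Demo of the task delegation handshake: a "connector" thread hands
// work items to a "processor" thread through assign()/await().
public class DelegationDemo {

    private boolean available = false;
    private String socket;                    // stand-in for java.net.Socket

    public synchronized void assign(String socket) {
        while (available) {                   // previous work not yet taken
            try { wait(); } catch (InterruptedException e) { }
        }
        this.socket = socket;
        available = true;
        notifyAll();
    }

    public synchronized String await() {
        while (!available) {                  // nothing assigned yet
            try { wait(); } catch (InterruptedException e) { }
        }
        String s = this.socket;
        available = false;
        notifyAll();
        return s;
    }

    /** Runs one connector and one processor thread; returns what was handled. */
    public static String run() throws InterruptedException {
        DelegationDemo d = new DelegationDemo();
        StringBuilder handled = new StringBuilder();

        Thread processor = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                handled.append(d.await());    // blocks until assign() runs
            }
        });
        processor.start();

        for (int i = 0; i < 3; i++) {
            d.assign("req" + i);              // blocks until await() took the last one
        }
        processor.join();
        return handled.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());            // req0req1req2
    }
}
```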
Here we could also implement thread pool management through Executors.newFixedThreadPool and optimize further.
Fundamentally, however, the task delegation model is a pseudo-asynchronous interaction model.
Although the pool model separates the connector from the processor, and the backlog limits the number of waiting connections, pseudo-asynchronous interaction has fatal flaws. At the implementation level, reads and writes on blocking sockets are synchronous and blocking, which directly causes:
1. When server-side processing is slow, a write blocks its single channel;
2. A pseudo-asynchronous IO thread reading the response of a failed service node blocks synchronously on the input stream, for as long as 60 seconds;
3. If all available threads are blocked by the failed server, all subsequent IO messages queue up;
4. Because the thread pool is backed by a blocking queue, further submissions block once the queue is full;
5. Since the front end has only one connector thread accepting clients, once it is blocked behind the exhausted thread pool and full queue, new client request messages are rejected and clients see massive connection timeouts;
6. Since almost all connections have timed out, callers conclude the system has crashed, and it can no longer receive new request messages.
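Failure modes 2 through 4 can be demonstrated in miniature: one blocked task in a bounded pool stalls everything queued behind it. The class name and timings are illustrative (a short sleep stands in for the 60-second blocked read):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// One slow (blocked) task in a single-thread pool prevents queued work
// from ever starting, mirroring the pseudo-asynchronous IO failure mode.
public class BlockedPoolDemo {

    public static boolean secondTaskDone(long waitMillis) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // all threads busy
        pool.execute(() -> {
            // stand-in for a read blocked on a failed node (~60s in the text)
            try { Thread.sleep(60_000); } catch (InterruptedException e) { }
        });
        Future<?> queued = pool.submit(() -> { });   // queued behind the blocked task
        try {
            queued.get(waitMillis, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            return false;                            // still waiting in the queue
        } finally {
            pool.shutdownNow();                      // interrupt the sleeper
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(secondTaskDone(200));     // false
    }
}
```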
References:
"Netty Authoritative Guide", section on pseudo-asynchronous IO programming
Summary of Java network IO programming (BIO, NIO, AIO, with complete example code)