ASP.NET Core MVC Analysis: KestrelServer
KestrelServer is a high-performance web server built on top of libuv. Let's take a look at how it works. As mentioned in the previous article, the Main method of Program builds a WebHost. Here is the code:
public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();

    host.Run();
}
Note the UseKestrel call: it registers KestrelServer as the web server. When the WebHost is started, it calls IServer.Start to start the server, and since KestrelServer is the registered IServer, KestrelServer's Start method is what runs. Let's look at the main code in KestrelServer's Start method.
First, the Start method creates a KestrelEngine object. The specific code is as follows:
var engine = new KestrelEngine(new ServiceContext
{
    FrameFactory = context =>
    {
        return new Frame<TContext>(application, context);
    },
    AppLifetime = _applicationLifetime,
    Log = trace,
    ThreadPool = new LoggingThreadPool(trace),
    DateHeaderValueManager = dateHeaderValueManager,
    ServerOptions = Options
});
The KestrelEngine constructor takes a ServiceContext parameter. ServiceContext contains a FrameFactory which, as the name suggests, is a factory for Frame objects. What is a Frame? A Frame is the object that processes an HTTP request: every incoming request is handed to a Frame instance. For now just remember its role; we will see where it is instantiated later. Besides that, AppLifetime is an IApplicationLifetime object, which manages the lifecycle of the whole application.
public interface IApplicationLifetime
{
    /// <summary>
    /// Triggered when the application host has fully started and is about to wait
    /// for a graceful shutdown.
    /// </summary>
    CancellationToken ApplicationStarted { get; }

    /// <summary>
    /// Triggered when the application host is performing a graceful shutdown.
    /// Requests may still be in flight. Shutdown will block until this event completes.
    /// </summary>
    CancellationToken ApplicationStopping { get; }

    /// <summary>
    /// Triggered when the application host is performing a graceful shutdown.
    /// All requests should be complete at this point. Shutdown will block
    /// until this event completes.
    /// </summary>
    CancellationToken ApplicationStopped { get; }

    /// <summary>
    /// Requests termination of the current application.
    /// </summary>
    void StopApplication();
}
IApplicationLifetime exposes three points in time:
1. ApplicationStarted: The application has been started.
2. ApplicationStopping: The application is stopping.
3. ApplicationStopped: The application has stopped.
We can use the CancellationToken.Register method to register callbacks that run our own business logic at the three points above. The IApplicationLifetime instance is created in WebHost's Start method; if you want it in your own application code, you can obtain it directly through dependency injection.
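For example, one place to hook these points is Startup.Configure, where IApplicationLifetime can be injected as a parameter. The following is a minimal sketch (the log messages are just placeholders):

public void Configure(IApplicationBuilder app, IApplicationLifetime lifetime)
{
    lifetime.ApplicationStarted.Register(() =>
        Console.WriteLine("Application started."));

    lifetime.ApplicationStopping.Register(() =>
        Console.WriteLine("Application stopping; requests may still be in flight."));

    lifetime.ApplicationStopped.Register(() =>
        Console.WriteLine("Application stopped."));

    // ... configure the middleware pipeline as usual ...
}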
Let's return to the ServiceContext object. It also contains a Log object for tracing, which we typically use to follow the execution flow and find where a program goes wrong. And it contains ServerOptions, a KestrelServerOptions object that holds the server's configuration parameters:
1. ThreadCount: the number of service threads to start. Each request is handled on one of these threads, so multithreading increases throughput, but a larger thread count is not always better; by default it is derived from the number of CPU cores.
2. ShutdownTimeout: the amount of time after the server begins shutting down before connections are forcefully closed. During this period the server waits for in-flight requests to finish; requests that are still unfinished when it expires are terminated.
3. Limits: a KestrelServerLimits object containing limit parameters such as MaxRequestBufferSize and MaxResponseBufferSize.
Other parameters are not described one by one.
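For illustration, these options can be set on the WebHostBuilder shown at the beginning of the article. This is a minimal sketch against the option names listed above (the concrete values are arbitrary):

var host = new WebHostBuilder()
    .UseKestrel(options =>
    {
        options.ThreadCount = Environment.ProcessorCount;      // number of service threads
        options.ShutdownTimeout = TimeSpan.FromSeconds(10);    // grace period before connections are closed
        options.Limits.MaxRequestBufferSize = 1024 * 1024;     // 1 MB
        options.Limits.MaxResponseBufferSize = 64 * 1024;      // 64 KB
    })
    .UseStartup<Startup>()
    .Build();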
After the KestrelEngine object is created, engine.Start(threadCount) is called to create the KestrelThread service threads according to the configured thread count. The code is as follows:
public void Start(int count)
{
    for (var index = 0; index < count; index++)
    {
        Threads.Add(new KestrelThread(this));
    }

    foreach (var thread in Threads)
    {
        thread.StartAsync().Wait();
    }
}
The code above creates the specified number of KestrelThread objects and starts each one so it can wait for work. KestrelThread is a wrapper around a libuv event-loop thread.
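KestrelThread itself is not listed here, but conceptually it owns one dedicated thread running a loop and exposes a Post method that marshals callbacks onto that thread (the real implementation posts them to the libuv loop, and its Post also takes a state argument, as we will see in Listener.StartAsync below). A simplified, libuv-free analogue of the pattern, just to illustrate the idea:

using System;
using System.Collections.Concurrent;
using System.Threading;

public class EventLoopThread
{
    private readonly BlockingCollection<Action> _workQueue = new BlockingCollection<Action>();
    private readonly Thread _thread;

    public EventLoopThread()
    {
        // The "loop": take queued callbacks and run them one by one on this dedicated thread.
        _thread = new Thread(() =>
        {
            foreach (var work in _workQueue.GetConsumingEnumerable())
            {
                work();
            }
        });
    }

    public void Start() => _thread.Start();

    // Marshal a callback onto the loop thread, in the spirit of KestrelThread.Post.
    public void Post(Action callback) => _workQueue.Add(callback);

    public void Stop()
    {
        _workQueue.CompleteAdding();
        _thread.Join();
    }
}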
Once these threads are ready, the listening services are started:
foreach (var endPoint in listenOptions)
{
    try
    {
        _disposables.Push(engine.CreateServer(endPoint));
    }
    catch (AggregateException ex)
    {
        if ((ex.InnerException as UvException)?.StatusCode == Constants.EADDRINUSE)
        {
            throw new IOException($"Failed to bind to address {endPoint}: address already in use.", ex);
        }

        throw;
    }

    // If requested port was "0", replace with assigned dynamic port.
    _serverAddresses.Addresses.Add(endPoint.ToString());
}
The engine.CreateServer(endPoint) call above is what creates the listening service. Let's take a closer look at its details:
public IDisposable CreateServer(ListenOptions listenOptions)
{
    var listeners = new List<IAsyncDisposable>();

    try
    {
        // If only one service thread was created earlier, create a single Listener and start listening directly.
        if (Threads.Count == 1)
        {
            var listener = new Listener(ServiceContext);
            listeners.Add(listener);
            listener.StartAsync(listenOptions, Threads[0]).Wait();
        }
        else
        {
            // If there is more than one thread, the listeners communicate over a named pipe.
            var pipeName = (Libuv.IsWindows ? @"\\.\pipe\kestrel_" : "/tmp/kestrel_") + Guid.NewGuid().ToString("n");
            var pipeMessage = Guid.NewGuid().ToByteArray();

            // First create the primary listener. ListenerPrimary is a Listener, and it owns the listen socket.
            var listenerPrimary = new ListenerPrimary(ServiceContext);
            listeners.Add(listenerPrimary);

            // Start listening on the primary.
            listenerPrimary.StartAsync(pipeName, pipeMessage, listenOptions, Threads[0]).Wait();

            // Associate a ListenerSecondary with each remaining service thread. It talks to the primary
            // listener over the named pipe: after the primary accepts a connection, it uses the pipe to
            // hand the accepted socket to a specific thread.
            foreach (var thread in Threads.Skip(1))
            {
                var listenerSecondary = new ListenerSecondary(ServiceContext);
                listeners.Add(listenerSecondary);
                listenerSecondary.StartAsync(pipeName, pipeMessage, listenOptions, thread).Wait();
            }
        }

        return new Disposable(() =>
        {
            DisposeListeners(listeners);
        });
    }
    catch
    {
        DisposeListeners(listeners);
        throw;
    }
}
At this point the server can start accepting HTTP requests. As mentioned earlier, the listen socket is created in the Listener class (ListenerPrimary is also a Listener). Below is the Listener's StartAsync method:
public Task StartAsync(ListenOptions listenOptions, KestrelThread thread)
{
    ListenOptions = listenOptions;
    Thread = thread;

    var tcs = new TaskCompletionSource<int>(this);

    Thread.Post(state =>
    {
        var tcs2 = (TaskCompletionSource<int>)state;
        try
        {
            var listener = ((Listener)tcs2.Task.AsyncState);

            // Create the listen socket.
            listener.ListenSocket = listener.CreateListenSocket();

            // Start listening; ConnectionCallback is invoked when a connection request arrives.
            ListenSocket.Listen(Constants.ListenBacklog, ConnectionCallback, this);

            tcs2.SetResult(0);
        }
        catch (Exception ex)
        {
            tcs2.SetException(ex);
        }
    }, tcs);

    return tcs.Task;
}
ConnectionCallback is triggered whenever a connection request arrives; inside the callback the connection is processed and dispatched. The dispatch code is as follows:
protected virtual void DispatchConnection(UvStreamHandle socket)
{
    var connection = new Connection(this, socket);
    connection.Start();
}
This is the implementation in the Listener class. As we saw, a plain Listener is only created when the thread count is 1; otherwise a ListenerPrimary is created, and ListenerPrimary overrides DispatchConnection as follows:
protected override void DispatchConnection(UvStreamHandle socket)
{
    // Round robin: distribute connection requests across the threads in turn.
    var index = _dispatchIndex++ % (_dispatchPipes.Count + 1);
    if (index == _dispatchPipes.Count)
    {
        // Handle the connection on the primary listener itself.
        base.DispatchConnection(socket);
    }
    else
    {
        DetachFromIOCP(socket);
        var dispatchPipe = _dispatchPipes[index];

        // Pass the accepted socket to the chosen thread's listener over the named pipe.
        var write = new UvWriteReq(Log);
        write.Init(Thread.Loop);
        write.Write2(
            dispatchPipe,
            _dummyMessage,
            socket,
            (write2, status, error, state) =>
            {
                write2.Dispose();
                ((UvStreamHandle)state).Dispose();
            },
            socket);
    }
}
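To see the round-robin arithmetic on its own: with N secondary pipes the index cycles through 0..N, and the value N (one past the last pipe) means "handle it on the primary thread". A standalone illustration of the same modulo pattern (not Kestrel code):

var dispatchIndex = 0;
const int dispatchPipeCount = 3;   // e.g. three ListenerSecondary pipes

for (var connection = 0; connection < 8; connection++)
{
    var index = dispatchIndex++ % (dispatchPipeCount + 1);
    if (index == dispatchPipeCount)
    {
        Console.WriteLine($"connection {connection}: handled by the primary listener");
    }
    else
    {
        Console.WriteLine($"connection {connection}: forwarded to secondary pipe {index}");
    }
}
// Prints: pipe 0, pipe 1, pipe 2, primary, pipe 0, pipe 1, pipe 2, primary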
Once a connection request has been assigned to a thread, processing can begin. The code in ListenerSecondary is more involved, but it ultimately runs the following to create a Connection object:
var connection = new Connection(this, socket);
connection.Start();
Connection represents the current connection. Here is its constructor:
public Connection(ListenerContext context, UvStreamHandle socket) : base(context)
{
    _socket = socket;
    _connectionAdapters = context.ListenOptions.ConnectionAdapters;
    socket.Connection = this;
    ConnectionControl = this;

    ConnectionId = GenerateConnectionId(Interlocked.Increment(ref _lastConnectionId));

    if (ServerOptions.Limits.MaxRequestBufferSize.HasValue)
    {
        _bufferSizeControl = new BufferSizeControl(ServerOptions.Limits.MaxRequestBufferSize.Value, this);
    }

    // Create the input and output socket streams.
    Input = new SocketInput(Thread.Memory, ThreadPool, _bufferSizeControl);
    Output = new SocketOutput(Thread, _socket, this, ConnectionId, Log, ThreadPool);

    var tcpHandle = _socket as UvTcpHandle;
    if (tcpHandle != null)
    {
        RemoteEndPoint = tcpHandle.GetPeerIPEndPoint();
        LocalEndPoint = tcpHandle.GetSockIPEndPoint();
    }

    // Create the processing Frame. FrameFactory is the factory set up when the KestrelEngine was created.
    _frame = FrameFactory(this);

    _lastTimestamp = Thread.Loop.Now();
}
Connection.Start is then called to begin processing; it essentially hands the work over to the Frame, whose Start method is implemented as follows:
public void Start()
{
    Reset();

    // Kick off the request-processing task asynchronously.
    _requestProcessingTask =
        Task.Factory.StartNew(
            (o) => ((Frame)o).RequestProcessingAsync(),   // the actual processing method
            this,
            default(CancellationToken),
            TaskCreationOptions.DenyChildAttach,
            TaskScheduler.Default).Unwrap();

    _frameStartedTcs.SetResult(null);
}
We won't go through RequestProcessingAsync in full detail; here is its main code:
......

// _application is the hosting application mentioned in the previous article.
// First, CreateContext is called to create the HttpContext object.
var context = _application.CreateContext(this);

......

// Enter the processing pipeline.
await _application.ProcessRequestAsync(context).ConfigureAwait(false);

......
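For reference, _application implements the hosting layer's IHttpApplication<TContext> interface, which looks roughly like this; that is why the Frame only needs to create a context from its feature collection, run it through the pipeline, and dispose it afterwards:

public interface IHttpApplication<TContext>
{
    // Create a TContext (wrapping the HttpContext) from the connection's feature collection.
    TContext CreateContext(IFeatureCollection contextFeatures);

    // Run the request through the application's middleware pipeline.
    Task ProcessRequestAsync(TContext context);

    // Clean up the context once the request is finished.
    void DisposeContext(TContext context, Exception exception);
}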
After ProcessRequestAsync completes, the result is written back to the client. If anything here is wrong, corrections are welcome.