Boss and worker in Netty (server side)

Source: Internet
Author: User
Tags: Throwable


I have recently been summarizing how Dubbo uses Netty for communication, so I took the opportunity to look a little deeper into Netty itself. When starting the server side with Netty, we generally set up two ExecutorService objects, and by convention we reference them with two variables named boss and worker, so the concepts of boss and worker have been with me since I first touched Netty. This post introduces the boss and the worker; it does not cover the other parts of Netty.

In Netty the boss opens a company (opens a service port) to provide a service, and it has a group of workers to do the actual work. The boss keeps promoting the company's business and receiving customers (clients) who need it. When a customer comes to the boss and asks for the company's service, the boss assigns a worker to that customer, and that worker serves the customer (reads and writes) for the whole connection. When the company is busy, one worker may serve several customers at the same time. That is the relationship between boss and worker in Netty. Now let's see how Netty organizes the boss and the workers.

protected void doOpen() throws Throwable {
    NettyHelper.setNettyLoggerFactory();
    ExecutorService boss = Executors.newCachedThreadPool(new NamedThreadFactory("NettyServerBoss", true));
    ExecutorService worker = Executors.newCachedThreadPool(new NamedThreadFactory("NettyServerWorker", true));
    ChannelFactory channelFactory = new NioServerSocketChannelFactory(boss, worker,
            getUrl().getPositiveParameter(Constants.IO_THREADS_KEY, Constants.DEFAULT_IO_THREADS));
    bootstrap = new ServerBootstrap(channelFactory);

    final NettyHandler nettyHandler = new NettyHandler(getUrl(), this);
    channels = nettyHandler.getChannels();
    bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
        public ChannelPipeline getPipeline() {
            NettyCodecAdapter adapter = new NettyCodecAdapter(getCodec(), getUrl(), NettyServer.this);
            ChannelPipeline pipeline = Channels.pipeline();
            pipeline.addLast("decoder", adapter.getDecoder());
            pipeline.addLast("encoder", adapter.getEncoder());
            pipeline.addLast("handler", nettyHandler);
            return pipeline;
        }
    });
    // bind
    channel = bootstrap.bind(getBindAddress());
}

The code above is what Dubbo uses to open its service, and it is also the typical way to start a server when developing with Netty 3. First, thread pools are created for the boss and the worker so that each can run asynchronously in its own pool. When bootstrap.bind(getBindAddress()) is called, the bind operation is ultimately handled by the eventSunk method of NioServerSocketPipelineSink; the class name and method signature already suggest that it processes IO events. The eventSunk method is implemented as follows:

public void eventSunk(ChannelPipeline pipeline, ChannelEvent e) throws Exception {
    Channel channel = e.getChannel();
    if (channel instanceof NioServerSocketChannel) {
        handleServerSocket(e);
    } else if (channel instanceof NioSocketChannel) {
        handleAcceptedSocket(e);
    }
}

Since the server is still in the bind phase at this point, the channel is certainly not a NioSocketChannel, so execution goes into handleServerSocket, which eventually calls the bind method to bind a port and start the service. Here is the bind method implementation:

private void bind(NioServerSocketChannel channel, ChannelFuture future, SocketAddress localAddress) {
    boolean bound = false;
    boolean bossStarted = false;
    try {
        channel.socket.socket().bind(localAddress, channel.getConfig().getBacklog());
        bound = true;

        future.setSuccess();
        fireChannelBound(channel, channel.getLocalAddress());

        Executor bossExecutor = ((NioServerSocketChannelFactory) channel.getFactory()).bossExecutor;
        DeadLockProofWorker.start(bossExecutor,
                new ThreadRenamingRunnable(new Boss(channel),
                        "New I/O server boss #" + id + " (" + channel + ')'));
        bossStarted = true;
    } catch (Throwable t) {
        future.setFailure(t);
        fireExceptionCaught(channel, t);
    } finally {
        if (!bossStarted && bound) {
            close(channel, future);
        }
    }
}

You can see that it binds the socket, marks the asynchronous future as successful so callers are notified that the service started, and fires the channel-bound event. The next point of interest is bossExecutor: it is obtained from the NioServerSocketChannelFactory, and the boss inside the NioServerSocketChannelFactory is exactly the one we configured earlier, so we can be sure that the boss thread pool we set up is the one used here. What follows is starting that asynchronous pool and letting the boss do its work. Boss actually implements the Runnable interface, which is why it can be handed to the boss thread pool to run; the next focus is the Boss run method, which is where the boss does its work. Before that, let's look at what the Boss constructor does:

Boss(NioServerSocketChannel channel) throws IOException {
    this.channel = channel;

    selector = Selector.open();

    boolean registered = false;
    try {
        channel.socket.register(selector, SelectionKey.OP_ACCEPT);
        registered = true;
    } finally {
        if (!registered) {
            closeSelector();
        }
    }

    channel.selector = selector;
}

So the boss's initialization is really just registering the ServerSocketChannel with a selector, which is what enables asynchronous NIO processing. The Boss run method then looks like this:

public void run() {
    final Thread currentThread = Thread.currentThread();

    channel.shutdownLock.lock();
    try {
        for (;;) {
            try {
                if (selector.select() > 0) {
                    selector.selectedKeys().clear();
                }

                SocketChannel acceptedSocket = channel.socket.accept();
                if (acceptedSocket != null) {
                    registerAcceptedChannel(acceptedSocket, currentThread);
                }
            } catch (SocketTimeoutException e) {
                // Thrown every second to get ClosedChannelException raised.
            } catch (CancelledKeyException e) {
                // Raised by accept() when the server socket was closed.
            } catch (ClosedSelectorException e) {
                // Raised by accept() when the server socket was closed.
            } catch (ClosedChannelException e) {
                // Closed as requested.
                break;
            } catch (Throwable e) {
                logger.warn("Failed to accept a connection.", e);
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e1) {
                    // Ignore
                }
            }
        }
    } finally {
        channel.shutdownLock.unlock();
        closeSelector();
    }
}

The run method is an infinite loop that keeps waiting for client connections; whenever a client connects, registerAcceptedChannel is called for the follow-up processing.
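Stripped of Netty's bookkeeping, what the boss does is the standard java.nio accept idiom. Here is a minimal, self-contained sketch of that idiom (plain NIO with an assumed port, not Netty's actual code):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Illustrative sketch in plain java.nio (not Netty source): register the listening channel
// for OP_ACCEPT and loop, handing every accepted connection off for further processing.
public class AcceptLoop {
    public static void main(String[] args) throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(8080));        // hypothetical port

        Selector selector = Selector.open();
        server.register(selector, SelectionKey.OP_ACCEPT);

        for (;;) {
            if (selector.select(1000) > 0) {
                selector.selectedKeys().clear();
            }
            SocketChannel accepted = server.accept();    // non-blocking: null if nothing pending
            if (accepted != null) {
                // In Netty this is where registerAcceptedChannel() hands the channel to a worker.
                System.out.println("accepted " + accepted.getRemoteAddress());
                accepted.close();
            }
        }
    }
}

Netty's Boss does essentially the same thing, except that instead of handling the connection itself it passes the accepted channel to a worker via registerAcceptedChannel, shown next.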

private void registerAcceptedChannel(SocketChannel acceptedSocket, Thread currentThread) {
    try {
        ChannelPipeline pipeline = channel.getConfig().getPipelineFactory().getPipeline();
        NioWorker worker = nextWorker();
        worker.register(new NioAcceptedSocketChannel(
                channel.getFactory(), pipeline, channel,
                NioServerSocketPipelineSink.this, acceptedSocket, worker, currentThread), null);
    } catch (Exception e) {
        logger.warn("Failed to initialize an accepted socket.", e);
        try {
            acceptedSocket.close();
        } catch (IOException e2) {
            logger.warn("Failed to close a partially accepted socket.", e2);
        }
    }
}

The registerAcceptedChannel method assigns the client's channel to a worker, and the worker is obtained through the nextWorker method:

NioWorker nextWorker() {
    return workers[Math.abs(workerIndex.getAndIncrement() % workers.length)];
}

You can see that nextWorker distributes the client channels across the workers to keep the load balanced. You may wonder where this workers array comes from: it is created when the NioServerSocketChannelFactory mentioned above is initialized, which in turn constructs the NioServerSocketPipelineSink, and the number of workers is the worker count we passed in (or the default) when initializing NioServerSocketChannelFactory. registerAcceptedChannel then calls the worker's register method to register the client's channel with the worker.
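Before looking at register, here is a small illustrative round-robin sketch (a hypothetical helper class, not Netty source) to make the balancing idea concrete:

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch (hypothetical class, not Netty source): round-robin selection over a fixed
// pool, mirroring nextWorker(). Math.abs is needed because getAndIncrement() eventually wraps
// past Integer.MAX_VALUE, and the modulo of a negative index would itself be negative.
public class RoundRobin<T> {
    private final T[] pool;
    private final AtomicInteger index = new AtomicInteger();

    public RoundRobin(T[] pool) {
        this.pool = pool;
    }

    public T next() {
        return pool[Math.abs(index.getAndIncrement() % pool.length)];
    }
}

The worker's register method is shown next.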

void register(NioSocketChannel channel, ChannelFuture future) {
    boolean server = !(channel instanceof NioClientSocketChannel);
    Runnable registerTask = new RegisterTask(channel, future, server);
    Selector selector;

    synchronized (startStopLock) {
        if (!started) {
            .....
            this.selector = selector = Selector.open();
            .....
            DeadLockProofWorker.start(executor, new ThreadRenamingRunnable(this, threadName));
            success = true;
            .....
        } else {
            selector = this.selector;
        }

        assert selector != null && selector.isOpen();

        started = true;
        boolean offered = registerTaskQueue.offer(registerTask);
        assert offered;
    }

    if (wakenUp.compareAndSet(false, true)) {
        selector.wakeup();
    }
}

The register method above checks a started flag; if the worker has not been started yet, it is started now, which normally happens when the first client channel is registered with that worker. Since the worker also implements the Runnable interface, starting it mainly means running the worker in a thread and giving it a selector to monitor IO events. Here is how that is done:

DeadLockProofWorker.start(executor, new ThreadRenamingRunnable(this, threadName));
success = true;

The executor here is the workerExecutor we configured at the beginning. Once the worker has started, the next thing is to let the worker manage the client's channel:

Runnable registerTask = new RegisterTask(channel, future, server);
.......
boolean offered = registerTaskQueue.offer(registerTask);
assert offered;

The worker wraps the client channel into a RegisterTask and puts it into a queue; as you can see, RegisterTask also implements the Runnable interface. Who takes the tasks out of the queue after they are put in? It must of course be the worker itself. Starting the worker, as described above, means running it in a thread, and since the worker implements Runnable, the thread running the worker must be calling the worker's run method.
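This hand-off is a classic producer/consumer arrangement: the boss thread offers a task into the queue and wakes the worker's selector, and the worker drains the queue inside its own loop. A minimal sketch of the pattern (hypothetical class and names, not Netty source):

import java.io.IOException;
import java.nio.channels.Selector;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative sketch (hypothetical class, not Netty source) of the registerTaskQueue hand-off:
// another thread enqueues a task and wakes the selector; the loop thread drains the queue.
public class TaskLoop implements Runnable {
    private final Queue<Runnable> taskQueue = new ConcurrentLinkedQueue<>();
    private final Selector selector;

    public TaskLoop() throws IOException {
        this.selector = Selector.open();
    }

    // Called from another thread (the "boss"): enqueue the work and wake the select loop.
    public void register(Runnable task) {
        taskQueue.offer(task);
        selector.wakeup();
    }

    @Override
    public void run() {
        for (;;) {
            try {
                selector.select(1000);               // wait for IO events or an explicit wakeup()
                Runnable task;
                while ((task = taskQueue.poll()) != null) {
                    task.run();                      // e.g. register a freshly accepted channel
                }
                // ... process the selected keys here ...
            } catch (IOException e) {
                return;                              // give up on selector failure in this sketch
            }
        }
    }
}

Netty's NioWorker.run does the same kind of thing: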

public void run() {
    thread = Thread.currentThread();

    boolean shutdown = false;
    Selector selector = this.selector;
    for (;;) {
        .....
        try {
            SelectorUtil.select(selector);
            .....
            cancelledKeys = 0;
            processRegisterTaskQueue();
            processWriteTaskQueue();
            processSelectedKeys(selector.selectedKeys());
            .....
        } catch (Throwable t) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // Ignore.
            }
        }
    }
}

You can see that this run method is also an infinite loop that keeps polling the selector for IO events. It then calls three methods: processRegisterTaskQueue, processWriteTaskQueue, and processSelectedKeys. The method names already tell us what they do: the first processes the registerTaskQueue filled above and runs the tasks in it, processWriteTaskQueue handles pending write tasks, and processSelectedKeys handles the IO events reported by the selector. Let's look at processRegisterTaskQueue first.

private void processRegisterTaskQueue() throws IOException {
    for (;;) {
        final Runnable task = registerTaskQueue.poll();
        if (task == null) {
            break;
        }

        task.run();
        cleanUpCancelledKeys();
    }
}

As described above, the elements of registerTaskQueue are RegisterTask objects, so we need to look at the run method of RegisterTask. RegisterTask is an inner class of NioWorker, so it can access NioWorker's fields.

public void run() {
    SocketAddress localAddress = channel.getLocalAddress();
    SocketAddress remoteAddress = channel.getRemoteAddress();
    if (localAddress == null || remoteAddress == null) {
        if (future != null) {
            future.setFailure(new ClosedChannelException());
        }
        close(channel, succeededFuture(channel));
        return;
    }

    try {
        if (server) {
            channel.socket.configureBlocking(false);
        }

        synchronized (channel.interestOpsLock) {
            channel.socket.register(selector, channel.getRawInterestOps(), channel);
        }
        if (future != null) {
            channel.setConnected();
            future.setSuccess();
        }
    } catch (IOException e) {
        if (future != null) {
            future.setFailure(e);
        }
        close(channel, succeededFuture(channel));
        .....
    }

    if (!server) {
        if (!((NioClientSocketChannel) channel).boundManually) {
            fireChannelBound(channel, localAddress);
        }
        fireChannelConnected(channel, remoteAddress);
    }
}

As you can see, the main thing done here is to take the client channel that the boss assigned to this worker and associate it with the worker's selector, so that the worker can handle the IO events of that client channel.
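In plain java.nio terms, "handing a client channel to a worker" boils down to the following (an illustrative sketch, not Netty source):

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

// Illustrative sketch (plain java.nio, not Netty source): make the accepted channel non-blocking
// and register it with the worker's selector, attaching the channel object so it can be recovered
// from the SelectionKey when an IO event fires.
public final class WorkerRegistration {
    private WorkerRegistration() { }

    static SelectionKey register(Selector workerSelector, SocketChannel accepted) throws IOException {
        accepted.configureBlocking(false);
        return accepted.register(workerSelector, SelectionKey.OP_READ, accepted);
    }
}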

This completes the journey of a client connection: the boss accepts it, assigns it to a worker, and the worker becomes associated with the client's channel. A worker may serve many client channels, so the worker does not hold a direct reference to each channel; instead, the client channel is registered with the worker's selector, and the worker's run method keeps polling selector.select to process the channels. Now let's see what the worker does with the selector's IO events.

private void processSelectedKeys(Set<SelectionKey> selectedKeys) throws IOException {
    for (Iterator<SelectionKey> i = selectedKeys.iterator(); i.hasNext();) {
        SelectionKey k = i.next();
        i.remove();
        try {
            int readyOps = k.readyOps();
            if ((readyOps & SelectionKey.OP_READ) != 0 || readyOps == 0) {
                if (!read(k)) {
                    // Connection already closed - no need to handle write.
                    continue;
                }
            }
            if ((readyOps & SelectionKey.OP_WRITE) != 0) {
                writeFromSelectorLoop(k);
            }
        } catch (CancelledKeyException e) {
            close(k);
        }

        if (cleanUpCancelledKeys()) {
            break; // break the loop to avoid ConcurrentModificationException
        }
    }
}

The method above processes the IO events produced by the selector. If the current IO event is a read, the data is read from the channel in the SelectionKey and passed up to the Netty handlers; if the channel has become writable, writeFromSelectorLoop is triggered to see whether there is content that needs to be written out.

For writing data, Netty provides three entry points in the worker:

void writeFromUserCode(final NioSocketChannel channel) {
    if (!channel.isConnected()) {
        cleanUpWriteBuffer(channel);
        return;
    }

    if (scheduleWriteIfNecessary(channel)) {
        return;
    }

    if (channel.writeSuspended) {
        return;
    }

    if (channel.inWriteNowLoop) {
        return;
    }

    write0(channel);
}

void writeFromTaskLoop(final NioSocketChannel ch) {
    if (!ch.writeSuspended) {
        write0(ch);
    }
}

void writeFromSelectorLoop(final SelectionKey k) {
    NioSocketChannel ch = (NioSocketChannel) k.attachment();
    ch.writeSuspended = false;
    write0(ch);
}

writeFromUserCode is the entry point used from outside the worker; writeFromTaskLoop is triggered when the worker's run method calls processWriteTaskQueue; and writeFromSelectorLoop, as shown above, is called when the selector reports that the channel is writable.
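The reason for three entry points is the thread doing the write: writeFromUserCode may be called from any caller thread and therefore may need to schedule the write onto the worker, while the other two already run on the worker's IO thread. A rough sketch of that decision (hypothetical class, not Netty source):

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative sketch (hypothetical class, not Netty source): write immediately if we are already
// on the worker's IO thread, otherwise hand the write to the worker's write task queue so the
// IO thread performs it later.
public class WriteDispatch {
    private final Queue<Runnable> writeTaskQueue = new ConcurrentLinkedQueue<>();
    private volatile Thread ioThread; // set by the worker thread when its run() starts

    public void bindIoThread(Thread t) {
        this.ioThread = t;
    }

    public void write(String msg) {
        if (Thread.currentThread() == ioThread) {
            writeNow(msg);                             // writeFromTaskLoop / writeFromSelectorLoop case
        } else {
            writeTaskQueue.offer(() -> writeNow(msg)); // writeFromUserCode case: schedule onto the loop
        }
    }

    // Drained by the worker inside its select loop (cf. processWriteTaskQueue).
    public void drainPendingWrites() {
        Runnable task;
        while ((task = writeTaskQueue.poll()) != null) {
            task.run();
        }
    }

    private void writeNow(String msg) {
        System.out.println("writing: " + msg);
    }
}

That covers how the boss accepts connections, how a connection is handed to a worker, and how the worker reads from and writes to the client channels.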
