Netty ChannelPipeline: a detailed source-code analysis of stream processing

Source: Internet
Author: User

The API documentation on the official Netty site gives a few examples and diagrams when it introduces pipeline processing, and states the order in which upstream handlers and downstream handlers process events.

Reading the examples and conclusions alone makes it hard to see what really happens; it is better to write the code yourself and step through it in a debugger.


The following is an example.

Server:

public class Server {
    public static void main(String args[]) {
        ServerBootstrap bootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool()));
        bootstrap.setPipelineFactory(new PipelineFactoryTest());
        bootstrap.bind(new InetSocketAddress(8888));
    }
}

public class PipelineFactoryTest implements ChannelPipelineFactory {
    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("1", new UpstreamHandlerA());
        pipeline.addLast("2", new UpstreamHandlerB());
        pipeline.addLast("3", new DownstreamHandlerA());
        pipeline.addLast("4", new DownstreamHandlerB());
        pipeline.addLast("5", new UpstreamHandlerX());
        return pipeline;
    }
}

public class UpstreamHandlerA extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        Channel ctxChannel = ctx.getChannel();
        Channel eChannel = e.getChannel();
        System.out.println(ctxChannel.equals(eChannel)); // the handler context and the event share the same channel
        System.out.println("UpstreamHandlerA.messageReceived: " + e.getMessage());
        ctx.sendUpstream(e);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        System.out.println("UpstreamHandlerA.exceptionCaught: " + e.toString());
        e.getChannel().close();
    }
}

public class UpstreamHandlerB extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        System.out.println("UpstreamHandlerB.messageReceived: " + e.getMessage());
        ctx.sendUpstream(e);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        System.out.println("UpstreamHandlerB.exceptionCaught: " + e.toString());
        e.getChannel().close();
    }
}

public class UpstreamHandlerX extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        System.out.println("UpstreamHandlerX.messageReceived: " + e.getMessage());
        e.getChannel().write(e.getMessage());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        System.out.println("UpstreamHandlerX.exceptionCaught");
        e.getChannel().close();
    }
}

public class DownstreamHandlerA extends SimpleChannelDownstreamHandler {
    @Override
    public void handleDownstream(ChannelHandlerContext ctx, ChannelEvent e) throws Exception {
        System.out.println("DownstreamHandlerA.handleDownstream");
        super.handleDownstream(ctx, e);
    }
}

public class DownstreamHandlerB extends SimpleChannelDownstreamHandler {
    @Override
    public void handleDownstream(ChannelHandlerContext ctx, ChannelEvent e) throws Exception {
        System.out.println("DownstreamHandlerB.handleDownstream");
        super.handleDownstream(ctx, e);
    }
}

Client:



public class AppStoreClientBootstrap {
    public static void main(String args[]) {
        ExecutorService bossExecutor = Executors.newCachedThreadPool();
        ExecutorService workerExecutor = Executors.newCachedThreadPool();
        ChannelFactory channelFactory = new NioClientSocketChannelFactory(bossExecutor, workerExecutor);
        ClientBootstrap bootstrap = new ClientBootstrap(channelFactory);
        bootstrap.setPipelineFactory(new AppClientChannelPipelineFactory());
        ChannelFuture future = bootstrap.connect(new InetSocketAddress("localhost", 8888));
        future.awaitUninterruptibly();
        if (future.isSuccess()) {
            String msg = "hello world";
            ChannelBuffer buffer = ChannelBuffers.buffer(msg.length());
            buffer.writeBytes(msg.getBytes());
            future.getChannel().write(buffer);
        }
    }
}

public class AppClientChannelPipelineFactory implements ChannelPipelineFactory {

    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        // pipeline.addLast("encode", new StringEncoder());
        pipeline.addLast("handler", new AppStoreClientHandler());
        return pipeline;
    }
}

public class AppStoreClientHandler extends SimpleChannelUpstreamHandler {

    private static Logger log = Logger.getLogger(AppStoreClientHandler.class);

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {
        super.exceptionCaught(ctx, e);
    }
}

  

The example above demonstrates the propagation order of upstream and downstream events:

Upstream: 1 -> 2 -> 5, processed in pipeline order
Downstream: 4 -> 3, processed in reverse order
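
For reference, with the client above sending "hello world", the server console output should look roughly like this (a sketch; the exact rendering of the ChannelBuffer in each line depends on its toString()):

true
UpstreamHandlerA.messageReceived: <ChannelBuffer>
UpstreamHandlerB.messageReceived: <ChannelBuffer>
UpstreamHandlerX.messageReceived: <ChannelBuffer>
DownstreamHandlerB.handleDownstream
DownstreamHandlerA.handleDownstream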

================================================================================

So far, so good. But why does it behave this way?

 

On the server side, after bind() the server sits in a select loop. If no client sends a request, it simply keeps looping; processing is only activated when a new client connection arrives.

The code looks like this:

 

NioServerSocketPipelineSink class ......................

public void run() {
    final Thread currentThread = Thread.currentThread();
    channel.shutdownLock.lock();
    try {
        for (;;) {
            try {
                if (selector.select(1000) > 0) {
                    selector.selectedKeys().clear();
                }
                // After the server starts, if no client sends a request, this loop keeps running.
                SocketChannel acceptedSocket = channel.socket.accept();
                if (acceptedSocket != null) {
                    registerAcceptedChannel(acceptedSocket, currentThread);
                }
                ......................................

Once a client connection arrives and is accepted, the channel is registered. The following code is used:

 

private void registerAcceptedChannel(SocketChannel acceptedSocket, Thread currentThread) {
    try {
        ChannelPipeline pipeline =
                channel.getConfig().getPipelineFactory().getPipeline();
        NioWorker worker = nextWorker();
        worker.register(new NioAcceptedSocketChannel(
                channel.getFactory(), pipeline, channel,
                NioServerSocketPipelineSink.this, acceptedSocket,
                worker, currentThread), null);
    } catch (Exception e) {
        logger.warn(
                "Failed to initialize an accepted socket.", e);
        try {
            acceptedSocket.close();
        } catch (IOException e2) {
            logger.warn(
                    "Failed to close a partially accepted socket.",
                    e2);
        }
    }
}

The important part is the call to channel.getConfig().getPipelineFactory().getPipeline(): it fetches the ChannelPipeline built in the PipelineFactoryTest class, with all of its handlers, and the accepted channel is then handed to a NioWorker for processing.

Now focus on the pipeline.addLast method:

 

public synchronized void addLast(String name, ChannelHandler handler) {
    if (name2ctx.isEmpty()) {
        init(name, handler);
    } else {
        checkDuplicateName(name);
        DefaultChannelHandlerContext oldTail = tail;
        DefaultChannelHandlerContext newTail = new DefaultChannelHandlerContext(oldTail, null, name, handler);
        callBeforeAdd(newTail);
        oldTail.next = newTail;
        tail = newTail;
        name2ctx.put(name, newTail);
        callAfterAdd(newTail);
    }
}

DefaultChannelHandlerContext is a linked-list node: through its next and prev references it stores the various upstream and downstream handlers (this is the key point).
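
As an illustration (a conceptual sketch, not Netty source), after the five addLast() calls in PipelineFactoryTest the context list looks like this:

head                                                    tail
"1"(up) <-> "2"(up) <-> "3"(down) <-> "4"(down) <-> "5"(up)

Upstream events walk the next pointers from the head and only visit contexts whose handlers can handle upstream events (1, 2, 5); downstream events walk the prev pointers from the tail and only visit downstream-capable contexts (4, then 3).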

Because upstream handlers are the ones responsible for receiving data, when a client sends data the event travels through the upstream handlers of the PipelineFactoryTest pipeline in order.

The following code shows why the propagation is sequential. If you look carefully, each of the three upstream handlers in PipelineFactoryTest contains the call

ctx.sendUpstream(e);

(a ChannelHandlerContext is the context wrapped around each handler). Inside this method, the current upstream handler passes the event on to the next upstream handler, a typical chain-of-responsibility pattern.

The code is as follows:

public void sendUpstream(ChannelEvent e) {
    DefaultChannelHandlerContext next = getActualUpstreamContext(this.next);
    if (next != null) {
        DefaultChannelPipeline.this.sendUpstream(next, e); // the next upstream handler is triggered immediately
    }
}

DefaultChannelHandlerContext getActualUpstreamContext(DefaultChannelHandlerContext ctx) {
    if (ctx == null) {
        return null;
    }
    DefaultChannelHandlerContext realCtx = ctx;
    while (!realCtx.canHandleUpstream()) {
        realCtx = realCtx.next;
        if (realCtx == null) {
            return null;
        }
    }
    return realCtx;
}

As mentioned above, DefaultChannelHandlerContext stores the handlers as a linked list, so the next upstream-capable handler is looked up here and the event is then passed to it.

Because all the upstream handlers share the same event, they also share the same ChannelBuffer. This is essentially the chain-of-responsibility pattern, and it can also be used for filter-style processing.

With that in mind, Netty's various encoders (downstream handlers) and decoders (upstream handlers) are easy to understand.
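
For example, a hand-rolled decoder/encoder pair could look like the sketch below. This is my own illustration rather than Netty's StringDecoder/StringEncoder, and the class names are made up: the decoder is an upstream handler that turns the shared ChannelBuffer into a String and re-fires the event, and the encoder is a downstream handler that turns a String back into a ChannelBuffer on write.

import java.nio.charset.Charset;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelDownstreamHandler;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Upstream "decoder": ChannelBuffer -> String
public class SimpleStringDecoder extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        ChannelBuffer buf = (ChannelBuffer) e.getMessage();
        String decoded = buf.toString(Charset.forName("UTF-8"));
        // Fire a new upstream MessageEvent carrying the decoded String,
        // so the next upstream handler sees a String instead of a buffer.
        Channels.fireMessageReceived(ctx, decoded, e.getRemoteAddress());
    }
}

// Downstream "encoder": String -> ChannelBuffer
class SimpleStringEncoder extends SimpleChannelDownstreamHandler {
    @Override
    public void writeRequested(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        String msg = (String) e.getMessage();
        // Replace the String with a ChannelBuffer and keep the write moving downstream.
        Channels.write(ctx, e.getFuture(),
                ChannelBuffers.copiedBuffer(msg, Charset.forName("UTF-8")),
                e.getRemoteAddress());
    }
}

In a real pipeline the decoder would sit before the business handlers, and the encoder before (closer to the head than) whatever handler issues the write.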

Likewise, let's analyze the downstream handlers.

The UpstreamHandlerX class calls e.getChannel().write(e.getMessage()). This triggers a DownstreamMessageEvent, which is routed to the corresponding downstream handlers.

The event is then passed from one downstream handler to the next (DownstreamHandlerB, then DownstreamHandlerA) through super.handleDownstream(ctx, e).

 

// Channel.write(Object), in AbstractChannel
public ChannelFuture write(Object message) {
    return Channels.write(this, message);
}

// Channels.write(...) creates the DownstreamMessageEvent and sends it down the pipeline
public static ChannelFuture write(Channel channel, Object message, SocketAddress remoteAddress) {
    ChannelFuture future = future(channel);
    channel.getPipeline().sendDownstream(
            new DownstreamMessageEvent(channel, future, message, remoteAddress));
    return future;
}
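
As a side note on the ChannelFuture created above: Channel.write() is asynchronous, and the returned future completes only after the DownstreamMessageEvent has passed through the downstream handlers and the sink has actually written the bytes. A minimal usage sketch (my own illustration; the class and method names are made up):

import java.nio.charset.Charset;

import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;

public class WriteExample {
    // Writes a message and logs when the asynchronous write has completed.
    static void writeAndLog(Channel channel) {
        ChannelFuture f = channel.write(
                ChannelBuffers.copiedBuffer("hello world", Charset.forName("UTF-8")));
        f.addListener(new ChannelFutureListener() {
            public void operationComplete(ChannelFuture future) {
                System.out.println("write finished, success = " + future.isSuccess());
            }
        });
    }
}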

  

 

Writing this post took quite a lot of thought. After a day of digging through the source I gained a lot, so I am recording it here.

