Reproduced Netty Source Code Analysis

Source: Internet
Author: User

Reprinted from http://blog.csdn.net/kobejayandy/article/details/11836813

Netty is an asynchronous, event-driven network application framework and toolkit for the rapid development of high-performance, high-reliability network servers and clients [official definition]. Overall it offers:

1. Rich protocol codec support

2. Its own buffer system, which reduces the cost of copying

3. A complete channel implementation of its own

4. An event-based processing flow, with complete network event handling and extension points

5. Rich examples.

This article does not analyze problems that may arise in the actual use of Netty; instead it examines Netty's architecture and some key implementation details from a code perspective.

First, look at how Netty is used (its own examples demonstrate this very well). Netty is generally started through a bootstrap, and bootstraps fall into two categories: 1. connection-oriented (TCP) bootstraps (ClientBootstrap and ServerBootstrap); 2. connectionless (UDP) bootstraps (ConnectionlessBootstrap).

Netty's overall architecture divides clearly into two parts: the ChannelFactory and the ChannelPipelineFactory. The former mainly produces the channel instances and ChannelSink instances used for network communication; the ChannelFactory implementations Netty provides can satisfy the vast majority of users, though you can also write your own. The latter focuses on processing the data being transferred, and can also cover other concerns such as exception handling; as long as you want to, you can add the corresponding handler. The ChannelPipelineFactory is generally implemented by the user, because processing the transferred data and other operations are closely tied to the business and require custom handlers.

With that, the steps to use Netty are quite clear. For connection-oriented (TCP) server and client use:

Step one: instantiate a bootstrap, passing a ChannelFactory implementation to its constructor.

Step two: register your own ChannelPipelineFactory implementation with the bootstrap instance.

Step three: on the server side, call bootstrap.bind(new InetSocketAddress(port)) and wait for clients to connect; on the client side, call bootstrap.connect(new InetSocketAddress(host, port)) to obtain a future. Netty then connects to the remote host, and once the connection completes it fires a ChannelStateEvent of type CONNECTED, which starts flowing through your custom pipeline; if one of your registered handlers has a method that responds to that event, it will be called. After that, data transfer begins.

The following is an annotated walkthrough of a simple client.
[Java]
// Instantiate a client bootstrap; the NioClientSocketChannelFactory instance is provided by Netty
ClientBootstrap bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));

// Set the pipeline factory, implemented by the client itself
bootstrap.setPipelineFactory(new FactorialClientPipelineFactory(count));

// Initiate a connection to the destination address
ChannelFuture connectFuture =
        bootstrap.connect(new InetSocketAddress(host, port));

// Wait for the connection to succeed; the CONNECTED event fired on success causes the handler
// to start sending messages and waiting for messageReceived (that is just how this example works)
Channel channel = connectFuture.awaitUninterruptibly().getChannel();

// Get the user-defined handler
FactorialClientHandler handler =
        (FactorialClientHandler) channel.getPipeline().getLast();

// Take the result from the handler and print it. Note that handler.getFactorial() blocks on a
// result queue, which is filled with the received data when the messageReceived event occurs.
System.err.format(
        "Factorial of %,d is: %,d", count, handler.getFactorial());
[/java]
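For comparison, here is a minimal server-side sketch along the same lines. It follows the three steps above using the Netty 3.x API; the pipeline factory contents, the handler MyServerHandler, and the port are placeholders, not part of the original example.
[Java]
// Boss threads accept connections; worker threads handle the read/write.
ServerBootstrap bootstrap = new ServerBootstrap(
        new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),    // boss pool
                Executors.newCachedThreadPool()));  // worker pool

// Placeholder pipeline factory; a real server would add codec and business handlers here.
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        return Channels.pipeline(new MyServerHandler());  // MyServerHandler is hypothetical
    }
});

// Bind and start accepting connections.
bootstrap.bind(new InetSocketAddress(8080));
[/java]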
Netty provides both NIO and BIO (OIO) modes for this logic. In NIO mode, one boss thread waits for connections to arrive, and several worker threads (one is picked from the worker pool and assigned to the channel instance, because the channel instance holds the real Java network object) take over the channels handed off by the boss thread, read and write their data, and fire the corresponding events into the pipeline for processing. In BIO (OIO) mode the server side still uses a boss thread to wait for incoming connections, but on the client side the connection is made directly by the main thread; writes on both client and server are likewise done directly by the calling thread, while reads are performed by a worker thread in a blocking fashion (it waits until data is read, unless the channel is closed).
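As a rough sketch of how the worker pool size can be pinned down explicitly (my assumption here is the Netty 3.x constructor overload that takes a worker count; if it is omitted, the default is roughly twice the number of CPU cores, as noted above):
[Java]
// One boss executor, one worker executor, and an explicit number of worker threads.
NioServerSocketChannelFactory factory = new NioServerSocketChannelFactory(
        Executors.newCachedThreadPool(),                   // boss executor
        Executors.newCachedThreadPool(),                   // worker executor
        Runtime.getRuntime().availableProcessors() * 2);   // worker count
[/java]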

Network activity boils down, at its simplest, to bind -> accept -> read -> write on the server side and connect -> read -> write on the client side. Since one bind or connect is usually followed by many reads and writes, bind/accept are separated from the read/write threads on the server, and connect is separated from the read/write threads on the client. The benefit is that both server and client throughput rise and the machine's processing power is used effectively, instead of being stuck on a single network connection. However, once the machine's processing power is fully utilized, this approach may lose more performance to overly frequent thread switching than it gains, and the complexity of this processing model is higher.

The mechanism used to respond to network events matters a great deal for network throughput. Netty uses the standard SEDA (Staged Event-Driven Architecture) [http://en.wikipedia.org/wiki/Staged_event-driven_architecture]: the event types it defines represent the various stages of a network interaction, and at each stage the corresponding event is fired and processed by the pipeline instance created at initialization. Event handling starts with the static methods of the Channels class, which pass the event and the channel to the pipeline that the channel holds. Almost all methods of Channels are static, which gives a proxy-like effect: anywhere in the project you can call its static methods to trigger a fixed event flow without caring about the concrete processing.

Some of the event-flow static methods on Channels:
1. fireChannelOpen 2. fireChannelBound 3. fireChannelConnected 4. fireMessageReceived 5. fireWriteComplete 6. fireChannelInterestChanged
7. fireChannelDisconnected 8. fireChannelUnbound 9. fireChannelClosed 10. fireExceptionCaught 11. fireChildChannelStateChanged
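As a minimal sketch (assuming a Netty 3.x Channel reference is at hand; the helper class and method names here are placeholders), these static methods can be called from anywhere to push events into the pipeline:
[Java]
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.Channels;

public class EventFiringExample {
    // Fire an upstream messageReceived event into the channel's pipeline,
    // much as a worker thread does after reading bytes off the socket.
    public static void simulateReceive(Channel channel, byte[] data) {
        Channels.fireMessageReceived(channel, ChannelBuffers.wrappedBuffer(data));
    }

    // Fire a downstream write event; it flows through the pipeline and
    // finally reaches the ChannelSink, which performs the real I/O.
    public static void send(Channel channel, Object message) {
        Channels.write(channel, message);
    }
}
[/java]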

Netty provides a comprehensive and rich set of network event types, dividing network events into two kinds: upstream and downstream. In general, upstream events are those Netty feeds back to the application, such as messageReceived and channelConnected, while downstream events are initiated by the caller or the framework itself, such as bind, write, connect, and close.

Netty's upstream/downstream event split also divides handlers into three kinds: those that handle only upstream events, those that handle only downstream events, and those that handle both. A concrete handler distinguishes itself by implementing the ChannelUpstreamHandler and/or ChannelDownstreamHandler interfaces. When a downstream or upstream event occurs, the pipeline calls the handlers whose type matches the event. ChannelPipeline maintains an ordered list of all handlers, and each handler itself controls whether the event continues to flow to the next handler (for example via ctx.sendDownstream(e)); the advantage of this design is that the flow can be terminated at any point once the business goal is achieved, without continuing on to the next handler. The following code finds the next context able to handle an upstream event.
[Java]
DefaultChannelHandlerContext realCtx = ctx;
while (!realCtx.canHandleUpstream()) {
    realCtx = realCtx.next;
    if (realCtx == null) {
        return null;
    }
}

return realCtx;
[/java]
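To make the handler split concrete, here is a minimal sketch (the class name and message handling are placeholders, not from the original example) of an upstream handler that either consumes a message or passes the event on:
[Java]
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class MyUpstreamHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        Object msg = e.getMessage();
        if (msg instanceof String) {
            // Business goal achieved: consume the message here and stop the flow
            // by simply not forwarding the event.
            System.out.println("Handled: " + msg);
        } else {
            // Not ours: pass the event to the next upstream handler in the pipeline.
            ctx.sendUpstream(e);
        }
    }
}
[/java]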
If the event marks the end of a network exchange, such as messageReceived, it is possible to finish the whole exchange directly in one of the handlers and hand the data to the upper-level application. But if it is a mid-flight event, such as the connect event, then when it is triggered it passes through the pipeline and finally reaches the ChannelSink instance mounted at the bottom of the pipeline, whose main job is sending and receiving requests and reading and writing data.

In NIO mode a ChannelSink typically has one boss instance (implementing Runnable) and several worker instances (defaulting to CPU cores * 2 workers if not set), as mentioned earlier. The boss thread is triggered differently for client-side and server-side ChannelSinks: the client-side boss thread is started when the connect event occurs and mainly watches for the connection to succeed; if it does, a worker thread is started and the connected channel is handed over to it for the remaining work. The server-side boss thread is started on the bind event, and its work is fairly straightforward: it hands the requests accepted by channel.socket().accept() to a NioWorker. It is worth mentioning that the server-side ChannelSink implementation is special: in both NioServerSocketPipelineSink and OioServerSocketPipelineSink, the eventSunk method handles ServerSocketChannel and SocketChannel separately, mainly because each connection the boss thread accept()s produces a new SocketChannel that is given to a worker for data reception.
[Java]
public void eventSunk(
        ChannelPipeline pipeline, ChannelEvent e) throws Exception {
    Channel channel = e.getChannel();
    if (channel instanceof NioServerSocketChannel) {
        handleServerSocket(e);
    } else if (channel instanceof NioSocketChannel) {
        handleAcceptedSocket(e);
    }
}

NioWorker worker = nextWorker();
worker.register(new NioAcceptedSocketChannel(
        channel.getFactory(), pipeline, channel,
        NioServerSocketPipelineSink.this, acceptedSocket,
        worker, currentThread), null);
[/java]
In addition, both kinds of channel instance go through the following sequence:
[Java]
setConnected();
fireChannelOpen(this);
fireChannelBound(this, getLocalAddress());
fireChannelConnected(this, getRemoteAddress());
[/java]
The processing code inside the corresponding ChannelSink differs from the ServerSocketChannel case, because execution goes through handleAcceptedSocket(e). Judging from the default implementation, calling fireChannelOpen(this), fireChannelBound(this, getLocalAddress()) and fireChannelConnected(this, getRemoteAddress()) at instantiation time does not achieve much there, but it does carry special meaning for a ChannelSink you implement yourself. I have not worked out a concrete use, but it lets the user intervene in the process between the server accepting a connection and becoming ready to read and write data.
[Java]
switch (state) {
case OPEN:
    if (Boolean.FALSE.equals(value)) {
        channel.worker.close(channel, future);
    }
    break;
case BOUND:
case CONNECTED:
    if (value == null) {
        channel.worker.close(channel, future);
    }
    break;
case INTEREST_OPS:
    channel.worker.setInterestOps(channel, future, ((Integer) value).intValue());
    break;
}
[/java]
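For reference, the INTEREST_OPS branch is what an application-level call such as the following ends up driving (a minimal sketch; channel is assumed to be any connected Netty 3.x Channel):
[Java]
// Suspend reads on this channel (clears OP_READ); the request travels downstream
// through the pipeline to the ChannelSink, which asks the worker to change interestOps.
channel.setReadable(false);

// Resume reads later, for example once the application has drained its backlog.
channel.setReadable(true);
[/java]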
Netty provides a large number of handlers for processing network data, most of them codec-related in order to support multiple protocols. (The original post illustrates the handlers available at this stage with a diagram, in which the parts marked in red are incomplete.)

Netty wraps things further by implementing its own ByteBuffer system, whose unified interface is ChannelBuffer. Broadly, the interface defines two kinds of methods. The first kind, getXXX(int index, ...) and setXXX(int index, ...), requires you to specify the position in the buffer to operate on; simply put, it manipulates the underlying buffer directly and does not use Netty's highly reusable buffer features, so Netty itself calls these methods very little. The second kind, readXXX() and writeXXX(), requires no position. These are implemented in AbstractChannelBuffer, whose main job is to maintain the buffer's position state: readerIndex and writerIndex, plus markedReaderIndex and markedWriterIndex for backtracking. When the user calls readXXX() or writeXXX(), AbstractChannelBuffer computes the position from the readerIndex/writerIndex it maintains and then calls the getXXX(int index, ...) or setXXX(int index, ...) of the ChannelBuffer subclass to produce the result. These methods are used extensively inside Netty, because the greatest benefit of this design is that buffers can be reused easily without having to maintain indexes yourself or create large numbers of ByteBuffers.
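A minimal sketch of the two method families, using the Netty 3.x ChannelBuffers helper (the values are arbitrary):
[Java]
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;

ChannelBuffer buf = ChannelBuffers.dynamicBuffer();

// Index-free family: readerIndex/writerIndex are maintained for you.
buf.writeInt(42);              // advances writerIndex by 4
buf.writeInt(7);               // advances writerIndex by 4
int first = buf.readInt();     // reads at readerIndex 0, advances it to 4

// Index-based family: operates directly on the underlying storage,
// without touching readerIndex or writerIndex.
int second = buf.getInt(4);    // peeks at absolute offset 4, indexes unchanged
[/java]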

In addition, the WrappedChannelBuffer implementations act as proxies for a ChannelBuffer: their purpose is to reuse the underlying buffer while changing some aspect of its role. For example, a buffer that was originally readable and writable can be wrapped into a ReadOnlyChannelBuffer, after which only the readXXX() and getXXX() methods can be used, i.e. it becomes read-only while the underlying buffer stays the same. Similarly, wrapping a readable and writable ChannelBuffer into a TruncatedChannelBuffer makes the new buffer ignore part of the wrapped buffer's data and lets you specify a new writerIndex, which is equivalent to a slice.
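A small sketch of the wrapping idea, assuming the Netty 3.x entry points ChannelBuffers.unmodifiableBuffer and ChannelBuffer.slice (the contents are arbitrary):
[Java]
ChannelBuffer original = ChannelBuffers.dynamicBuffer();
original.writeBytes("hello world".getBytes());

// Read-only view over the same underlying storage (a ReadOnlyChannelBuffer under the hood);
// write/set calls on it throw, while reads see the original data.
ChannelBuffer readOnly = ChannelBuffers.unmodifiableBuffer(original);

// A view restricted to a sub-range of the original buffer, similar in spirit to the
// TruncatedChannelBuffer behaviour described above.
ChannelBuffer firstWord = original.slice(0, 5);   // "hello"
[/java]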

Netty also implements its own complete channel system. The channel, as described, is another layer of encapsulation over Java networking, combined with the SEDA characteristics (event-based, asynchronous, multi-threaded, and so on); the actual network communication ultimately relies on the underlying Java network API. Speaking of asynchrony, Netty's future system has to be mentioned: by the Channel interface's definition, write, bind, connect, disconnect, unbind, close, and even setInterestOps all return a ChannelFuture, trigger the related network event, and flow through the pipeline. Many Channel method calls are therefore not executed immediately at the lowest level; they fire an event, which walks through the pipeline, and the related operation is finally performed in the ChannelSink. If actual network I/O is involved, the final call goes back to the channel, that is, to ServerSocketChannel, SocketChannel, ServerSocket, Socket and other native Java network APIs, instances of which are held by the (JBoss) Netty channel implementations.
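A minimal sketch of this asynchronous ChannelFuture style in the Netty 3.x API (channel and the message are placeholders):
[Java]
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;

// write() returns immediately; the actual I/O happens later in the sink/worker.
ChannelFuture writeFuture = channel.write(ChannelBuffers.copiedBuffer("ping".getBytes()));

writeFuture.addListener(new ChannelFutureListener() {
    public void operationComplete(ChannelFuture future) {
        if (!future.isSuccess()) {
            future.getChannel().close();   // e.g. give up on a failed write
        }
    }
});
[/java]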

Newer versions of Netty have a zero-copy feature that lets file contents be transmitted directly to the corresponding channel without CPU involvement, saving one memory copy. Internally, ChunkedFile and FileRegion form the non-zero-copy and zero-copy mechanisms for transferring file contents: the former requires CPU participation, while the latter, depending on whether the operating system supports zero-copy, transfers the file to the given channel without CPU involvement, again saving a memory copy. ChunkedFile mainly uses file APIs such as read and readFully, and FileRegion uses FileChannel's transferTo API; neither is complicated. The zero-copy property itself still depends on the operating system; the code is nothing special.
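A rough sketch of the two transfer paths using the Netty 3.x classes DefaultFileRegion and ChunkedFile (the file path is a placeholder, and the ChunkedFile path additionally assumes a ChunkedWriteHandler in the pipeline):
[Java]
import java.io.RandomAccessFile;
import org.jboss.netty.channel.DefaultFileRegion;
import org.jboss.netty.handler.stream.ChunkedFile;

RandomAccessFile raf = new RandomAccessFile("/tmp/data.bin", "r");
long length = raf.length();

// Zero-copy path: the sink ends up calling FileChannel.transferTo(),
// so the bytes need not pass through user space if the OS supports it.
channel.write(new DefaultFileRegion(raf.getChannel(), 0, length));

// Non-zero-copy path: the file is read into buffers chunk by chunk (CPU involved);
// requires a ChunkedWriteHandler in the pipeline.
channel.write(new ChunkedFile(raf, 0, length, 8192));
[/java]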

Finally, Netty's architectural ideas and implementation details are genuinely eye-opening; the points that demand attention in Java network I/O can be said to have been handled fairly completely by Netty. Netty's author is also the author of another NIO framework, MINA, and has accumulated a great deal of practical experience. This article, however, is only a beginner's first impression of Netty, and I am not yet able to explain the purpose of every particular detail.

