The basic network model, along with plain IO and NIO, was covered earlier, so with NIO we can already build non-blocking servers. Why go further? Because in practice we chase efficiency, and a number of frameworks have grown up around NIO to encapsulate its rough edges. The most famous of these is Netty.
The previous posts in this series can be found here:
- Basic knowledge and concepts of network IO
- Plain IO and a BIO server
- Using NIO, with a server "Hello World"
- Using Netty, with a server "Hello World"
Why use an open source framework?
This question almost answers itself: a framework is certainly more capable than the bare native APIs, and reinventing the wheel is rarely the efficient choice. So, first of all, what are the drawbacks of raw NIO?
- NIO's class libraries and APIs are fairly complex, for example the use of Buffer
- Selector code is complicated to write, and after registering events the business logic tends to become tightly coupled with the event-handling plumbing
- It demands solid multithreading knowledge and familiarity with network programming
- Handling disconnection and reconnection, packet loss, sticky packets, and similar problems is complicated
- NIO has a notorious bug: by most accounts an empty Selector spin (the epoll bug) that drives the CPU to 100%; if you are interested, the details are on the JDK bug tracker
Given these problems, there was an urgent need for a general-purpose framework to make working programmers' lives easier. The two best-known NIO frameworks are Mina and Netty, and there is a small backstory worth telling:
First, look at Mina's main contributors:
Then look at Netty's main contributors:
Summing up, there are a few points:
- The main contributor to both Mina and Netty is the same person: Trustin Lee, of Line Corp in South Korea
- Mina started around 2006 and had essentially stopped being maintained by about 2014-2015
- Netty started in 2009 and is still actively maintained, mainly by Norman Maurer of Apple
- Norman Maurer is the author of the book "Netty in Action"
- So if you have to choose, it should be clear which to pick. In addition, Mina places higher demands on the underlying system, while in China the Netty community is more active, with people such as Li Lin (author of "Netty: The Authoritative Guide") promoting it.
After all that rambling, the conclusion is simple: Netty has a future, and learning it is the right call.
Netty Introduction
By definition, Netty is an asynchronous, event-driven network application framework for building high-performance, high-reliability servers and clients. Its main advantages are:
- Elegant framework design: the underlying model can be switched freely to adapt to different network protocol requirements
- Built-in support for many standard protocols, security, and encoding/decoding
- Solves many of the usability problems of raw NIO
- An active community, and it is used in many open source frameworks such as Dubbo, RocketMQ, and Spark
The main supported features and capabilities are:
- Core: a zero-copy-capable buffer that is very pleasant to use (this is interesting, more on it later); a unified API; a standard, extensible event model
- Transport support: pipe communication (honestly, I am not sure yet what this one is for; advice from the old hands is welcome); HTTP tunneling; TCP and UDP
- Protocol support: text-based and binary protocols, compression and decompression, large file transfer, streaming media, protobuf encoding/decoding, security and authentication, HTTP and WebSocket
In short, there are plenty of ready-made features that developers can use directly; a small taste follows.
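As that small taste, here is a rough sketch (my own illustration, not code from this post's repository; the class name TextLineInitializer is made up for the example) of how a few of Netty's built-in codecs can be dropped into a pipeline so that business handlers see plain Strings instead of raw bytes:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LineBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;
import io.netty.util.CharsetUtil;

// Hypothetical initializer: wires Netty's ready-made line framing and String codecs
// so the business handler works directly with String messages.
public class TextLineInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new LineBasedFrameDecoder(1024))            // split the inbound byte stream on '\n'
          .addLast(new StringDecoder(CharsetUtil.UTF_8))       // ByteBuf -> String
          .addLast(new StringEncoder(CharsetUtil.UTF_8))       // String -> ByteBuf on the way out
          .addLast(new SimpleChannelInboundHandler<String>() { // business logic sees plain Strings
              @Override
              protected void channelRead0(ChannelHandlerContext ctx, String line) {
                  ctx.writeAndFlush("echo: " + line + "\n");
              }
          });
    }
}
```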
A Small Netty Server Example
Netty-based server programming can be seen as a Reactor model:
That is, one thread pool (possibly a single thread, the boss group) accepts connections, and another thread pool (the worker group) handles them. The boss is responsible for accepting connections and monitoring IO readiness; the workers are responsible for the subsequent processing. To make Netty easier to understand, look directly at the code:
```java
package cn.xingoo.book.netty.chap04;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

import java.net.InetSocketAddress;
import java.nio.charset.Charset;

public class NettyNioServer {

    public void serve(int port) throws InterruptedException {
        final ByteBuf buffer = Unpooled.unreleasableBuffer(
                Unpooled.copiedBuffer("Hi\r\n", Charset.forName("UTF-8")));

        // Step 1: create the boss and worker thread pools
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            // Step 2: create the bootstrap
            ServerBootstrap b = new ServerBootstrap();
            // Step 3: configure the components
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .localAddress(new InetSocketAddress(port))
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel socketChannel) throws Exception {
                     socketChannel.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelActive(ChannelHandlerContext ctx) throws Exception {
                             ctx.writeAndFlush(buffer.duplicate())
                                .addListener(ChannelFutureListener.CLOSE);
                         }
                     });
                 }
             });
            // Step 4: bind and start listening
            ChannelFuture f = b.bind().sync();
            f.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully().sync();
            workerGroup.shutdownGracefully().sync();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        NettyNioServer server = new NettyNioServer();
        server.serve(5555);
    }
}
```
The code is quite short. If you want to switch to blocking IO, you only need to swap the channel implementation classes used by the bootstrap:
```java
public class NettyOioServer {
    public void serve(int port) throws InterruptedException {
        final ByteBuf buf = Unpooled.unreleasableBuffer(
                Unpooled.copiedBuffer("Hi\r\n", Charset.forName("UTF-8")));
        EventLoopGroup bossGroup = new OioEventLoopGroup(1);
        EventLoopGroup workerGroup = new OioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)            // configure the boss and worker groups
             .channel(OioServerSocketChannel.class)    // use the blocking (OIO) server channel
             ....
```
In summary, Netty involves the following major components:
- Bootstrap: the container that wires the other Netty components together; on the server side of a TCP connection it is ServerBootstrap
- Channel: represents a socket connection
- EventLoopGroup: a group containing multiple EventLoops; it can be understood as a thread pool
- EventLoop: handles the work of specific Channels; one EventLoop can serve multiple Channels
- ChannelPipeline: each Channel is bound to a pipeline, on which the Handlers carrying the processing logic are registered
- Handler: does the actual processing of messages or connection events; there are two kinds, inbound and outbound, which handle received messages and outgoing messages respectively
- ChannelFuture: represents the result of an asynchronous operation; callbacks can be registered on it (see the sketch right after this list)
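Since ChannelFuture appears in the server example above only via sync(), here is a minimal sketch (my own, assuming b is a ServerBootstrap configured as before) of registering a callback on it instead of blocking:

```java
// Assuming 'b' is a configured ServerBootstrap as in the example above:
ChannelFuture bindFuture = b.bind(5555);
bindFuture.addListener((ChannelFutureListener) future -> {
    if (future.isSuccess()) {
        // the callback runs once the bind operation completes
        System.out.println("server bound to " + future.channel().localAddress());
    } else {
        System.err.println("bind failed");
        future.cause().printStackTrace();
    }
});
```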
Once you understand the basic components above, take a look at a few important things.
Netty's Buffer and Zero-Copy
On Unix-like operating systems, zero-copy usually means mapping memory between kernel space and user space, for example via mmap. That is not quite what it means in Netty; Netty's zero-copy mainly comes from the following features:
- Logical combination and splitting of buffers via composite buffers and slice(), which only re-maintain indexes and avoid copying memory (see the sketch after this list)
- Using DirectBuffer to allocate off-heap memory, avoiding a copy in user space. Allocating and releasing off-heap memory is, however, more expensive and fiddly, so use it with care. For some background on off-heap memory, see the earlier posts: "Breaking the JVM yoke of Java heap memory" and "Java direct and non-direct memory performance testing"
- Channel-to-channel transfer implemented directly by wrapping a FileChannel in a FileRegion
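As a minimal sketch of the first point (my own illustration, not from the original post): wrappedBuffer builds a composite view over existing buffers and slice() carves out a sub-view, and in both cases Netty only adjusts indexes instead of copying bytes:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class ZeroCopyDemo {
    public static void main(String[] args) {
        ByteBuf header = Unpooled.copiedBuffer("HEADER", CharsetUtil.UTF_8);
        ByteBuf body   = Unpooled.copiedBuffer("BODY", CharsetUtil.UTF_8);

        // Combine: wrappedBuffer creates a composite view over the two buffers;
        // only references and indexes are kept, no bytes are copied.
        ByteBuf message = Unpooled.wrappedBuffer(header, body);
        System.out.println(message.toString(CharsetUtil.UTF_8)); // HEADERBODY

        // Split: slice() returns a view sharing the same memory, again no copy.
        ByteBuf headerView = message.slice(0, 6);
        System.out.println(headerView.toString(CharsetUtil.UTF_8)); // HEADER
    }
}
```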
In addition, Netty ships its own buffer implementation, ByteBuf, whose API is easier to use than NIO's native ByteBuffer; it supports dynamic expansion and also buffer pooling for efficient reuse of buffers.
```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class ByteBufTest {
    public static void main(String[] args) {
        // create a ByteBuf
        ByteBuf buf = Unpooled.copiedBuffer("hello".getBytes());
        System.out.println(buf);

        // read one byte
        buf.readByte();
        System.out.println(buf);

        // read another byte
        buf.readByte();
        System.out.println(buf);

        // discard the bytes that have already been read
        buf.discardReadBytes();
        System.out.println(buf);

        // clear the buffer
        buf.clear();
        System.out.println(buf);

        // write some bytes
        buf.writeBytes("123".getBytes());
        System.out.println(buf);

        // mark the reader index, read two bytes, then reset back to the mark
        buf.markReaderIndex();
        System.out.println("mark:" + buf);
        buf.readByte();
        buf.readByte();
        System.out.println("read:" + buf);
        buf.resetReaderIndex();
        System.out.println("reset:" + buf);
    }
}
```
The output is:
```
UnpooledHeapByteBuf(ridx: 0, widx: 5, cap: 5/5)
UnpooledHeapByteBuf(ridx: 1, widx: 5, cap: 5/5)
UnpooledHeapByteBuf(ridx: 2, widx: 5, cap: 5/5)
UnpooledHeapByteBuf(ridx: 0, widx: 3, cap: 5/5)
UnpooledHeapByteBuf(ridx: 0, widx: 0, cap: 5/5)
UnpooledHeapByteBuf(ridx: 0, widx: 3, cap: 5/5)
mark:UnpooledHeapByteBuf(ridx: 0, widx: 3, cap: 5/5)
read:UnpooledHeapByteBuf(ridx: 2, widx: 3, cap: 5/5)
reset:UnpooledHeapByteBuf(ridx: 0, widx: 3, cap: 5/5)
```
If you have read the earlier post on ByteBuffer, comparing the two makes it clear that by maintaining independent reader and writer indexes, Netty avoids the read/write mode switching, which is much more convenient.
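For contrast, here is a tiny sketch (my own) of the same write-then-read sequence with NIO's ByteBuffer, where flip() has to be called to switch from write mode to read mode; that is exactly the step ByteBuf's separate readerIndex/writerIndex make unnecessary:

```java
import java.nio.ByteBuffer;

public class FlipDemo {
    public static void main(String[] args) {
        ByteBuffer nioBuf = ByteBuffer.allocate(16);
        nioBuf.put("hi".getBytes());
        nioBuf.flip();                      // mandatory mode switch before reading
        while (nioBuf.hasRemaining()) {
            System.out.print((char) nioBuf.get());
        }
        // With Netty's ByteBuf the equivalent is simply writeBytes(...) followed by
        // readByte()/readBytes(...), with no flip(), as in the example above.
    }
}
```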
Using Handlers
As described earlier, handlers come in two flavours, inbound and outbound, and both are kept together in one doubly linked list (the pipeline):
When a message is received, the list is traversed from the head, and each inbound handler's corresponding method is called; when a message is sent, the list is traversed from the tail, calling the outbound handlers. So, for the example in the diagram above, receiving a message prints:
InboundA --> InboundB --> InboundC
Output message, the output is:
OutboundC --> OutboundB --> OutboundA
Here is a piece of code you can copy directly and try for yourself:
```java
package cn.xingoo.book.netty.pipeline;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

import java.net.InetSocketAddress;
import java.nio.charset.Charset;

/**
 * NOTE:
 * 1. The ChannelOutboundHandlers must be added before the last inbound handler.
 */
public class NettyNioServerHandlerTest {

    final static ByteBuf buffer = Unpooled.unreleasableBuffer(
            Unpooled.copiedBuffer("hi\r\n", Charset.forName("UTF-8")));

    public void serve(int port) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .localAddress(new InetSocketAddress(port))
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel socketChannel) throws Exception {
                     ChannelPipeline pipeline = socketChannel.pipeline();
                     pipeline.addLast("1", new InboundA());
                     pipeline.addLast("2", new OutboundA());
                     pipeline.addLast("3", new InboundB());
                     pipeline.addLast("4", new OutboundB());
                     pipeline.addLast("5", new OutboundC());
                     pipeline.addLast("6", new InboundC());
                 }
             });
            ChannelFuture f = b.bind().sync();
            f.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully().sync();
            workerGroup.shutdownGracefully().sync();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        NettyNioServerHandlerTest server = new NettyNioServerHandlerTest();
        server.serve(5555);
    }

    private static class InboundA extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
            ByteBuf buf = (ByteBuf) msg;
            System.out.println("InboundA read " + buf.toString(Charset.forName("UTF-8")));
            super.channelRead(ctx, msg);
        }
    }

    private static class InboundB extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
            ByteBuf buf = (ByteBuf) msg;
            System.out.println("InboundB read " + buf.toString(Charset.forName("UTF-8")));
            super.channelRead(ctx, msg);
            // Writing via the channel starts from the tail of the pipeline,
            // looking for outbound handlers towards the head.
            ctx.channel().writeAndFlush(buffer);
        }
    }

    private static class InboundC extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
            ByteBuf buf = (ByteBuf) msg;
            System.out.println("InboundC read " + buf.toString(Charset.forName("UTF-8")));
            super.channelRead(ctx, msg);
            // Writing via the context would look for outbound handlers starting
            // from the current handler towards the head.
            // ctx.writeAndFlush(buffer);
        }
    }

    private static class OutboundA extends ChannelOutboundHandlerAdapter {
        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
            System.out.println("OutboundA write");
            super.write(ctx, msg, promise);
        }
    }

    private static class OutboundB extends ChannelOutboundHandlerAdapter {
        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
            System.out.println("OutboundB write");
            super.write(ctx, msg, promise);
        }
    }

    private static class OutboundC extends ChannelOutboundHandlerAdapter {
        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
            System.out.println("OutboundC write");
            super.write(ctx, msg, promise);
        }
    }
}
```
Finally there is an example of TCP sticky packets; if you are interested, try it yourself. The code is not posted here, but you can find it via the GitHub link at the end.
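The sticky-packet code itself lives in the linked repository; as a rough idea of the usual approach (my own sketch, not the author's code, with the made-up class name FrameInitializer), Netty's ready-made frame decoders can reassemble the TCP byte stream into complete messages before your handlers see them, for example using a length-field prefix:

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.LengthFieldPrepender;

// Hypothetical initializer: every message is prefixed with a 4-byte length field,
// so the decoder can cut the TCP byte stream back into whole frames regardless of
// how packets were merged or split on the wire.
public class FrameInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new LengthFieldBasedFrameDecoder(65536, 0, 4, 0, 4)) // inbound: strip the length, emit one frame
          .addLast(new LengthFieldPrepender(4));                        // outbound: prepend the length
        // ... business handlers that work on complete frames go here
    }
}
```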
Reference
- "Netty Combat"
- "Netty Authoritative Guide"
- GitHub Code Links