Netty User Guide

I. Preface

1. The Problem
Nowadays we use general-purpose software or libraries to communicate with other components, for example an HTTP client to fetch information from a server, or a remote procedure call over the network. However, a general-purpose protocol and its implementation often do not scale well. The problem is that we should not use a generic HTTP server to exchange huge files, e-mail messages, near-real-time data such as financial information, or multimedia data. What is required is a protocol implementation optimized for a specific purpose. For example, we might need to implement an HTTP server optimized for communicating with AJAX clients. Another scenario is having to deal with a legacy proprietary protocol to stay compatible with an old system. The key question in these cases is how to implement the protocol quickly without sacrificing the stability and performance of the resulting application.
2. Solution
Netty is an asynchronous event-driven network application framework that can be used to rapidly develop maintainable, high-performance, extensible protocol servers and clients.
In other words, Netty is an NIO-based client-server framework with which network applications such as protocol servers and clients can be developed quickly and easily. It greatly simplifies network programming such as writing TCP and UDP servers.
II. Getting Started

1. Writing a DiscardServer
The simplest protocol is not "Hello, World!" but DISCARD. The DISCARD protocol throws away any received data without sending any response.

To implement the DISCARD protocol, all you need to do is ignore the received data. Let's start with the handler implementation; a handler processes the I/O events generated by Netty.
```java
package io.netty.example.discard;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

/**
 * Handles a server-side channel.
 */
public class DiscardServerHandler extends ChannelInboundHandlerAdapter { // (1)

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) { // (2)
        // Discard the received data silently.
        ((ByteBuf) msg).release(); // (3)
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { // (4)
        // Close the connection when an exception is raised.
        cause.printStackTrace();
        ctx.close();
    }
}
```
- `DiscardServerHandler` extends `ChannelInboundHandlerAdapter`, which is an implementation of `ChannelInboundHandler`. `ChannelInboundHandler` provides various event handler methods that you can override as needed. `ChannelInboundHandlerAdapter` supplies default implementations for all of them, so in this example it is enough to extend the adapter.
- We override the `channelRead()` method, which Netty calls whenever data is received from the client. Here the type of the received message is `ByteBuf`.
- `ByteBuf` is a reference-counted object that must be released explicitly. Keep in mind that it is the handler's responsibility to release any reference-counted object passed to it. Typically, `channelRead()` is implemented like this:

```java
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    try {
        // Do something with msg
    } finally {
        ReferenceCountUtil.release(msg);
    }
}
```
- `exceptionCaught()` is called when Netty raises an exception due to an I/O error, or when a handler throws an exception while processing events. In most cases the caught exception should be logged and the associated channel closed.
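The reference-counting contract described above can be sketched without Netty. The `RefCounted` class below is a hypothetical, minimal stand-in for `ByteBuf`'s retain/release semantics; it is not Netty's actual implementation:

```java
// Hypothetical sketch of reference-counting semantics similar to ByteBuf's;
// not Netty's actual implementation.
class RefCounted {
    private int refCnt = 1; // a freshly allocated object starts with one reference

    int refCnt() {
        return refCnt;
    }

    RefCounted retain() {
        if (refCnt == 0) {
            throw new IllegalStateException("refCnt: 0");
        }
        refCnt++;
        return this;
    }

    /** Returns true when the count drops to zero and the object is deallocated. */
    boolean release() {
        if (refCnt == 0) {
            throw new IllegalStateException("refCnt: 0");
        }
        refCnt--;
        return refCnt == 0;
    }

    public static void main(String[] args) {
        RefCounted buf = new RefCounted();
        buf.retain();                       // a second owner appears
        System.out.println(buf.release());  // false - one owner remains
        System.out.println(buf.release());  // true - last owner released it
    }
}
```

A handler that receives such an object and does not pass it further down the pipeline is its last owner and must call `release()`, which is exactly what `DiscardServerHandler` does.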
So far the discard handler is done; the next step is to write a `main()` method that starts the server with it.
```java
package io.netty.example.discard;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

/**
 * Discards any incoming data.
 */
public class DiscardServer {

    private int port;

    public DiscardServer(int port) {
        this.port = port;
    }

    public void run() throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(); // (1)
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap(); // (2)
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class) // (3)
             .childHandler(new ChannelInitializer<SocketChannel>() { // (4)
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ch.pipeline().addLast(new DiscardServerHandler());
                 }
             })
             .option(ChannelOption.SO_BACKLOG, 128)          // (5)
             .childOption(ChannelOption.SO_KEEPALIVE, true); // (6)

            // Bind and start to accept incoming connections.
            ChannelFuture f = b.bind(port).sync(); // (7)

            // Wait until the server socket is closed.
            // In this example, this does not happen, but you can do that to
            // gracefully shut down your server.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws Exception {
        int port;
        if (args.length > 0) {
            port = Integer.parseInt(args[0]);
        } else {
            port = 8080;
        }
        new DiscardServer(port).run();
    }
}
```
- `NioEventLoopGroup` is a multithreaded event loop that handles I/O operations. Netty provides various `EventLoopGroup` implementations for different kinds of transports. Since we are implementing a server-side application here, two `NioEventLoopGroup`s are used. The first, often called the "boss" group, accepts incoming connection requests. The second, called the "worker" group, handles the I/O traffic of the accepted connections once the boss has registered them with the worker.
- `ServerBootstrap` is a helper class that sets up a server.
- The `NioServerSocketChannel` class is used to instantiate the `Channel` that accepts incoming connection requests.
- The handler specified here is evaluated for each newly accepted `Channel`. `ChannelInitializer` is a special handler whose purpose is to configure a new `Channel`. In this example we add `DiscardServerHandler` to the pipeline of each new channel. As the application grows more complex, more handlers will be added to the pipeline.
- Some channel parameters can be set via the `option()` method. Note that `option()` sets parameters on the `NioServerSocketChannel` itself (the listening socket), while `childOption()` sets parameters on the accepted connections.
- All that is left is to bind the port and start the service.
2. Testing the DiscardServer

The simplest way to test it is with the telnet command, for example `telnet localhost 8080`, then type something. But since the DiscardServer throws away all received data, there is nothing to observe; to verify it is really working, modify `channelRead()` to print the data the DiscardServer receives:
```java
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ByteBuf in = (ByteBuf) msg;
    try {
        while (in.isReadable()) { // (1)
            System.out.print((char) in.readByte());
            System.out.flush();
        }
    } finally {
        ReferenceCountUtil.release(msg); // (2)
    }
}
```
- This loop is equivalent to `System.out.println(in.toString(io.netty.util.CharsetUtil.US_ASCII))`.
- Equivalent to `in.release()`.
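The equivalence claimed in the first note can be checked with plain Java (no Netty needed): reading a buffer byte by byte and casting each byte to a char produces the same text as decoding the whole array as US-ASCII, as long as the input really is ASCII.

```java
import java.nio.charset.StandardCharsets;

class AsciiDecodeDemo {
    // Byte-at-a-time decoding, mirroring the loop in channelRead() above.
    static String decodeByLoop(byte[] data) {
        StringBuilder sb = new StringBuilder();
        for (byte b : data) {
            sb.append((char) b);
        }
        return sb.toString();
    }

    // One-shot decoding, mirroring in.toString(CharsetUtil.US_ASCII).
    static String decodeAtOnce(byte[] data) {
        return new String(data, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        byte[] data = "hello\r\n".getBytes(StandardCharsets.US_ASCII);
        System.out.println(decodeByLoop(data).equals(decodeAtOnce(data))); // true for ASCII input
    }
}
```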
3. Writing an Echo Server

Usually a server responds to a request; an echo server simply sends any received data back to the client.
```java
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ctx.write(msg); // (1)
    ctx.flush();    // (2)
}
```
- A `ChannelHandlerContext` object provides various operations that let you trigger I/O events and operations. Here, calling `write(Object)` sends the received message back to the client. Note that `msg` is not released manually: Netty releases it for you after it has been written out to the wire.
- `ctx.write(Object)` does not write the message out to the wire immediately; it is buffered internally and only pushed out when `ctx.flush()` is called. The two calls can be combined into a single `ctx.writeAndFlush(msg)`.
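The write-then-flush pattern mirrors buffered I/O in the JDK. The sketch below uses `BufferedOutputStream` as an analogy for Netty's outbound buffer (it is an analogy only, not how Netty buffers internally): nothing reaches the destination until `flush()` is called.

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

class WriteFlushDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream(); // stands in for the network
        BufferedOutputStream out = new BufferedOutputStream(wire);

        out.write("hello".getBytes()); // buffered, like ctx.write(msg)
        System.out.println(wire.size()); // 0 - nothing sent yet

        out.flush(); // like ctx.flush()
        System.out.println(wire.size()); // 5 - the data reached the "wire"
    }
}
```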
4. Writing a Time Server

The TIME protocol differs from the previous examples in that the server sends a message containing a 32-bit integer, receives no requests, and closes the connection as soon as the message has been sent.

Because the server must send the message as soon as a connection is established, before any data is received, we cannot use `channelRead()` here. Instead, we override the `channelActive()` method:
```java
package io.netty.example.time;

public class TimeServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(final ChannelHandlerContext ctx) { // (1)
        final ByteBuf time = ctx.alloc().buffer(4); // (2)
        time.writeInt((int) (System.currentTimeMillis() / 1000L + 2208988800L));

        final ChannelFuture f = ctx.writeAndFlush(time); // (3)
        f.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                assert f == future;
                ctx.close();
            }
        }); // (4)
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
```
- When a connection is established, the `channelActive()` method is invoked, and in it we write a 32-bit integer representing the current time.
- To send a new message, we first need to allocate a buffer to hold it: `ctx.alloc()` returns the current `ByteBufAllocator`, which is used to allocate the `ByteBuf`.
- Unlike Java NIO buffers, a Netty `ByteBuf` does not need `flip()` before sending, because it keeps two separate indices, one for reads and one for writes. The writer index advances as data is written while the reader index stays put, so the two indices always mark the beginning and end of the unread data.
- Also note that `ctx.write()` returns a `ChannelFuture`, which represents an I/O operation that may not have been performed yet. In Netty all operations are asynchronous, so any requested operation might not have been carried out by the time the call returns. For example, the following code might close the connection before the message has been sent:
```java
Channel ch = ...;
ch.writeAndFlush(message);
ch.close();
```
Therefore `close()` should be called only after the `ChannelFuture` returned by the write completes; the future notifies its listeners when the write operation is done. Note that `close()` also might not close the connection immediately, and it too returns a `ChannelFuture`.
In this example, an anonymous inner class is added as a listener to close the connection. You can also use a predefined listener:
f.addListener(ChannelFutureListener.CLOSE);
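The listener pattern above resembles the JDK's `CompletableFuture` callbacks. The sketch below is a plain-Java analogy, not Netty code: the "close" action runs only once the asynchronous operation completes, regardless of when the listener was attached.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

class FutureListenerDemo {
    public static void main(String[] args) {
        AtomicBoolean closed = new AtomicBoolean(false);

        // Stands in for the future returned by ctx.writeAndFlush(time);
        // here the "write" has already completed.
        CompletableFuture<Void> writeFuture = CompletableFuture.completedFuture(null);

        // Analogous to f.addListener(ChannelFutureListener.CLOSE):
        // the callback fires when the operation completes, not when it is requested.
        writeFuture.whenComplete((result, error) -> closed.set(true));

        System.out.println(closed.get()); // true - the future had already completed
    }
}
```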
5. Time Client

Unlike DISCARD and ECHO, the TIME protocol needs a client, because a human cannot readily translate a 32-bit binary integer into a calendar date. The biggest difference between a client and a server in Netty is that different `Bootstrap` and `Channel` implementations are used.
```java
package io.netty.example.time;

public class TimeClient {
    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);
        EventLoopGroup workerGroup = new NioEventLoopGroup();

        try {
            Bootstrap b = new Bootstrap(); // (1)
            b.group(workerGroup); // (2)
            b.channel(NioSocketChannel.class); // (3)
            b.option(ChannelOption.SO_KEEPALIVE, true); // (4)
            b.handler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new TimeClientHandler());
                }
            });

            // Start the client.
            ChannelFuture f = b.connect(host, port).sync(); // (5)

            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
}
```
- `Bootstrap` is very similar to `ServerBootstrap`, but it is for non-server channels such as a client.
- Only one `EventLoopGroup` is specified; a client does not need a boss group.
- `NioSocketChannel` is used instead of `NioServerSocketChannel`.
- `childOption()` is not used, because the client-side `SocketChannel` has no parent.
- The `connect()` method is called instead of `bind()`.
In `TimeClientHandler`, the received 32-bit integer is translated into a human-readable date:
```java
package io.netty.example.time;

import java.util.Date;

public class TimeClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf m = (ByteBuf) msg; // (1)
        try {
            long currentTimeMillis = (m.readUnsignedInt() - 2208988800L) * 1000L;
            System.out.println(new Date(currentTimeMillis));
            ctx.close();
        } finally {
            m.release();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
```
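The constant 2208988800L is the number of seconds between the TIME protocol's epoch (1900-01-01) and the Unix epoch (1970-01-01). The round trip the server and client perform can be checked with the JDK alone; here `ByteBuffer` stands in for `ByteBuf` (both are big-endian by default), and `getInt() & 0xFFFFFFFFL` mimics `readUnsignedInt()`.

```java
import java.nio.ByteBuffer;
import java.util.Date;

class TimeProtocolMath {
    static final long EPOCH_DELTA = 2208988800L; // seconds from 1900-01-01 to 1970-01-01

    // Server side: encode "now" as a 32-bit big-endian count of seconds since 1900.
    static byte[] encode(long currentTimeMillis) {
        ByteBuffer buf = ByteBuffer.allocate(4); // big-endian by default, like ByteBuf
        buf.putInt((int) (currentTimeMillis / 1000L + EPOCH_DELTA));
        return buf.array();
    }

    // Client side: decode back to milliseconds since the Unix epoch,
    // mirroring (m.readUnsignedInt() - 2208988800L) * 1000L.
    static long decode(byte[] data) {
        long secondsSince1900 = ByteBuffer.wrap(data).getInt() & 0xFFFFFFFFL; // unsigned read
        return (secondsSince1900 - EPOCH_DELTA) * 1000L;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long roundTrip = decode(encode(now));
        // Sub-second precision is lost, so the two agree to within one second.
        System.out.println(new Date(roundTrip));
    }
}
```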
6. Dealing with a Stream-based Transport

In a stream-based transport such as TCP/IP, received data is stored in a socket receive buffer. That buffer is not a queue of packets but a queue of bytes: even if you send two messages as two separate packets, the operating system treats them as a single stream of bytes. Therefore there is no guarantee that what you read is exactly what the remote peer wrote; the data may arrive fragmented or merged.
In the TIME protocol example, `m.readUnsignedInt()` needs four readable bytes in the buffer when it is called; if fewer than four bytes have arrived so far, it throws an exception.
The workaround is to add another `ChannelHandler` to the `ChannelPipeline`, one that deals specifically with this fragmentation problem:
```java
package io.netty.example.time;

public class TimeDecoder extends ByteToMessageDecoder { // (1)
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) { // (2)
        if (in.readableBytes() < 4) {
            return; // (3)
        }
        out.add(in.readBytes(4)); // (4)
    }
}
```
- `ByteToMessageDecoder` is an implementation of `ChannelInboundHandler` that makes it easy to deal with fragmentation.
- `ByteToMessageDecoder` maintains an internal cumulative buffer and calls the `decode()` method whenever new data arrives.
- When there is not enough data in the cumulative buffer, `decode()` can simply add nothing to `out`; `ByteToMessageDecoder` will call `decode()` again when more data has arrived.
- If `decode()` adds an object to `out`, the decoder has successfully produced a message, and `ByteToMessageDecoder` discards the part of the cumulative buffer that has been read.
Finally, add the `TimeDecoder` to the `ChannelPipeline`:
```java
b.handler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ch.pipeline().addLast(new TimeDecoder(), new TimeClientHandler());
    }
});
```
Another, even simpler approach is to use `ReplayingDecoder`:

```java
public class TimeDecoder extends ReplayingDecoder<Void> {
    @Override
    protected void decode(
            ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        out.add(in.readBytes(4));
    }
}
```
- When the `in.readBytes(4)` call fails because not enough data has arrived, `ReplayingDecoder` catches the resulting (internal) exception and replays `decode()` once more data has been received.
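The replay trick can be illustrated in plain Java: read optimistically, and when the buffer underflows, catch the error, rewind, and try again after more data arrives. This is a hypothetical sketch of the idea only; `ReplayingDecoder`'s actual machinery uses a signal exception and checkpoints.

```java
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

class ReplayDemo {
    /**
     * Tries to read a 4-byte frame optimistically. On underflow, rewinds the
     * buffer and returns null, so the caller can "replay" the read later.
     */
    static byte[] tryReadFrame(ByteBuffer in) {
        in.mark();
        try {
            byte[] frame = new byte[4];
            in.get(frame); // throws BufferUnderflowException if fewer than 4 bytes remain
            return frame;
        } catch (BufferUnderflowException e) {
            in.reset(); // rewind, as ReplayingDecoder does before replaying decode()
            return null;
        }
    }

    public static void main(String[] args) {
        ByteBuffer in = ByteBuffer.allocate(8);
        in.put(new byte[]{1, 2});
        in.flip();
        System.out.println(tryReadFrame(in) == null); // true - only 2 bytes so far
    }
}
```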
7. Using a POJO instead of ByteBuf

The time examples so far have used `ByteBuf` directly as the protocol's data structure. By using a POJO in the handlers instead, the code that extracts information from the `ByteBuf` can be separated out, which makes the handlers more maintainable as the protocol grows.
First, define a `UnixTime` class:
```java
package io.netty.example.time;

import java.util.Date;

public class UnixTime {

    private final long value;

    public UnixTime() {
        this(System.currentTimeMillis() / 1000L + 2208988800L);
    }

    public UnixTime(long value) {
        this.value = value;
    }

    public long value() {
        return value;
    }

    @Override
    public String toString() {
        return new Date((value() - 2208988800L) * 1000L).toString();
    }
}
```
`TimeDecoder` is then changed to produce a `UnixTime` instead of a `ByteBuf`:
```java
@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
    if (in.readableBytes() < 4) {
        return;
    }
    out.add(new UnixTime(in.readUnsignedInt()));
}
```
With the updated decoder, `TimeClientHandler` no longer needs to deal with `ByteBuf` at all; it receives a ready-made `UnixTime`.
On the server side, first change the `TimeServerHandler`:
```java
@Override
public void channelActive(ChannelHandlerContext ctx) {
    ChannelFuture f = ctx.writeAndFlush(new UnixTime());
    f.addListener(ChannelFutureListener.CLOSE);
}
```
We also need an encoder that converts a `UnixTime` back into a `ByteBuf` for network transmission:
```java
public class TimeEncoder extends MessageToByteEncoder<UnixTime> {
    @Override
    protected void encode(ChannelHandlerContext ctx, UnixTime msg, ByteBuf out) {
        out.writeInt((int) msg.value());
    }
}
```