Chapter Content
- Transports
- NIO (non-blocking I/O), OIO (old blocking I/O), Local (in-VM), Embedded
- Use cases
- APIs
One of the most important jobs of a network application is transferring data. The details of the transfer process vary depending on the transport used, but the underlying idea is always the same: data is transmitted as bytes.
Java abstracts the data-transfer process for network programs, so we do not need to pay attention to the underlying OS interfaces; we only need the Java APIs, or a network framework such as Netty, to move the data.
Both the data sent and the data received are bytes. Nothing more, nothing less.
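As a tiny illustration of that point (my own sketch, not from the book), a message like the "hi!" string used later in this chapter only ever crosses the network as encoded bytes:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BytesDemo {
    public static void main(String[] args) {
        // what actually travels over the network: bytes, nothing more
        byte[] wire = "hi!\r\n".getBytes(StandardCharsets.UTF_8);
        System.out.println(Arrays.toString(wire)); // [104, 105, 33, 13, 10]
        // the receiver turns the bytes back into whatever form it needs
        String decoded = new String(wire, StandardCharsets.UTF_8);
        System.out.println(decoded.trim()); // hi!
    }
}
```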
If you have worked with Java's networking interfaces before, you may have encountered a situation where you wanted to switch from blocking to non-blocking I/O. Such a switch is difficult because the APIs for blocking I/O and non-blocking I/O are very different. Netty provides a unified transport interface on top of these implementations, which makes such a switch much easier.
With this interface our code stays as generic as possible, without depending on the APIs of any particular implementation, and when we want to change the transport we do not need to spend much effort and time refactoring the code.
This chapter describes this unified API and how to use it, comparing the Netty API with the plain Java APIs to show you why Netty is easier to use.
It also provides some high-quality example code for using Netty optimally. No other network framework or network programming experience is required; if you have any, it only helps you understand Netty, but it is not necessary. Let's take a look at how transports work.
4.1 Case Study: Switching transports
To give you an idea of what a transport does, I will start with a simple application. The application does nothing except accept a client connection, send the string "hi!" to the client, and disconnect when it is finished.
I will not explain the implementation in detail; it is just an example.
4.1.1 Using Java I/O and NIO
First we will implement this example without Netty. The following code is a sample implementation using blocking I/O:
```java
package netty.in.action;

import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.Charset;

/**
 * Blocking networking without Netty
 */
public class PlainOioServer {
    public void server(int port) throws Exception {
        // bind server to port
        final ServerSocket socket = new ServerSocket(port);
        try {
            while (true) {
                // accept connection
                final Socket clientSocket = socket.accept();
                System.out.println("Accepted connection from " + clientSocket);
                // create new thread to handle connection
                new Thread(new Runnable() {
                    @Override
                    public void run() {
                        OutputStream out;
                        try {
                            out = clientSocket.getOutputStream();
                            // write message to connected client
                            out.write("hi!\r\n".getBytes(Charset.forName("UTF-8")));
                            out.flush();
                            // close connection once message written and flushed
                            clientSocket.close();
                        } catch (IOException e) {
                            try {
                                clientSocket.close();
                            } catch (IOException e1) {
                                e1.printStackTrace();
                            }
                        }
                    }
                }).start(); // start thread to begin handling
            }
        } catch (Exception e) {
            e.printStackTrace();
            socket.close();
        }
    }
}
```
The method above is very concise, but this blocking approach has serious problems when the number of connections is large: client connections time out and server responses are severely delayed. To handle all connections concurrently we can switch to asynchronous networking, but the problem is that the NIO and OIO APIs are completely different, so refactoring a network application developed with OIO to use NIO is almost a second development effort.
The following code is the same example implemented with Java NIO:
```java
package netty.in.action;

import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

/**
 * Asynchronous networking without Netty
 */
public class PlainNioServer {
    public void server(int port) throws Exception {
        System.out.println("Listening for connections on port " + port);
        // open Selector that handles channels
        Selector selector = Selector.open();
        // open ServerSocketChannel
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        // get ServerSocket
        ServerSocket serverSocket = serverChannel.socket();
        // bind server to port
        serverSocket.bind(new InetSocketAddress(port));
        // set to non-blocking
        serverChannel.configureBlocking(false);
        // register ServerSocket with Selector; interested in newly accepted clients
        serverChannel.register(selector, SelectionKey.OP_ACCEPT);
        final ByteBuffer msg = ByteBuffer.wrap("hi!\r\n".getBytes());
        while (true) {
            // wait for new events to process; this blocks until something happens
            int n = selector.select();
            if (n > 0) {
                // obtain all SelectionKey instances that received events
                Iterator<SelectionKey> iter = selector.selectedKeys().iterator();
                while (iter.hasNext()) {
                    SelectionKey key = iter.next();
                    iter.remove();
                    try {
                        // check if event means a new client is ready to be accepted
                        if (key.isAcceptable()) {
                            ServerSocketChannel server = (ServerSocketChannel) key.channel();
                            SocketChannel client = server.accept();
                            System.out.println("Accepted connection from " + client);
                            client.configureBlocking(false);
                            // accept client and register it with the Selector
                            client.register(selector, SelectionKey.OP_WRITE, msg.duplicate());
                        }
                        // check if event means the socket is ready to write data
                        if (key.isWritable()) {
                            SocketChannel client = (SocketChannel) key.channel();
                            ByteBuffer buff = (ByteBuffer) key.attachment();
                            // write data to connected client
                            while (buff.hasRemaining()) {
                                if (client.write(buff) == 0) {
                                    break;
                                }
                            }
                            client.close(); // close client
                        }
                    } catch (Exception e) {
                        key.cancel();
                        key.channel().close();
                    }
                }
            }
        }
    }
}
```
As you can see, even though both versions do the same thing, the code is completely different. Next we will use Netty to implement the same functionality.
4.1.2 Using OIO and NIO in Netty
The following code is a blocking (OIO) version written with Netty as the network framework:
```java
package netty.in.action;

import java.net.InetSocketAddress;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.oio.OioEventLoopGroup;
import io.netty.channel.socket.oio.OioServerSocketChannel;
import io.netty.util.CharsetUtil;

public class NettyOioServer {
    public void server(int port) throws Exception {
        final ByteBuf buf = Unpooled.unreleasableBuffer(
                Unpooled.copiedBuffer("hi!\r\n", CharsetUtil.UTF_8));
        // event loop group for the blocking (OIO) transport
        EventLoopGroup group = new OioEventLoopGroup();
        try {
            // bootstrap the server configuration
            ServerBootstrap b = new ServerBootstrap();
            // use the blocking OIO transport
            b.group(group)
             .channel(OioServerSocketChannel.class)
             .localAddress(new InetSocketAddress(port))
             // specify a ChannelInitializer to set up the handlers
             .childHandler(new ChannelInitializer<Channel>() {
                 @Override
                 protected void initChannel(Channel ch) throws Exception {
                     // add an inbound handler to the ChannelPipeline
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelActive(ChannelHandlerContext ctx) throws Exception {
                             // on connect, write the message to the client, then close the connection
                             ctx.writeAndFlush(buf.duplicate())
                                .addListener(ChannelFutureListener.CLOSE);
                         }
                     });
                 }
             });
            // bind the server and accept connections
            ChannelFuture f = b.bind().sync();
            f.channel().closeFuture().sync();
        } finally {
            // release all resources
            group.shutdownGracefully();
        }
    }
}
```
The code above implements the same functionality, but its structure is much clearer; this is only one of Netty's advantages.
4.1.3 Asynchronous support in Netty
The following code implements the example asynchronously with Netty. It is easy to see that with Netty, switching from OIO to NIO is only a matter of changing a few classes.
```java
package netty.in.action;

import java.net.InetSocketAddress;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.util.CharsetUtil;

public class NettyNioServer {
    public void server(int port) throws Exception {
        final ByteBuf buf = Unpooled.unreleasableBuffer(
                Unpooled.copiedBuffer("hi!\r\n", CharsetUtil.UTF_8));
        // event loop group for the non-blocking (NIO) transport
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            // bootstrap the server configuration
            ServerBootstrap b = new ServerBootstrap();
            // use the asynchronous NIO transport
            b.group(group)
             .channel(NioServerSocketChannel.class)
             .localAddress(new InetSocketAddress(port))
             // specify a ChannelInitializer to set up the handlers
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) throws Exception {
                     // add an inbound handler to the ChannelPipeline
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelActive(ChannelHandlerContext ctx) throws Exception {
                             // on connect, write the message to the client, then close the connection
                             ctx.writeAndFlush(buf.duplicate())
                                .addListener(ChannelFutureListener.CLOSE);
                         }
                     });
                 }
             });
            // bind the server and accept connections
            ChannelFuture f = b.bind().sync();
            f.channel().closeFuture().sync();
        } finally {
            // release all resources
            group.shutdownGracefully();
        }
    }
}
```
Because Netty uses the same API for every transport, the code does not care which implementation carries the data. Netty operates transports through the Channel interface, the ChannelPipeline, and ChannelHandlers.
4.2 Transport API
The core of the transport API is the Channel interface, which is used for all outbound operations. The class hierarchy of the Channel interface is shown below.
As you can see, each Channel is assigned a ChannelPipeline and a ChannelConfig.
ChannelConfig is responsible for setting and storing the channel's configuration, and it allows the configuration to be updated at runtime.
Transports often have transport-specific configuration settings that apply only to that transport and to no other implementation. The ChannelPipeline holds the ChannelHandler instances in use; these handlers process the inbound and outbound data that passes through the Channel. ChannelHandler implementations allow you to transform data and react to state changes; a later chapter of this book covers ChannelHandler in detail, as it is a key Netty concept.
Here are some of the things a ChannelHandler can do:
- Transform data from one format to another
- Be notified of exceptions
- Be notified when a Channel becomes active or inactive
- Be notified when a Channel is registered with or deregistered from an EventLoop
- Notify users of specific events
These ChannelHandler instances are added to the ChannelPipeline and executed in order within it.
The pipeline is similar to a chain; readers who have used servlet filters may find it easy to understand.
ChannelPipeline implements the intercepting filter pattern: we chain different ChannelHandlers together to intercept and process the data and events flowing through the pipeline. You can think of a ChannelPipeline as a Unix pipe that allows different commands to be chained together, with each ChannelHandler playing the role of a command. You can also add ChannelHandler instances to a ChannelPipeline or remove them at runtime, which helps us build highly flexible Netty programs. In addition, by accessing the assigned ChannelPipeline and ChannelConfig, you can operate on the Channel itself.
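As a sketch of that runtime flexibility (my own example, not from the book; the handler and the name "logger" are invented for illustration), a ChannelPipeline lets you add and remove named handlers while the program runs. An EmbeddedChannel, covered later in this chapter, is used here so that no real network is needed:

```java
package netty.in.action;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.embedded.EmbeddedChannel;

public class PipelineModificationExample {
    public static void main(String[] args) {
        // EmbeddedChannel provides a real ChannelPipeline without a network
        EmbeddedChannel channel = new EmbeddedChannel(new ChannelInboundHandlerAdapter());
        ChannelPipeline pipeline = channel.pipeline();
        // add a named handler at runtime
        pipeline.addLast("logger", new ChannelInboundHandlerAdapter() {
            @Override
            public void channelRead(ChannelHandlerContext ctx, Object msg) {
                System.out.println("saw: " + msg);
                ctx.fireChannelRead(msg); // pass the message to the next handler
            }
        });
        System.out.println(pipeline.names().contains("logger")); // true
        // and remove it again by name while the channel is live
        pipeline.remove("logger");
        System.out.println(pipeline.names().contains("logger")); // false
        channel.finish();
    }
}
```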
The Channel interface provides many methods, for example:
- eventLoop(): returns the EventLoop assigned to the Channel
- pipeline(): returns the ChannelPipeline assigned to the Channel
- isActive(): returns whether the Channel is active; active means it is connected to its remote peer
- localAddress(): returns the local SocketAddress the Channel is bound to
- remoteAddress(): returns the remote SocketAddress the Channel is connected to
- write(): writes data to the remote peer; the data is passed through the ChannelPipeline
These methods will become more and more familiar; for now, just remember that all our operations go through this same interface. Netty's flexibility lets you switch to a different transport implementation without large refactoring.
To write data to a remote client, call Channel.write(), as in the following code:
```java
Channel channel = ...; // obtain a channel
// create ByteBuf that holds the data to write
ByteBuf buf = Unpooled.copiedBuffer("your data", CharsetUtil.UTF_8);
// write the data
ChannelFuture cf = channel.write(buf);
// add ChannelFutureListener to get notified after the write completes
cf.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        if (future.isSuccess()) {
            // write operation completed without error
            System.out.println("Write successful");
        } else {
            // write operation completed with an error
            System.err.println("Write error");
            future.cause().printStackTrace();
        }
    }
});
```
The Channel is thread-safe, so it can be operated on by several different threads; all of its methods are safe to call in a multithreaded environment. Because of this, we can store a reference to a Channel and use it later to write data to the remote peer, even from multiple threads. The following code is a simple multithreaded example:
```java
final Channel channel = ...; // obtain a channel
// create ByteBuf that holds the data to write
final ByteBuf buf = Unpooled.copiedBuffer("your data", CharsetUtil.UTF_8);
// create Runnable which writes data to the channel
Runnable writer = new Runnable() {
    @Override
    public void run() {
        channel.write(buf.duplicate());
    }
};
// obtain reference to an Executor which uses threads to execute tasks
Executor executor = Executors.newCachedThreadPool();
// write in one thread: hand the write task to the executor for execution
executor.execute(writer);
// write in another thread: hand over another write task
executor.execute(writer);
```
In addition, Netty guarantees that messages are written out in the same order in which the write methods were called.
You can refer to the Netty API documentation to learn about all the available methods.
4.3 Transports included with Netty
Netty comes with several ready-made transport implementations. It does not cover every possible transport protocol, but the built-in ones are enough for most purposes.
Which transport a Netty application uses depends on the underlying protocol; in this section we will learn about the transports Netty provides.
Netty offers the following transports:
- NIO, io.netty.channel.socket.nio: based on the java.nio.channels package; uses selectors as the underlying mechanism.
- OIO, io.netty.channel.socket.oio: based on the java.net package; uses blocking streams.
- Local, io.netty.channel.local: used for local communication inside the virtual machine.
- Embedded, io.netty.channel.embedded: a transport that lets you exercise ChannelHandlers without any real network transport; very useful for testing ChannelHandler implementations.
4.3.1 NIO - non-blocking I/O
The NIO transport is the most frequently used. Available since Java 1.4, it provides a fully asynchronous implementation of all I/O by using selectors. With NIO we register a channel with a selector and are notified when the channel's state changes. The possible state changes are:
- A new Channel was accepted and is ready
- A Channel connection was completed
- A Channel has data ready for reading
- A Channel is available for writing data
Once a state change has been handled, the interest set must be reset, and a thread checks again whether any channels are ready; if so, their events are processed. A registration may be interested in one kind of event at a time while ignoring the others.
The operations a selector can be notified about are defined as constants in SelectionKey:
- OP_ACCEPT: get notified when a new connection can be accepted
- OP_CONNECT: get notified when a connection attempt completes
- OP_READ: get notified when data is ready to be read
- OP_WRITE: get notified when more data can be written to the channel
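To make the interest-operations model concrete, here is a small sketch using only the JDK (my own example, not from the book). The interest set is a bit mask built from these constants, and it can be inspected and changed at runtime:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class InterestOpsExample {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0)); // bind to an ephemeral port
        server.configureBlocking(false);       // a channel must be non-blocking to register
        // register interest in accept events only
        SelectionKey key = server.register(selector, SelectionKey.OP_ACCEPT);
        System.out.println(key.interestOps() == SelectionKey.OP_ACCEPT); // true
        // the interest set is a bit mask, so individual operations can be tested
        System.out.println((key.interestOps() & SelectionKey.OP_READ) != 0); // false
        server.close();
        selector.close();
    }
}
```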
Netty's NIO transport receives and sends data based on this model, but wraps it in its own interface so that the internal implementation is completely hidden from the user.
As mentioned before, Netty hides the implementation details and exposes an abstracted API. The processing flow diagram follows:
NIO adds some latency to processing; when the number of connections is small the latency is usually in the millisecond range, but its throughput is still higher than the OIO model's. Netty's NIO transport supports "zero-copy": instead of first copying data into the JVM heap for processing, the application transfers content directly from kernel space, which moves data from the file system faster and more efficiently. Next we discuss the OIO transport, which is blocking.
4.3.2 OIO - old blocking I/O
OIO is the original socket interface provided by Java. At first Java offered only blocking sockets, and blocking can lead to poor program performance. The processing flow diagram for OIO follows; if you want to know the details, you can refer to other material.
4.3.3 Local - in-VM transport
Netty includes a local transport that uses the same API for communication within a virtual machine; transfers are completely asynchronous. Each Channel uses a unique SocketAddress; a client connects by using that address, and the address stays registered for as long as the server runs. Once the channel is closed, it is deregistered automatically and clients can no longer use it. Connecting to a local-transport server behaves almost identically to the other transport implementations. The important point to note is that it can only be used when server and client are in the same JVM: the local transport does not bind to any real socket and only provides communication inside the JVM.

4.3.4 Embedded transport
Netty also contains an embedded transport. Compared with the other transports described above, is it a real transport at all? And if not, what can we do with it? The embedded transport makes it easier to exercise interactions between different ChannelHandlers; it also makes it easy to embed ChannelHandler instances into other handlers and use them like helper classes. It is generally used to test specific ChannelHandler implementations, and it can also be used to reuse some ChannelHandlers inside another ChannelHandler in order to extend it. For this purpose it comes with a concrete Channel implementation: EmbeddedChannel.

4.4 When to use each transport?
Without repeating too much, take a look at the following summary:
- OIO: use with a low number of connections and when blocking, low-latency I/O is acceptable
- NIO: use with a high number of connections
- Local: use for communication within the same JVM
- Embedded: use for testing ChannelHandlers
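To make the embedded transport from section 4.3.4 concrete, here is a minimal sketch of testing a ChannelHandler with EmbeddedChannel (my own example, not from the book; the upper-casing handler is invented for illustration):

```java
package netty.in.action;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.embedded.EmbeddedChannel;

public class EmbeddedChannelExample {
    // illustrative handler: upper-cases every inbound String
    static class UpperCaseHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ctx.fireChannelRead(((String) msg).toUpperCase());
        }
    }

    public static void main(String[] args) {
        // no real network: messages are written directly into the pipeline
        EmbeddedChannel channel = new EmbeddedChannel(new UpperCaseHandler());
        channel.writeInbound("hi!");
        // whatever reaches the end of the pipeline can be read back out
        String result = channel.readInbound();
        System.out.println(result); // HI!
        channel.finish();
    }
}
```

writeInbound() pushes a message into the pipeline as if it had arrived from the network, and readInbound() retrieves whatever reached the end of the pipeline, so the handler can be verified without opening a socket.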
Netty in Action (Chinese edition), Chapter 4: Transports