Netty Learning Notes (i)

Java I/O falls into three models: synchronous blocking BIO, synchronous non-blocking NIO, and asynchronous non-blocking AIO.

1. BIO
Before JDK 1.4, network connections were built in the BIO mode: the server starts a ServerSocket, and the client starts a Socket to communicate with it. By default, the server creates a thread for each request. Before sending a request, the client asks whether the server has a thread available; if not, it either waits or is rejected, and if so, the client thread still blocks until the request completes before it can continue.
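To make the thread-per-connection idea concrete, here is a minimal BIO echo-server sketch (my own illustration, not code from these notes; the class name BioServer and port 8080 are arbitrary). Every accept() hands the socket to a brand-new thread, which is exactly the per-request thread cost described above.

// Minimal BIO sketch: one blocking thread per connection (illustrative only)
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class BioServer {
    public static void main(String[] args) throws Exception {
        ServerSocket serverSocket = new ServerSocket(8080);
        while (true) {
            // accept() blocks until a client connects
            final Socket socket = serverSocket.accept();
            // a new thread is created for every connection
            new Thread(new Runnable() {
                public void run() {
                    try {
                        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                        String line;
                        // readLine() blocks this thread until data arrives
                        while ((line = in.readLine()) != null) {
                            out.println("echo: " + line);
                        }
                        socket.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }).start();
        }
    }
}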
2. NIO
NIO itself is based on event-driven thinking and mainly addresses BIO's problem with high concurrency. In a network application that uses synchronous I/O, if you want to handle multiple client requests at the same time, or the client wants to communicate with multiple servers at the same time, you must use multithreading: each client request is assigned its own thread. This meets the requirement but creates another problem. Every thread that is created is allocated a certain amount of memory (also called working memory), and the operating system limits the total number of threads; if too many client requests arrive, the server program may be overwhelmed into rejecting requests, or even collapse. NIO is based on the Reactor pattern: when a socket has a stream that is readable or writable, the operating system notifies the application, which then reads the stream into a buffer or writes it out. In other words, it is no longer one thread per connection but one thread per valid request; when a connection has no data, no worker thread is tied up handling it.
One of the more important differences between BIO and NIO is that with BIO we tend to introduce multiple threads, one per connection, while NIO uses a single thread or only a small number of threads that all connections share. The key point of NIO is that a newly created connection does not need its own thread; the connection is registered on a multiplexer, so a single thread can serve all connections. When that thread polls the multiplexer and finds a connection with a pending request, it dispatches a thread to handle it, i.e. one thread per request rather than one thread per connection.
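As a rough illustration of the multiplexer idea (a sketch I am adding, not code from the article; the class name NioServer and port 8080 are arbitrary), plain JDK NIO registers every channel on a Selector, and a single thread only touches connections that are actually ready:

// Minimal JDK NIO sketch: one thread, all connections on one Selector (illustrative only)
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.bind(new InetSocketAddress(8080));
        serverChannel.configureBlocking(false);
        serverChannel.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                      // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // new connection: register it, no dedicated thread is created
                    SocketChannel client = serverChannel.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // only channels that actually have readable data reach this point
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(256);
                    int n = client.read(buffer);
                    if (n == -1) {
                        client.close();
                    } else {
                        buffer.flip();
                        client.write(buffer);       // echo the data back
                    }
                }
            }
        }
    }
}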
In NIO processing, when a request comes in, a thread is opened to handle it; that thread may then have to wait for back-end resources (a JDBC connection, etc.), during which it is effectively blocked, so under high concurrency the same problem as BIO reappears. With HTTP/1.1 came HTTP long connections: unless the connection times out or a header explicitly asks for it to be closed, the link stays open, which lets NIO processing evolve further. A resource pool or queue can be placed in front of the back-end resources; when a request arrives, the handling thread simply puts the request and its data into the pool or queue and returns, while the "scene" (which connection issued which request, and so on) is kept in a global place. The thread can then go on accepting other requests, and the back end only has to work through the queue, so request handling and back-end processing become asynchronous. When the back end finishes, it looks up the saved scene and writes the response, achieving asynchronous processing.
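A very rough sketch of that hand-off pattern (entirely illustrative; the class, field, and method names are invented and no real JDBC call is made): the I/O thread enqueues the request together with its "scene" and returns at once, while a backend worker drains the queue and responds later.

// Illustrative hand-off sketch, assuming invented names (BackendDispatcher, Task, submit)
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BackendDispatcher {

    // a request together with the context ("scene") needed to respond later
    static class Task {
        final Object channelContext;   // placeholder for the connection scene
        final String requestData;
        Task(Object channelContext, String requestData) {
            this.channelContext = channelContext;
            this.requestData = requestData;
        }
    }

    private final BlockingQueue<Task> queue = new LinkedBlockingQueue<Task>();

    // called from the I/O thread: enqueue and return immediately, never block on JDBC etc.
    public void submit(Object channelContext, String requestData) {
        queue.offer(new Task(channelContext, requestData));
    }

    // a backend worker thread drains the queue and responds asynchronously
    public void startWorker() {
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        Task task = queue.take();                 // blocks only the backend worker
                        String result = "processed: " + task.requestData;  // stand-in for real back-end work
                        // look up the saved scene and write the response back on it
                        System.out.println(result + " -> " + task.channelContext);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }).start();
    }
}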
3. AIO
Unlike NIO, for read and write operations you only need to call the API's read or write method directly; both methods are asynchronous. For a read operation, when there is a stream to read, the operating system fills the buffer passed to the read method and then notifies the application; for a write operation, the operating system notifies the application once it has finished writing the stream passed to the write method. In other words, read/write are asynchronous and invoke a callback function when they complete. In JDK 1.7 this part of the API is called NIO.2.
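A minimal JDK 1.7 NIO.2 sketch (again my own illustration, not from the article; AioServer and port 8080 are arbitrary) shows the callback style: read() returns immediately, and the CompletionHandler fires once the operating system has already copied the data into the buffer.

// Minimal AIO (NIO.2) echo sketch, illustrative only
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

public class AioServer {
    public static void main(String[] args) throws Exception {
        final AsynchronousServerSocketChannel server =
                AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(8080));

        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override
            public void completed(AsynchronousSocketChannel client, Void attachment) {
                server.accept(null, this);               // accept the next connection
                final ByteBuffer buffer = ByteBuffer.allocate(256);
                // asynchronous read: the callback runs when the data is already in the buffer
                client.read(buffer, client, new CompletionHandler<Integer, AsynchronousSocketChannel>() {
                    @Override
                    public void completed(Integer bytesRead, AsynchronousSocketChannel channel) {
                        buffer.flip();
                        channel.write(buffer);           // echo back what was read
                    }
                    @Override
                    public void failed(Throwable exc, AsynchronousSocketChannel channel) {
                        exc.printStackTrace();
                    }
                });
            }
            @Override
            public void failed(Throwable exc, Void attachment) {
                exc.printStackTrace();
            }
        });

        Thread.sleep(Long.MAX_VALUE);                    // keep the main thread alive
    }
}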

Because the raw NIO API is complicated and inconvenient to use, Netty appeared. It encapsulates many of NIO's APIs and is much more convenient to work with.
JDK 1.7, netty-all-4.0.25.Final.jar. Server-side code:

package com.swk.common.netty;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

/**
 * Created by Fuyuwei on 2017/9/15.
 */
public class NettyServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext channelHandlerContext, Object o) throws Exception {
        System.out.println("server channelRead...");
        System.out.println(channelHandlerContext.channel().remoteAddress() + "--server:" + o.toString());
        channelHandlerContext.write("server write:" + o);
        channelHandlerContext.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext channelHandlerContext, Throwable throwable) throws Exception {
        throwable.printStackTrace();
        channelHandlerContext.close();
    }
}
package com.swk.common.netty;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

/**
 * Created by Fuyuwei on 2017/9/15.
 */
public class NettyServer {

    private int port;

    public NettyServer(int port) {
        this.port = port;
    }

    public void start() {
        // boss group accepts connections, worker group handles their I/O
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap serverBootstrap = new ServerBootstrap()
                    .group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .localAddress(port)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel socketChannel) throws Exception {
                            socketChannel.pipeline().addLast("decoder", new StringDecoder());
                            socketChannel.pipeline().addLast("encoder", new StringEncoder());
                            socketChannel.pipeline().addLast(new NettyServerHandler());
                        }
                    })
                    .option(ChannelOption.SO_BACKLOG, 128)
                    .childOption(ChannelOption.SO_KEEPALIVE, true);
            // bind the port and start accepting incoming connections
            ChannelFuture future = serverBootstrap.bind(port).sync();
            System.out.println("server start listen at: " + port);
            future.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) {
        int port;
        if (args.length > 0) {
            port = Integer.parseInt(args[0]);
        } else {
            port = 8080;
        }
        new NettyServer(port).start();
    }
}
Client-side code:
package com.swk.common.netty;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

/**
 * Created by Fuyuwei on 2017/9/15.
 */
public class NettyClientHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("NettyClientHandler active...");
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("NettyClientHandler read message: " + msg);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}

package com.swk.common.netty;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

/**
 * Created by Fuyuwei on 2017/9/15.
 */
public class NettyClient {

    static final String HOST = "127.0.0.1";
    static final int PORT = 8080;
    static final int SIZE = 256;

    public static void main(String[] args) {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group)
                    .channel(NioSocketChannel.class)
                    .option(ChannelOption.TCP_NODELAY, true)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel socketChannel) throws Exception {
                            ChannelPipeline p = socketChannel.pipeline();
                            p.addLast("decoder", new StringDecoder());
                            p.addLast("encoder", new StringEncoder());
                            p.addLast(new NettyClientHandler());
                        }
                    });
            // connect, wait until the connection is established, then send a message
            ChannelFuture future = b.connect(HOST, PORT).sync();
            future.channel().writeAndFlush("Hello Netty server, i am a common client");
            future.channel().closeFuture().sync();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            group.shutdownGracefully();
        }
    }
}
Run Results

Start the server first, then run the client. The server output is as follows:

The client output is as follows:
