Netty (Part 2): Why is Netty high-performance from a threading model perspective?


Objective

In the previous article on integrating a long-connection heartbeat mechanism with SpringBoot, we got to know Netty.

But at that point we could only use it. Why should we use Netty at all? What advantages does it have? These questions were not really answered.

This article traces the answer from its historical origins.

Traditional IO

Before the advent of Netty and NIO, we wrote IO applications using the classes provided in the java.io.* packages.

For example, the following pseudo-code:

    ServerSocket serverSocket = new ServerSocket(8080);
    Socket socket = serverSocket.accept();
    // wrap the socket's input stream in a buffered reader
    BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    String request;
    while ((request = in.readLine()) != null) {
        // one thread per request/connection
        new Thread(new Task()).start();
    }

The main point is that one thread can handle only one connection.

If there are 100 client connections, you have to open 100 threads; 1000 connections require 1000 threads.

Keep in mind that threads are expensive: each creation has a cost, and each thread must be allocated its own stack memory.

Even if we give the JVM enough memory, the context switching caused by a large number of threads is unbearable.

Moreover, traditional IO is blocking: each request initiates an IO operation and must wait for it to complete before returning. The direct result is poor performance and low throughput.

Reactor model

So the high-performance IO model commonly used in the industry is the Reactor pattern.

It is an asynchronous, non-blocking event-driven model.

It usually comes in the following three forms:

Single Thread

As can be seen:

A single thread accepts client connections and dispatches requests to the corresponding event-processing handlers. Everything is asynchronous and non-blocking, and there are no shared-resource problems at all, so in theory the throughput is good too.

However, because there is only one thread, multi-core CPUs are poorly utilized. Once there are a large number of client connections, performance inevitably declines, and many requests may not get a response at all.
In the worst case, if that single thread gets stuck, the entire service becomes unavailable!
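
To make the picture concrete, here is a minimal sketch of a single-threaded Reactor built directly on Java NIO. It is not code from the article; the port number and the echo behavior are placeholders. One thread both accepts connections and handles every read event:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class SingleThreadReactor {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            // one thread accepts connections and handles all read events
            while (true) {
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buffer = ByteBuffer.allocate(1024);
                        int read = client.read(buffer);
                        if (read == -1) {
                            client.close();
                        } else if (read > 0) {
                            buffer.flip();
                            client.write(buffer);   // echo the request back
                        }
                    }
                }
            }
        }
    }

If the handling of one read event blocks, say on a slow database call, every other connection registered with the selector stalls behind it, which is exactly the weakness described above.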

Multithreading

This is why the multithreaded model emerged.

Its biggest improvement is that event processing is handled by multiple threads instead of one.

It can be built on Java's own thread pool, which makes a huge difference in performance when processing a large number of requests.
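
As a rough illustration (again, not code from the article), the event handling can be handed off to a plain Java thread pool while a single IO thread keeps reading from the sockets; the pool size and the callback-style reply below are assumptions made to keep the sketch small:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.function.Consumer;

    /**
     * Hypothetical sketch of the multithreaded Reactor tweak: the single
     * selector/accept thread only reads bytes, then submits each request to a
     * worker pool so slow business logic does not block the event loop.
     */
    public class WorkerPoolDispatcher {

        private final ExecutorService workers =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        /** Called from the single IO thread for every decoded request. */
        public void dispatch(byte[] request, Consumer<byte[]> replyCallback) {
            workers.submit(() -> {
                // business logic (authentication, parsing, ...) runs off the IO thread
                byte[] response = handle(request);
                // hand the result back so the IO thread can write it to the channel
                replyCallback.accept(response);
            });
        }

        private byte[] handle(byte[] request) {
            // placeholder business logic: echo the request
            return request;
        }
    }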

Nonetheless, there is still a theoretical single point: the thread that accepts client connections.

Since most server applications attach some business logic to the connection step, such as authentication, a single accept thread still becomes a performance bottleneck as clients connect more and more frequently.

Then there is the following threading model.

Master-Slave multithreading

This model turns the connection-accepting side into multiple threads as well, called the main (boss) threads.

Event handling is likewise done by multiple child (worker) threads, so both connection acceptance and event processing are high performance.

Netty implementation

After all this discussion, Netty's threading model is in fact quite similar.

Let's go back to the server-side code from the SpringBoot long-connection heartbeat article:

    private EventLoopGroup boss = new NioEventLoopGroup();
    private EventLoopGroup work = new NioEventLoopGroup();

    /**
     * Start Netty
     *
     * @throws InterruptedException
     */
    @PostConstruct
    public void start() throws InterruptedException {
        ServerBootstrap bootstrap = new ServerBootstrap()
                .group(boss, work)
                .channel(NioServerSocketChannel.class)
                .localAddress(new InetSocketAddress(nettyPort))
                // keep the connection alive
                .childOption(ChannelOption.SO_KEEPALIVE, true)
                .childHandler(new HeartbeatInitializer());
        ChannelFuture future = bootstrap.bind().sync();
        if (future.isSuccess()) {
            LOGGER.info("Netty started successfully");
        }
    }
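
The HeartbeatInitializer referenced here comes from the previous article and is not shown in this one. As a rough, hypothetical sketch, it is a ChannelInitializer that sets up the channel pipeline; the 10-second read timeout and the idle-close handler below are assumptions, and the real class may well differ:

    import io.netty.channel.Channel;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.ChannelInitializer;
    import io.netty.handler.timeout.IdleStateEvent;
    import io.netty.handler.timeout.IdleStateHandler;
    import java.util.concurrent.TimeUnit;

    public class HeartbeatInitializer extends ChannelInitializer<Channel> {
        @Override
        protected void initChannel(Channel ch) {
            ch.pipeline()
                    // fire an IdleStateEvent when no data has been read for 10 seconds
                    .addLast(new IdleStateHandler(10, 0, 0, TimeUnit.SECONDS))
                    // close idle connections; the real decoders and business handlers would follow here
                    .addLast(new ChannelInboundHandlerAdapter() {
                        @Override
                        public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                            if (evt instanceof IdleStateEvent) {
                                ctx.close();
                            } else {
                                super.userEventTriggered(ctx, evt);
                            }
                        }
                    });
        }
    }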

In fact, the boss here is equivalent to the thread pool that handles client connections in the Reactor model.

The work group is naturally the thread pool that handles events.

So how do you implement the three models above? In fact, it is very simple:

Single-threaded model:

    // one group serves as both the boss and the worker
    // (use new NioEventLoopGroup(1) to pin it to a single thread)
    private EventLoopGroup group = new NioEventLoopGroup();
    ServerBootstrap bootstrap = new ServerBootstrap()
            .group(group)
            .childHandler(new HeartbeatInitializer());

Multithreaded model:

    // a single boss thread accepts connections
    private EventLoopGroup boss = new NioEventLoopGroup(1);
    // a multi-threaded group handles the events
    private EventLoopGroup work = new NioEventLoopGroup();
    ServerBootstrap bootstrap = new ServerBootstrap()
            .group(boss, work)
            .childHandler(new HeartbeatInitializer());

Master-Slave Multithreading:

    // a multi-threaded boss group accepts connections
    private EventLoopGroup boss = new NioEventLoopGroup();
    // a multi-threaded worker group handles the events
    private EventLoopGroup work = new NioEventLoopGroup();
    ServerBootstrap bootstrap = new ServerBootstrap()
            .group(boss, work)
            .childHandler(new HeartbeatInitializer());

I believe the difference is easy to see: new NioEventLoopGroup(1) pins a group to a single thread, while the no-argument constructor creates a multi-threaded group sized from the number of available CPU cores.

Summary

Now that we have looked at Netty's threading model, can it inspire how we write high-performance applications in everyday work?

I think it can:

    • Convert synchronous interfaces to asynchronous processing (see the sketch after this list).
    • Use callbacks to notify results.
    • Use multithreading to improve concurrency.
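
As a small, self-contained sketch of the first two points (the thread-pool size and the slowQuery placeholder are made up for illustration):

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class AsyncExample {
        private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

        public static void main(String[] args) {
            // turn a synchronous call into an asynchronous one on a thread pool...
            CompletableFuture<String> future = CompletableFuture
                    .supplyAsync(AsyncExample::slowQuery, POOL)
                    // ...and receive the result via a callback instead of blocking
                    .whenComplete((result, error) -> {
                        if (error != null) {
                            System.err.println("callback failed: " + error);
                        } else {
                            System.out.println("callback result: " + result);
                        }
                    });

            future.join();   // only to keep the demo alive until the callback fires
            POOL.shutdown();
        }

        private static String slowQuery() {
            // placeholder for a slow, blocking operation (e.g. a database call)
            return "done";
        }
    }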

That said, doing only this brings other problems:

    • How can transactions be guaranteed once processing is asynchronous?
    • What happens when a callback fails?
    • The context-switching and shared-resource problems introduced by multithreading.

This is a trade-off game; building an application of maximum efficiency requires constant tuning and trial and error.

The code above is available at:

https://github.com/crossoverJie/netty-action

