Netty in Action (7): EventLoop and the Threading Model

Source: Internet
Author: User

Simply put, a threading model specifies the key aspects of thread management in the context of an operating system, programming language, framework, or application. Netty's threading model is powerful yet easy to use and, consistent with Netty's overall goal, is designed to simplify your application code while maximizing performance and maintainability.

1. Threading Model Overview

The threading model determines how code is executed. Because we must always guard against the side effects that concurrent execution can have, it is important to understand the implications of the model that is used (including single-threaded models).

Because computers with multiple cores or CPUs are now commonplace, most modern applications employ sophisticated multithreading techniques to make efficient use of system resources. By contrast, in the early days of Java, the primary way we used multithreading was simply to create and start new threads on demand to execute concurrent units of work, a primitive approach that performs poorly under high load. Java 5 then introduced the Executor API, whose thread pools greatly improved performance by caching and reusing threads.

The basic thread pooling pattern can be described as:

--Select a thread from the pool's list of idle threads and assign it to run a submitted task (an implementation of Runnable)

--When the task is complete, the thread is returned to the list so that it can be reused.
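The pooling pattern described above can be sketched with the JDK's Executor API alone. The following is a minimal, hypothetical illustration (the PoolDemo class and runOnPool method are invented names for this sketch, not part of Netty or the JDK): eight submitted tasks reuse at most two pooled threads rather than creating one thread per task.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    // Runs `tasks` runnables on a pool of `poolSize` threads and
    // returns the names of the distinct threads that actually ran them.
    static Set<String> runOnPool(int poolSize, int tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        Set<String> threadNames = ConcurrentHashMap.newKeySet();
        for (int i = 0; i < tasks; i++) {
            // each submitted task is picked up by an idle pool thread,
            // which returns to the idle list when the task completes
            pool.submit(() -> threadNames.add(Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return threadNames;
    }

    public static void main(String[] args) throws InterruptedException {
        // 8 tasks run on at most 2 distinct threads: pooling, not thread-per-task
        System.out.println(runOnPool(2, 8));
    }
}
```

The set printed at the end contains at most two thread names, which demonstrates the reuse that distinguishes pooling from thread-per-task.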

This model shows that although pooling and reusing threads is an improvement over creating and destroying a thread for each task, it does not eliminate the overhead of context switching, which quickly becomes apparent as the number of threads grows and is severe under heavy load. In addition, other thread-related problems can surface over the lifetime of a project because of the application's overall complexity or its concurrency requirements.

2. EventLoop interface

Running tasks to handle events that occur during a connection's lifetime is the basic function of any network framework. The corresponding programming construct is often referred to as an event loop, a term Netty adopts with the interface io.netty.channel.EventLoop.

The following code illustrates the basic idea of an event loop, where each task is an instance of Runnable.

while (!terminated) {
    //block until there are events ready to be run
    List<Runnable> readyEvents = blockUntilEventsReady();
    for (Runnable ev : readyEvents) {
        //loop over and process all of the events
        ev.run();
    }
}

Netty's EventLoop is part of a collaborative design built on two fundamental APIs: concurrency and networking. First, the io.netty.util.concurrent package builds on the JDK's java.util.concurrent package to provide thread executors. Second, the classes in the io.netty.channel package extend these interfaces and classes in order to interact with Channel events.

In this model, an EventLoop is driven by exactly one thread that never changes, and tasks (Runnable or Callable) can be submitted directly to an EventLoop implementation for immediate or scheduled execution. Depending on the configuration and the available cores, multiple EventLoop instances may be created in order to optimize resource usage, and a single EventLoop may be assigned to serve multiple Channels.

It is important to note that although Netty's EventLoop extends ScheduledExecutorService, it defines only one additional method, parent(), which returns a reference to the EventLoopGroup to which the current EventLoop implementation instance belongs.

Order of execution of events and tasks: events and tasks are executed in first-in, first-out (FIFO) order, which eliminates the possibility of data corruption by ensuring that byte contents are always processed in the correct order.
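The FIFO guarantee can be illustrated with a single-threaded JDK executor, which, like an EventLoop, drains its internal queue in submission order. The FifoDemo class and runFifo method below are hypothetical names invented for this sketch:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FifoDemo {
    static List<Integer> runFifo(int tasks) throws InterruptedException {
        // a single-threaded executor processes its queue in submission (FIFO) order
        ExecutorService loop = Executors.newSingleThreadExecutor();
        List<Integer> order = new CopyOnWriteArrayList<>();
        for (int i = 0; i < tasks; i++) {
            final int n = i;
            loop.execute(() -> order.add(n)); // enqueued and run in order
        }
        loop.shutdown();
        loop.awaitTermination(5, TimeUnit.SECONDS);
        return order;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runFifo(5)); // prints [0, 1, 2, 3, 4]
    }
}
```

Because a single thread drains the queue, the observed order always matches the submission order, which is exactly the property that keeps bytes from being processed out of order.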

3. I/O and event handling in Netty 4

Events triggered by I/O operations flow through a ChannelPipeline that has one or more ChannelHandlers installed. The method calls that propagate these events can be intercepted by a ChannelHandler, which can process the events as needed.

The nature of an event usually determines how it will be handled; it may transfer data from the network stack into your application, do the reverse, or perform some entirely different action. However, the event-handling logic must be generic and flexible enough to cover all possible use cases. Therefore, in Netty 4, all I/O operations and events are handled by the thread assigned to the EventLoop.

4. I/O operations in Netty 3

The threading model used in previous versions only guaranteed that inbound (formerly upstream) events were executed in the so-called I/O thread (corresponding to the EventLoop in Netty 4). All outbound (downstream) events were handled by the calling thread, which might be the I/O thread or some other thread. This may seem like a good idea at first, but it was found to be problematic because outbound events had to be carefully synchronized in the ChannelHandlers. In short, there was no guarantee that multiple threads would not attempt to access an outbound event at the same time. This could happen, for example, if outbound events were triggered for the same Channel simultaneously by calling Channel.write() from different threads.

Another negative effect occurred when an outbound event triggered an inbound event. When Channel.write() caused an exception, an exceptionCaught event had to be generated and fired. But in the Netty 3 model, because this is an inbound event, you would execute code in the calling thread and then hand the event over to the I/O thread for execution, incurring an additional context switch.

The threading model adopted in Netty 4 solves this problem by processing everything that happens in a given EventLoop in the same thread. This provides a simpler execution architecture and eliminates the need for synchronization across ChannelHandlers.

5. JDK Task Scheduling

Occasionally you need to schedule a task for later (deferred) or periodic execution. For example, you might want to register a task that fires after a client has been connected for five minutes. A common use case is sending a heartbeat message to a remote peer to check whether the connection is still alive; if there is no response, you know you can close the Channel.

Prior to Java 5, task scheduling was built on the java.util.Timer class, which uses a background thread and has the same limitations as standard threads. The JDK later provided the java.util.concurrent package, which defines the ScheduledExecutorService interface.

Although there are not many implementations to choose from, the preset ones are sufficient for most use cases.

The following code shows how to use ScheduledExecutorService to execute a task after a 60-second delay.

//create a ScheduledExecutorService whose thread pool has 10 threads
ScheduledExecutorService executor = Executors.newScheduledThreadPool(10);
//create a Runnable, scheduled to execute 60 seconds from now
ScheduledFuture<?> future = executor.schedule(
        new Runnable() {
            @Override
            public void run() {
                //the message the task prints
                System.out.println("60 seconds later");
            }
        }, 60, TimeUnit.SECONDS);
...
//once the scheduled task has executed, shut down the ScheduledExecutorService to release its resources
executor.shutdown();

Although the ScheduledExecutorService API is straightforward, it can introduce a performance cost under heavy load.

6. Scheduling tasks with EventLoop

ScheduledExecutorService implementations have limitations, such as the extra threads that are created as part of thread-pool management. This can become a bottleneck if many tasks are scheduled aggressively. Netty solves this problem by performing task scheduling with the Channel's EventLoop.

In the following code, after 60 seconds have elapsed, the Runnable instance will be executed by the EventLoop assigned to the Channel. If you want to schedule a task to run every 60 seconds instead, use the scheduleAtFixedRate() method.

Channel ch = ...;
ScheduledFuture<?> future = ch.eventLoop().schedule(
        //create a Runnable, scheduled to execute later
        new Runnable() {
            @Override
            public void run() {
                //the code to execute
                System.out.println("60 seconds later");
            }
        //schedule the task to execute 60 seconds from now
        }, 60, TimeUnit.SECONDS);

As mentioned earlier, Netty's EventLoop extends ScheduledExecutorService, so it provides all of the methods available in the JDK implementation, including the schedule() and scheduleAtFixedRate() methods used above. The complete list of operations can be found in the ScheduledExecutorService Javadoc.

To cancel a scheduled task or check its execution state, use the ScheduledFuture that is returned by each asynchronous operation.

The following code shows a simple cancel operation.

...
//schedule the task and obtain the returned ScheduledFuture
ScheduledFuture<?> future = ch.eventLoop().scheduleAtFixedRate(...);
//some other code that runs...
boolean mayInterruptIfRunning = false;
//cancel the task, preventing it from running again
future.cancel(mayInterruptIfRunning);

7. Thread Management

The excellent performance of Netty's threading model depends on determining the identity of the currently executing thread; that is, on whether it is the thread assigned to the current Channel and its EventLoop.

If the calling thread is the one that backs the EventLoop, the submitted code block is executed directly. Otherwise, the EventLoop schedules the task for later execution by putting it into its internal queue. When the EventLoop next processes its events, it executes the tasks and events in that queue. This is what allows any thread to interact directly with the Channel without requiring additional synchronization in the ChannelHandlers.
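Here is a minimal, JDK-only sketch of this dispatch pattern. The InLoopDemo class and its methods are invented names for illustration; Netty's real check is EventLoop.inEventLoop(), but the decision logic is the same: run directly on the loop thread, otherwise enqueue.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class InLoopDemo {
    final ExecutorService loop = Executors.newSingleThreadExecutor();
    private final AtomicReference<Thread> loopThread = new AtomicReference<>();

    InLoopDemo() {
        // capture the single thread that backs this "event loop"
        loop.execute(() -> loopThread.set(Thread.currentThread()));
    }

    // true if the calling thread is the loop's own thread
    boolean inEventLoop() {
        return Thread.currentThread() == loopThread.get();
    }

    // Netty-style dispatch: run directly when already on the loop thread,
    // otherwise put the task in the loop's queue for later execution
    void execute(Runnable task) {
        if (inEventLoop()) {
            task.run();
        } else {
            loop.execute(task);
        }
    }

    public static void main(String[] args) throws Exception {
        InLoopDemo demo = new InLoopDemo();
        System.out.println(demo.inEventLoop());                      // false: main thread
        demo.execute(() -> System.out.println(demo.inEventLoop()));  // true: loop thread
        demo.loop.shutdown();
        demo.loop.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Because every task ultimately runs on the one loop thread, handler state needs no locks, which is the point of the identity check.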

Note that each EventLoop has its own task queue, independent of that of any other EventLoop. This scheduling behavior is a key component of Netty's threading model. The importance of not blocking the current I/O thread has already been made clear; we restate it here in another form: "Never put a long-running task into the execution queue, because it will block any other task that needs to execute on the same thread." If you must make blocking calls or execute long-running tasks, we recommend using a dedicated EventExecutor.

Apart from this limited scenario, the threading model in use, like the transport's event-handling implementation, can strongly affect the impact of queued tasks on overall system performance.

8. EventLoop thread assignment: asynchronous transports

Asynchronous transport implementations use only a small number of EventLoops, and in the current threading model these may be shared among multiple Channels. This allows a large number of Channels to be served by the smallest possible number of threads, rather than assigning one thread to each Channel.

Consider an EventLoopGroup with a fixed size of three EventLoops, each backed by one thread. The EventLoops (and the threads that back them) are allocated directly when the EventLoopGroup is created, to ensure that they are available when needed. The EventLoopGroup is responsible for assigning an EventLoop to each newly created Channel. In the current implementation, a round-robin strategy is used to achieve a balanced distribution, and the same EventLoop may be assigned to multiple Channels.
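A hypothetical sketch of round-robin assignment, using plain JDK single-threaded executors in place of EventLoops (the RoundRobinGroup class and its next() method are invented names, loosely mirroring Netty's EventLoopGroup.next()):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinGroup {
    private final ExecutorService[] loops;
    private final AtomicInteger idx = new AtomicInteger();

    RoundRobinGroup(int nLoops) {
        // all "EventLoops" are created up front, when the group is created
        loops = new ExecutorService[nLoops];
        for (int i = 0; i < nLoops; i++) {
            loops[i] = Executors.newSingleThreadExecutor();
        }
    }

    // hand out loops in round-robin order, as each new Channel would be assigned one
    ExecutorService next() {
        return loops[Math.floorMod(idx.getAndIncrement(), loops.length)];
    }

    void shutdown() {
        for (ExecutorService loop : loops) {
            loop.shutdown();
        }
    }

    public static void main(String[] args) {
        RoundRobinGroup group = new RoundRobinGroup(3);
        ExecutorService first = group.next();      // "channel" 0 -> loop 0
        group.next();                              // channel 1 -> loop 1
        group.next();                              // channel 2 -> loop 2
        System.out.println(first == group.next()); // true: channel 3 reuses loop 0
        group.shutdown();
    }
}
```

With three loops and a fourth "channel", the first loop is handed out again, which is how a single EventLoop ends up serving multiple Channels.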

Once a Channel has been assigned an EventLoop, it will use that EventLoop (and its associated thread) throughout its lifetime. Keep this in mind, because it frees you from worrying about thread safety and synchronization in your ChannelHandler implementations.

It is also important to note that EventLoop assignment affects the use of ThreadLocal. Because an EventLoop usually serves more than one Channel, a ThreadLocal will hold the same value for all of the associated Channels. This makes it a poor choice for features such as state tracking. In some stateless contexts, however, it can still be used to share heavy or expensive objects, or even events, among Channels.
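The following JDK-only sketch shows why ThreadLocal is unsuitable for per-Channel state: two simulated "channels" served by the same single-threaded loop observe the same ThreadLocal value. ThreadLocalDemo and valueSeenBy are invented names for this illustration.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadLocalDemo {
    // one StringBuilder per thread, i.e. one per "EventLoop"
    static final ThreadLocal<StringBuilder> STATE =
            ThreadLocal.withInitial(StringBuilder::new);

    // simulate a handler for `channel` appending to per-thread state on `loop`
    static String valueSeenBy(ExecutorService loop, String channel) throws Exception {
        CompletableFuture<String> seen = new CompletableFuture<>();
        loop.execute(() -> {
            STATE.get().append(channel); // both channels touch the same instance
            seen.complete(STATE.get().toString());
        });
        return seen.get(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService loop = Executors.newSingleThreadExecutor();
        // two "channels" served by the same loop thread share one ThreadLocal value
        System.out.println(valueSeenBy(loop, "A")); // prints A
        System.out.println(valueSeenBy(loop, "B")); // prints AB: B sees A's state
        loop.shutdown();
    }
}
```

Channel B observing channel A's leftover state is exactly the bleed-through that makes ThreadLocal a poor fit for state tracking, yet acceptable for sharing immutable or stateless objects.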

9. Blocking transports

The design of other transports, such as OIO (old blocking I/O), is slightly different: each Channel is assigned an EventLoop (and its thread). You may have encountered this model if you have developed applications that use the blocking I/O implementations in the java.io package.

However, just as before, it is guaranteed that the I/O events of each Channel are handled by only one thread, the thread that backs that Channel's EventLoop. This is another example of Netty's design consistency.
