Netty in Action (17): Chapter 7, EventLoop and Threading Model


This section covers:

1) Threading model overview

2) The event loop concept and its implementation

3) Task scheduling

4) Implementation details



Simply put, a threading model is a critical part of an operating system, programming language, framework, or application: how and when threads are created has a significant impact on how your code executes. Because a developer either manages threads directly or relies on the threading model that a framework or language provides, understanding the trade-offs of the various models is essential.


In this chapter we will examine Netty's threading model in detail. It is powerful yet easy to use and, as is typical of Netty, it aims to simplify your application code while delivering high performance and availability. We will also cover the design principles behind it and share some of the experience that led to the choice of this model.


If you already have a good understanding of, or experience with, the Java concurrency API, you will find this chapter easy to follow. If concurrency is new to you, Java Concurrency in Practice by Brian Goetz et al. is an excellent place to build up the background.


7.1 Threading Model Overview


In this section we introduce threading models in general, then discuss Netty's past and present threading models and review the advantages and limitations of each.


As we pointed out earlier, a threading model determines exactly when your code will be executed, and because you must guard against the side effects of concurrent execution, it is important to understand the implications of the model being applied. Ignoring those details while hoping to get maximum performance and benefit is an illusion that a harsh reality will eventually defeat.


Because multicore and multi-CPU machines have become commonplace, most modern applications employ sophisticated multithreading techniques to make full use of system resources. By contrast, in the early days of Java we would simply create a thread on demand and use it to run a unit of work; that primitive approach performs poorly under high load. Starting with JDK 5 (Java 1.5), the Executor API was introduced, and its thread pools greatly improve application performance by caching and reusing threads.


The most basic thread-pool pattern works like this:

1) An idle thread is taken from the pool and used to run a submitted task (an implementation of the Runnable or Callable interface)

2) When the task completes, the thread is returned to the pool so it can be reused


Figure 7.1 illustrates this pattern; a minimal code sketch of it follows.
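As a rough illustration of the pattern, the following sketch submits tasks to a fixed-size JDK thread pool (the pool size and task body are invented for the example):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        // A pool of 4 reusable worker threads
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            // Each submitted Runnable runs on an idle pooled thread,
            // which returns to the pool when the task completes.
            pool.execute(() -> System.out.println(
                "Task " + taskId + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}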

Pooling and reusing threads removes the cost of creating and destroying a thread for every task, but it does not eliminate the cost of context switching, which becomes noticeable as the number of threads grows and can be severe under heavy load. Other thread-related problems can also surface over a project's lifetime as the application's business logic grows in complexity or its concurrency requirements increase.


In short, multithreading is complex. In the next sections we will see how Netty helps to simplify it.


7.2 Interface EventLoop


For any networking framework, running tasks to handle events that occur during the lifetime of a connection is a basic function; the corresponding programming construct is generally called an event loop, a term Netty adopts with the interface io.netty.channel.EventLoop.


The basic idea of an event loop is illustrated by the following sketch, in which each task is an instance of Runnable.
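The original listing did not survive in this post; a rough, compilable sketch of the idea, with a hypothetical blockUntilEventsReady() helper standing in for whatever mechanism (a selector, a queue) supplies the ready tasks, looks like this:

import java.util.List;

public abstract class EventLoopSketch {
    private volatile boolean terminated;

    // Hypothetical helper: blocks until the next batch of ready tasks is available
    protected abstract List<Runnable> blockUntilEventsReady();

    public void run() {
        while (!terminated) {
            for (Runnable event : blockUntilEventsReady()) {
                event.run();   // every task runs in the single event-loop thread
            }
        }
    }

    public void terminate() {
        terminated = true;
    }
}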


Netty's EventLoop is part of a design that brings together two fundamental APIs: concurrency and networking. First, the package io.netty.util.concurrent builds on the JDK package java.util.concurrent to provide thread executors. Second, the classes in the package io.netty.channel extend these in order to interface with Channel events. The resulting class hierarchy is shown in figure 7.2.


In this model, an EventLoop is powered by exactly one Thread that never changes, and tasks (implementations of Runnable or Callable) can be submitted directly to the EventLoop for immediate or scheduled execution. Depending on the configuration and the available cores, multiple EventLoops may be created to optimize resource use, and a single EventLoop may serve multiple Channels.


Note that Netty's EventLoop extends ScheduledExecutorService and defines only one additional method, parent(), shown in the snippet below. Its purpose is to return a reference to the EventLoopGroup to which the current EventLoop instance belongs.
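The snippet itself is missing from this post; it looks roughly like the following (an approximation of io.netty.channel.EventLoop; the exact set of super-interfaces differs slightly between Netty 4 releases):

public interface EventLoop extends EventExecutor, EventLoopGroup {
    @Override
    EventLoopGroup parent();
}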


Event/task execution order: events and tasks are executed in FIFO order. This eliminates the possibility of data corruption by guaranteeing that byte contents are processed in the correct order.


7.2.1 I/O and event handling in Netty 4


As we explained in detail in chapter 6, an I/O operation triggers an event that propagates through a ChannelPipeline assembled from one or more ChannelHandlers; as the event propagates, it can be intercepted by a ChannelHandler and processed as required.


In general, the nature of an event determines how it will be handled: it may transfer data from the network into your application, or from your application to the network, but the event-handling logic must be generic and flexible enough to be reused as much as possible. That is why, in Netty 4, all I/O operations and events are handled by the Thread that has been assigned to the EventLoop.


This differs from the threading model used in Netty 3. In the next section we will look at that earlier model and explain why it was superseded.


7.2.2 I/O operations in Netty 3


In the threading model of Netty 3, only inbound events were handled the way the Netty 4 model handles everything; all outbound events were handled by the calling thread, which might be the I/O thread or might be some other thread. This sounds reasonable at first, but once you consider synchronization you see the problem: it is impossible to guarantee that multiple threads will not try to write outbound data at the same time, for example when Channel.write() is invoked from different threads concurrently.


There were also unwanted side effects when an outbound event triggered an inbound one. For example, if Channel.write() threw an exception, an exception-caught event had to be generated and fired; but because in Netty 3 this was an inbound event, the code first executed in the calling thread and the event then had to be handed off to the I/O thread for execution, incurring an additional context switch.


The threading model adopted by Netty 4 addresses these problems by handling everything that occurs in a given EventLoop in the same thread. This provides a simpler execution architecture and eliminates the need for synchronization across ChannelHandlers.


Now that you understand the role of the EventLoop, let's see how tasks can be scheduled for execution.


7.3 Task Scheduling


Occasionally you will need to schedule a task for later or periodic execution. For example, you might want to register a task that fires five minutes after a client has connected to the server. A common use case is to send a heartbeat message to a remote peer to check whether the connection is still alive; if it is not, you know you can close the Channel.


In the next sections we'll show how to schedule tasks with both the JDK API and Netty's API, then look at Netty's internal implementation and discuss the advantages and limitations of its design.


7.3.1 JDK Scheduling API


Before Java 5 (JDK 1.5), task scheduling was built on java.util.Timer, which uses a background Thread and has the same limitations as standard threads. The JDK subsequently provided the java.util.concurrent package, which defines the ScheduledExecutorService interface; table 7.1 lists the relevant factory methods of java.util.concurrent.Executors.

Although there are not many parameter choices, those provided are sufficient for most use cases. The following sketch shows how ScheduledExecutorService can be used to run a task after a 60-second delay.
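The listing itself is missing from this post; a minimal, runnable sketch of the same idea with the JDK API looks like this:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduleWithJdk {
    public static void main(String[] args) {
        // A pool of 10 threads (the size is illustrative)
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(10);
        ScheduledFuture<?> future = executor.schedule(
            () -> System.out.println("60 seconds later"),   // the task to run
            60, TimeUnit.SECONDS);                           // delay before it runs
        // By default an already-scheduled delayed task still runs after shutdown(),
        // after which the executor releases its threads.
        executor.shutdown();
    }
}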

Although the ScheduledExecutorService API is straightforward, under heavy load it can introduce performance costs. In the next section we'll see how Netty provides the same capability with greater efficiency.


7.3.2 Scheduling tasks using EventLoop

The implementation of ScheduledExecutorService has limitations, such as the extra threads created as part of pool management; this can become a bottleneck if many tasks are scheduled aggressively. Netty instead implements scheduling using the Channel's own EventLoop, as shown in the following sketch:
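That listing is also missing here; a sketch of scheduling through the Channel's EventLoop, assuming ch is an already-initialized and registered Channel, looks like this:

import io.netty.channel.Channel;
import io.netty.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduleWithEventLoop {
    // ch is assumed to be a connected, registered Channel
    static ScheduledFuture<?> scheduleOnce(Channel ch) {
        return ch.eventLoop().schedule(
            () -> System.out.println("60 seconds later"),   // runs on the Channel's EventLoop
            60, TimeUnit.SECONDS);
    }
}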

After 60 seconds have elapsed, the Runnable instance will be executed by the EventLoop assigned to the Channel. To schedule a task to run every 60 seconds, use scheduleAtFixedRate(), as shown below:
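Continuing the previous sketch (same assumed Channel and imports):

    // Sketch: a recurring task on the Channel's EventLoop
    static ScheduledFuture<?> scheduleRecurring(Channel ch) {
        return ch.eventLoop().scheduleAtFixedRate(
            () -> System.out.println("Run every 60 seconds"),
            60,                  // initial delay
            60,                  // interval between runs
            TimeUnit.SECONDS);
    }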

As noted earlier, Netty's EventLoop extends ScheduledExecutorService, so it provides the same methods the JDK offers, including schedule() and scheduleAtFixedRate(), both of which are used in the sketches above. The complete list of operations can be found in the official Javadoc for ScheduledExecutorService.


To cancel a scheduled task or check its execution state, use the ScheduledFuture that is returned for every asynchronous operation. The following sketch shows a simple cancel operation.
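Again continuing the same sketch, cancellation goes through the returned ScheduledFuture:

    // Sketch: cancelling a previously scheduled task
    static void scheduleThenCancel(Channel ch) {
        ScheduledFuture<?> future = ch.eventLoop().scheduleAtFixedRate(
            () -> System.out.println("Run every 60 seconds"), 60, 60, TimeUnit.SECONDS);
        // ... some time later ...
        boolean mayInterruptIfRunning = false;
        future.cancel(mayInterruptIfRunning);   // prevents any further executions
    }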

These examples illustrate the performance gains that can be achieved by taking advantage of Netty's scheduling capabilities, which in turn depend on the underlying threading model. The next section examines that model in detail.


7.4 Implementation Details


In this section we look more closely at the principles behind Netty's threading model and its task-scheduling implementation, including some of their limitations.


7.4.1 Thread Management

The high performance of Netty's threading model depends on determining the identity of the currently executing Thread, in other words, on determining whether it is the Thread assigned to the current Channel and its EventLoop. Recall that the same EventLoop handles all the events of the Channels assigned to it throughout its lifetime.


If the calling Thread is the one assigned to the EventLoop, the code block in question is executed directly; otherwise, the EventLoop defers it by placing the task in its internal queue, and when the EventLoop next processes its events it executes the queued tasks. This explains how any Thread can interact directly with a Channel without requiring synchronization in the ChannelHandlers.
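A simplified sketch of this dispatch logic (not Netty's actual source, just the pattern described above):

import io.netty.channel.EventLoop;

public final class EventLoopDispatchSketch {
    // If called from the EventLoop's own thread, run the task immediately;
    // otherwise enqueue it so the EventLoop thread runs it later.
    static void executeInEventLoop(EventLoop eventLoop, Runnable task) {
        if (eventLoop.inEventLoop()) {
            task.run();
        } else {
            eventLoop.execute(task);
        }
    }
}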


Note that each EventLoop has its own task queue, independent of that of any other EventLoop. Figure 7.3 shows the execution logic used by the EventLoop; it is a critical component of Netty's threading model.


We have previously stressed the importance of not blocking the current I/O thread; here it is again in another form: never put a long-running task in the execution queue, because it will block other tasks from executing on the same thread. If you must make blocking calls or run long-lived tasks, we recommend using a dedicated EventExecutor (see the sketch below).
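One way to do this, sketched under the assumption that you are adding the blocking handler to a pipeline yourself (the group size of 16 is illustrative), is to pass a dedicated EventExecutorGroup when registering the handler:

import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelPipeline;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public class OffloadBlockingHandler {
    static final EventExecutorGroup BLOCKING_GROUP = new DefaultEventExecutorGroup(16);

    // The handler's callbacks will run on the dedicated group's threads
    // instead of the Channel's EventLoop thread.
    static void install(ChannelPipeline pipeline, ChannelHandler blockingHandler) {
        pipeline.addLast(BLOCKING_GROUP, "blockingHandler", blockingHandler);
    }
}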

Aside from this limitation, the task queue on which the current Netty threading model relies has a major influence on the overall performance of the system, since it is also used for event handling.


7.4.2 EventLoop/Thread Allocation


The EventLoops held by an EventLoopGroup provide I/O and event handling for the Channels assigned to them; how the EventLoops in an EventLoopGroup are created and allocated depends on the transport type in use.


Asynchronous Transports


Asynchronous implementations use only a small number of EventLoops, each of which may be shared by multiple Channels. This makes it possible to serve many Channels with as few Threads as possible, rather than assigning one Thread per Channel.


Figure 7.4 shows an EventLoopGroup with a fixed number of EventLoops; these are allocated directly when the EventLoopGroup is created, so they are available whenever they are needed.

The EventLoopGroup is responsible for allocating an EventLoop to each newly created Channel; in the current implementation a round-robin strategy is used to achieve a balanced distribution, and the same EventLoop may be assigned to multiple Channels.
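As a concrete illustration, creating a fixed-size group for an NIO transport might look like the following sketch (the thread count of 3 is arbitrary; by default Netty derives it from the number of available cores):

import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

public class FixedSizeGroup {
    public static void main(String[] args) {
        // Three EventLoops shared by all Channels registered with this group;
        // each new Channel gets one of them and keeps it for its whole lifetime.
        EventLoopGroup group = new NioEventLoopGroup(3);
        // ... use the group with a Bootstrap / ServerBootstrap, then shut it down:
        group.shutdownGracefully();
    }
}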



Once a Channel has been assigned an EventLoop, it will use that same EventLoop (and its associated Thread) throughout its lifetime. Keep this in mind, because it frees you from worrying about thread safety and synchronization in your ChannelHandlers.


Also be aware of the implications that EventLoop allocation has for ThreadLocal usage: because one EventLoop is usually used for more than one Channel, a ThreadLocal will hold the same value for all associated Channels. This makes it a poor choice for implementing state tracking, although in a stateless context it can still be useful for sharing heavy or expensive objects, or even events, among Channels.


Blocking Transports

The design is a little different for blocking transports such as OIO, as illustrated by figure 7.5.

Here one EventLoop is assigned to each Channel. If you have developed blocking I/O applications using the classes of the java.io package, you may have encountered this pattern before.


However, just as with the previous design, it is guaranteed that all I/O events of each Channel will be handled only by the one Thread powering that Channel's EventLoop. This is another example of Netty's consistency of design, one that contributes strongly to its reliability and ease of use.


7.5 Summary


In this chapter we described threading models in general and Netty's threading model in particular, and we discussed its performance and consistency advantages.


You learned how to execute your own tasks in the EventLoop, just as the framework itself does, and how to schedule tasks for deferred execution. You also saw the scalability issue this raises under heavy load, how to check whether a task has executed, and how to cancel it.


This knowledge complements our study of the implementation details of the Netty framework and will help you to maximize your application's performance while simplifying its code. For more details on thread pools and concurrent programming in general, we again recommend Java Concurrency in Practice by Brian Goetz; his book will give you a deeper understanding.


Now we come to a truly exciting point: in the next chapter we'll discuss bootstrapping, the process of configuring and wiring together all of Netty's components to bring your application to life.



