Netty Version Upgrade, a History of Tears: the Threading Chapter


1. Background

1.1. Status of the Netty 3.X series

Based on a survey of some Netty community users, combined with how Netty is used in other open source projects, the commercially deployed versions of Netty are currently concentrated on 3.X and 4.X, with the 3.X series being the most widely used.

The Netty community is very active: in the 3.X series alone, from netty-3.2.4.Final released on February 7, 2011 to netty-3.10.0.Final released on December 17, 2014, a span of more than three years, a total of 61 final versions were released.

1.2. Upgrade or stick to the old version

Compared with other open source projects, the upgrade path for Netty users is harder, and the most fundamental reason is that Netty 4 is not backward compatible with Netty 3.

Because the versions are incompatible, most users of the old version reason as follows: the upgrade is troublesome, I do not need the new features of Netty 4 anyway, and the current version is quite stable, so I will not upgrade for now and will look at it again later.

There are many other reasons for staying on the old version, such as the stability of the system already in production, familiarity with the new version, and so on. In any event, upgrading Netty is a major undertaking, especially for products that depend directly and heavily on Netty.

From the above analysis, sticking with the old version seems like a good choice, but "the ideal is beautiful, the reality is cruel": staying on the old version is not always that easy. Below we look at the cases where an upgrade is forced.

1.3. "Forced" upgrade to Netty 4.X

Apart from proactively upgrading to use new features, most upgrades are "forced". Below we analyze the reasons behind these upgrades.

    1. The company's open source governance policy: in large companies, different departments and product lines often depend on different versions of the same open source software. To manage open source dependencies uniformly and reduce security, maintenance and management costs, a preferred version is usually mandated. Since the Netty 4.X series is already very mature, many companies now prefer Netty 4.X.
    2. Maintenance cost: whether you depend on Netty 3.X or Netty 4.X, you usually need to build custom extensions on top of the base framework, for example client short connections, heartbeat detection, flow control, and so on. Maintaining two custom frameworks, one for Netty 4.X and one for 3.X, carries very high development and maintenance costs. According to the usage policy for open source software, when a version conflict occurs the higher version is usually chosen, and Netty is no exception to this rule.
    3. New features: Netty 4.X offers many new features compared with Netty 3.X, such as the optimized pooled memory allocator, support for the MQTT protocol, and more. If users need these new features, the easiest way to get them is to upgrade to the 4.X series.
    4. Better performance: compared with the old 3.X version, Netty 4.X optimizes the memory pool, reducing GC frequency and memory consumption, and optimizes the Reactor thread pool model, making development simpler and more efficient for users.
1.4. The cost of an improper upgrade

On the surface, package path changes, API refactoring and the like look like the highlights of the upgrade, and we usually focus on these "spears in the open"; what is truly hidden and deadly, however, are the "arrows in the dark". If you are unfamiliar with Netty's underlying event scheduling mechanism and threading model, you will often be hit by them.

This article takes several typical real cases as examples and, through problem description, problem localization and problem summary, aims to make sure these hidden "arrows in the dark" no longer hurt anyone.

The Netty 4 threading model changes have caused many upgrade incidents. Limited by space, this article does not enumerate them one by one, but the problems are all of the same nature: once you grasp the key points of the threading model, these seemingly incurable diseases can all be cured.

2. Memory leak after the Netty upgrade

2.1. Problem description

With the development of the JVM and JIT compilation technology, object allocation and collection has become a very lightweight task. For buffers, however, the situation is slightly different, especially for the allocation and reclamation of off-heap direct memory, which is a time-consuming operation. To reuse buffers as much as possible, Netty 4.X provides a buffer reuse mechanism based on a memory pool. Performance tests show that pooled ByteBuf performs roughly 23 times better than transient, short-lived ByteBuf (the figure is strongly dependent on the usage scenario).
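
For reference, below is a minimal sketch of the two allocation styles being compared, using the standard Netty 4 allocator API (the buffer size is arbitrary):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.UnpooledByteBufAllocator;

public class AllocatorComparison {
    public static void main(String[] args) {
        // Pooled: buffers are taken from and returned to arenas, so the costly
        // direct-memory allocation and reclamation is not repeated per message
        ByteBuf pooled = PooledByteBufAllocator.DEFAULT.directBuffer(1024);
        // Non-pooled: a fresh buffer is allocated for every request
        ByteBuf unpooled = new UnpooledByteBufAllocator(true).directBuffer(1024);

        pooled.release();   // returns the buffer to the pool
        unpooled.release(); // frees the underlying memory
    }
}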

Business applications are characterized by high concurrency and short flows, and most objects are short-lived objects that die soon after they are created. To reduce memory copies, the user wants to encode objects directly into a PooledByteBuf at serialization time, so that memory does not have to be requested and freed again for every business message.

The relevant business code is as follows:

// Initialize the pooled allocator in the business thread, allocating direct (non-heap) memory
ByteBufAllocator allocator = new PooledByteBufAllocator(true);
ByteBuf buffer = allocator.ioBuffer(1024);
// Construct the order request message and populate it; business logic omitted
SubInfoReq infoReq = new SubInfoReq();
infoReq.setXxx(......);
// Encode the object into the ByteBuf
codec.encode(buffer, infoReq);
// Call ChannelHandlerContext to send the message
ctx.writeAndFlush(buffer);

After the business code was upgraded to the new Netty version and refactored, the Java process went down after running for a while, and inspection showed that the system had a memory leak (sample stack):

Figure 2-1 OOM memory overflow stack

Monitoring the memory (switched to the heap memory pool to make monitoring easier), we found that heap memory kept soaring, as shown below (sample heap memory monitoring):

Figure 2-2 Heap memory monitoring

2.2. Problem localization

Dump the heap with jmap -dump:format=b,file=netty.bin PID and analyze it with IBM's HeapAnalyzer tool; the analysis shows that ByteBuf has leaked.

Because the memory pool is used, the first suspicion is that the requested ByteBuf was not released. Looking at the code, after the message has been sent the Netty framework already calls ReferenceCountUtil.release(message) to release the memory. So what is going on here? Is there a bug in the Netty 4.X memory pool, so that releasing memory fails when the release operation is called?

Considering that a bug in the Netty memory pool itself is unlikely, we first analyze the way the business uses it:

    1. Memory allocation is done in the business code; because a business thread pool is used to isolate I/O operations from business operations, memory is in fact allocated in the business threads;
    2. The memory release is performed in an outbound handler; according to the threading model of Netty 3, downstream handlers (corresponding to outbound in Netty 4, which abolished the upstream and downstream terminology) are also executed by the calling business thread, which means that release happens in the same business thread as allocation.

The initial troubleshooting did not find the root cause of the memory leak, so, with no better option, we started reading the doc and source code of the Netty memory pool allocator PooledByteBufAllocator and found that the memory pool is in fact implemented per thread context. The relevant code is as follows:

final ThreadLocal<PoolThreadCache> threadCache = new ThreadLocal<PoolThreadCache>() {
    private final AtomicInteger index = new AtomicInteger();

    @Override
    protected PoolThreadCache initialValue() {
        final int idx = index.getAndIncrement();
        final PoolArena<byte[]> heapArena;
        final PoolArena<ByteBuffer> directArena;
        if (heapArenas != null) {
            heapArena = heapArenas[Math.abs(idx % heapArenas.length)];
        } else {
            heapArena = null;
        }
        if (directArenas != null) {
            directArena = directArenas[Math.abs(idx % directArenas.length)];
        } else {
            directArena = null;
        }
        return new PoolThreadCache(heapArena, directArena);
    }
};

This means that memory must be allocated and released within the same thread context and must not cross threads. Once a release crosses threads, what is actually operated on is no longer the same block of memory, which can cause many serious problems, and memory leaks are one of them: memory allocated on thread A but released on thread B is not actually reclaimed correctly.

With the source analysis of the Netty memory pool, the problem is basically locked down. To verify it simply, we debugged a single business message and found that the release is executed not by a business thread but by a Netty NioEventLoop thread: when a message is sent successfully, the successfully written ByteBuf is released via the ReferenceCountUtil.release(message) method.

After locating the problem, we continued to trace the changes Netty 4 made to the Netty 3 threading model. In Netty 3, upstream handlers are executed in the I/O thread and downstream handlers in the business thread: when Netty reads a datagram from the network and dispatches it to the business handler, the handler runs in the I/O thread; when we call write or writeAndFlush in a business thread to send a message to the network, the handlers run in the business thread, and the business thread only returns after the last handler (the head of the pipeline) has put the message into the send queue.

Netty 4 modified this model. In Netty 4, both inbound (corresponding to upstream in Netty 3) and outbound (corresponding to downstream in Netty 3) handlers are executed in the NioEventLoop (I/O) thread. When we send a message through ChannelHandlerContext.write in a business thread, Netty 4 first encapsulates the message to be sent as a task while dispatching the write event to the ChannelPipeline, puts it into the NioEventLoop task queue, and executes it asynchronously on the NioEventLoop thread. The scheduling and execution of all subsequent handlers, including the actual sending of the message and the notification of I/O events, is handled by the NioEventLoop thread.
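
This difference is easy to observe directly. Below is a minimal sketch (the handler name is made up for illustration) of an outbound handler that logs which thread actually executes write(); under Netty 4 it reports the NioEventLoop thread even when writeAndFlush() was invoked from a business thread pool:

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;

public class ThreadTracingHandler extends ChannelDuplexHandler {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        // In Netty 4 outbound handlers run on the channel's NioEventLoop, so
        // inEventLoop() is true here even for writes issued by business threads
        System.out.println("write() on thread " + Thread.currentThread().getName()
                + ", inEventLoop=" + ctx.executor().inEventLoop());
        super.write(ctx, msg, promise);
    }
}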

Below we compare the message receive and send flows of Netty 3 and Netty 4 to understand the differences between the two threading models:

Netty 3 I/O Event processing flow:

Figure 2-3 Netty 3 I/O Event processing threading model

Netty 4 I/O Message processing flow:

Figure 2-4 Netty 4 I/O Event processing threading model

2.3. Summary of issues

The new memory pool in Netty 4.X is indeed efficient, but if used improperly it can cause a variety of serious problems. Problems such as memory leaks show no abnormality in functional testing, and if the interface goes straight to production without load testing or stability testing, serious online incidents follow.

Usage recommendations for the pooled PooledByteBuf:

    1. After allocating a buffer, remember to release it. The ByteBuf used by Netty's own socket reads and sends is released automatically by the framework, and the user must not release it a second time; but if the user allocates from the Netty memory pool in application code and uses the ByteBuf as an object pool, it must be released explicitly (a minimal sketch follows this list);
    2. Avoid wrong releases: cross-thread release, double release and the like are illegal operations and must be avoided. Cross-thread allocation and release in particular is often well hidden and the problem is harder to locate;
    3. Guard against implicit allocation and expansion: there was a previous case where, to solve the cross-thread allocation and release problem, a user wrapped the memory pool so that, for multi-threaded use, memory was always allocated and released by a dedicated management thread, masking the differences between the business threading model and memory access. After running for a while a memory leak appeared anyway, and it finally turned out that when a ByteBuf write operation is called and the capacity is insufficient, the buffer expands automatically. The expansion is performed by the business thread, bypasses the memory pool management thread, and a "reference escape" occurs. This bug only appears when the ByteBuf capacity is dynamically expanded, which is why it did not surface for a long time, until one day... Therefore, be careful when using the Netty 4.X memory pool; especially when wrapping it, you must have a deep understanding of the memory pool's implementation details.
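
A minimal sketch of recommendation 1 (the allocator choice and buffer size are illustrative): a buffer used only inside the application is released explicitly, on the thread that allocated it, while a buffer handed to writeAndFlush() must not be released again by the application:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.ReferenceCountUtil;

public class PooledBufferUsage {
    public static void main(String[] args) {
        // Case 1: buffer used only by the application -> the application releases it
        ByteBuf scratch = PooledByteBufAllocator.DEFAULT.heapBuffer(1024);
        try {
            scratch.writeBytes("temporary work".getBytes());
            // ... consume the buffer on this same thread ...
        } finally {
            // released by the allocating thread; never handed to another thread for release
            ReferenceCountUtil.release(scratch);
        }
        // Case 2: a buffer passed to ctx.writeAndFlush(buffer) is released by Netty
        // after the socket write completes and must NOT be released again here.
    }
}
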
3. Data "tampering" after the Netty upgrade

3.1. Problem description

In one business product, after Netty 3.X was upgraded to 4.X, the response data sent from the server to the client was inexplicably "tampered with" while the system was running.

The processing flow on the business server side is as follows:

    1. The decoded business message is encapsulated into a task and posted to the backend business thread pool for execution;
    2. The business thread processes the business logic and, once finished, constructs a response message to be sent to the client;
    3. The encoding of the business response message is implemented by inheriting from the Netty codec framework, i.e. an encoder ChannelHandler;
    4. After calling the Netty send interface, processing continues, and depending on the business scenario the business object that was just sent may continue to be modified.

The relevant business code is:

// Construct the order response message
SubInfoResp infoResp = new SubInfoResp();
// Populate the response message according to the business logic
infoResp.setResultCode(0);
infoResp.setXxx();
// subsequent assignments omitted ...
// Call ChannelHandlerContext to send the message
ctx.writeAndFlush(infoResp);
// After the message has been sent, subsequent branches of the business flow modify the infoResp object
infoResp.setXxx();
// follow-up code omitted ...
3.2. Problem localization

First we analyzed why the response message was illegally "tampered with", and found that when the problem occurs, the "tampered" content is the result of the subsequent business branch code modifying the response message after the writeAndFlush interface has been called. Because the modification happens after writeAndFlush, under the Netty 3.X threading model the problem should not occur.

In Netty 3, downstream handlers are executed in the business thread, which means that the encoding of SubInfoResp is performed in the business thread, and the business thread only returns and continues with the subsequent business logic after the encoded ByteBuf object has been posted to the message send queue. Modifying the response message at that point does not alter the already encoded ByteBuf, so there is no question of the response being tampered with.

The preliminary analysis suggested that this was the result of a change in the threading model, so we checked the Netty 4 threading model, and it did indeed change: when outbound is invoked to send a message, Netty encapsulates the send event into a task and executes it asynchronously through the NioEventLoop task queue. The relevant code is as follows:

@Override
public void invokeWrite(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
    if (msg == null) {
        throw new NullPointerException("msg");
    }
    validatePromise(ctx, promise, true);
    if (executor.inEventLoop()) {
        invokeWriteNow(ctx, msg, promise);
    } else {
        AbstractChannel channel = (AbstractChannel) ctx.channel();
        int size = channel.estimatorHandle().size(msg);
        if (size > 0) {
            ChannelOutboundBuffer buffer = channel.unsafe().outboundBuffer();
            // Check for null as it could be set to null if the channel is closed already
            if (buffer != null) {
                buffer.incrementPendingOutboundBytes(size);
            }
        }
        safeExecuteOutbound(WriteTask.newInstance(ctx, msg, size, promise), promise, msg);
    }
}

As can be seen from the above code, Netty first determines which thread is performing the operation. If the operation is already being executed by the NioEventLoop thread, the write is invoked directly; otherwise a thread-safe write is performed: the write event is encapsulated into a task and put into the task queue to be executed by the Netty I/O thread, while the business call returns and the business flow continues.

Through the source analysis, the root of the problem becomes clear: after the system is upgraded to Netty 4, the threading model changes and the encoding of the response message is executed asynchronously by the NioEventLoop thread while the business thread returns. This leaves two possible execution orders:

    1. If the encoding operation executes before the business logic modifies the response message, the result is correct;
    2. If the encoding operation executes after the business logic modifies the response message, the result is wrong.

Because the execution order of the threads is unpredictable, the problem is hidden quite deeply. If you do not understand the difference between the Netty 4 and Netty 3 threading models, you will fall into the trap.
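
One safe pattern, sketched below by reusing the SubInfoResp names from the snippet above: complete all mutation of the response before writeAndFlush(), and hang any post-send branching on the returned ChannelFuture so that it cannot race with the asynchronous encoder. Alternatively, simply never touch the object again after writing it, or write a copy.

// Finish building the response BEFORE the write
SubInfoResp infoResp = new SubInfoResp();
infoResp.setResultCode(0);
// ... all other setters here ...

// The listener only runs after the NioEventLoop thread has encoded and flushed
// the message, so touching infoResp here no longer races with the encoder
ctx.writeAndFlush(infoResp).addListener(future -> {
    if (future.isSuccess()) {
        // safe place for post-send branch logic
    } else {
        // handle the send failure, e.g. log and close the channel
    }
});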

Under the Netty 3 version the business logic had no problem; the flow is as follows:

Figure 3-1 Business process threading model prior to upgrade

After upgrading to Netty 4, the business flow changed because of the change in the Netty threading model, causing the business logic to go wrong:

Figure 3-2 The business flow changes after the upgrade

3.3. Summary of issues

During a Netty version upgrade, many readers focus only on changes to package paths, classes and APIs, without noticing the "arrow in the dark" hidden behind them: the threading model change.

Users upgrading to Netty 4 need to re-evaluate the existing system against the new threading model, focusing on the outbound ChannelHandlers: if their correctness relies on the Netty 3 threading model, they are likely to run into problems under the new threading model, whether functional or otherwise.

4. Significant performance drop after the Netty upgrade

4.1. Problem description

I believe many Netty users have read the following related reports:

At Twitter, the GC overhead of Netty 4 dropped to one fifth: Netty 3 used Java objects to represent I/O events. This was simple, but generated a lot of garbage, especially at our scale. Netty 4 changes this: instead of short-lived event objects, I/O events are handled through methods defined on long-lived channel objects. It also has a dedicated buffer allocator that uses pools.

Whenever new data was received or a user sent a message to the remote end, Netty 3 created a new heap buffer. This means a 'new byte[capacity]' for every new buffer. These buffers caused GC pressure and consumed memory bandwidth: for safety reasons, a new byte array is zero-filled when allocated, which consumes memory bandwidth; yet the zero-filled array is very likely to be filled again with actual data, consuming the same memory bandwidth once more. If the Java Virtual Machine (JVM) provided a way to create a new byte array without zero-filling, we could have reduced memory bandwidth consumption by 50%, but there is no such way.

In Netty 4, the code defines a more fine-grained API to handle the different event types instead of creating event objects. It also implements a new buffer pool, a pure Java version of jemalloc (which Facebook also uses). Now Netty no longer wastes memory bandwidth filling buffers with zeros.

We compared two echo protocol servers based on Netty 3 and Netty 4 respectively. (Echo is simple enough that any garbage generated is Netty's doing, not the protocol's.) I had them serve the same distributed echo protocol clients, with 16,384 concurrent connections repeatedly sending 256-byte random payloads, nearly saturating a gigabit Ethernet link.

According to the test results, Netty 4:

    • GC pause frequency dropped to one fifth of the original: 45.5 vs. 9.2 times/min
    • Garbage generation rate dropped to one fifth of the original: 207.11 vs. 41.81 MiB/s

It is precisely such reports of Netty 4 performance improvements that led many users to upgrade. Afterwards, some users reported that Netty 4 did not bring the expected performance improvement to their products, and some even saw very serious performance drops. Below we take the failed upgrade of one business product as a case and analyze the cause of the performance degradation in detail.

4.2. Problem localization

First, the performance hotspots were analyzed with JMC and other performance analysis tools, as shown below (for information security and other reasons, only an example of the analysis process is shown):

Figure 4-1 JMC performance Monitoring and analysis

Analysis of the hot methods shows two hotspots in the message sending path:

    1. The handler for message-send performance statistics;
    2. The encode handler.

A comparative performance test against the Netty 3 version of the business product showed that these two handlers are hot methods there as well. Since they are hotspots in both versions, why did performance drop so badly after switching to Netty 4?

The difference between the two versions was found by analyzing the methods' call trees: in Netty 3, both hot methods are executed by business threads, whereas in Netty 4 they are executed by the NioEventLoop (I/O) thread. For a single link the business side has a thread pool with many threads while there is only one NioEventLoop, so execution is less efficient and the response time returned to the client grows; once latency increases, system concurrency naturally drops and performance decreases.

After identifying the root cause, the business was optimized specifically for the Netty 4 threading model, after which performance met expectations and far exceeded that of the old Netty 3 version.
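
One optimization of this kind, as a minimal sketch (the handler class names and the thread count are placeholders, not the product's real code): Netty 4 allows handlers to be registered with a separate EventExecutorGroup, so that time-consuming statistics and encoding work runs on dedicated threads instead of on the NioEventLoop I/O thread:

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public class BusinessChannelInitializer extends ChannelInitializer<SocketChannel> {
    // dedicated threads for the expensive handlers; 16 is only an illustrative size
    private static final EventExecutorGroup BUSINESS_GROUP = new DefaultEventExecutorGroup(16);

    @Override
    protected void initChannel(SocketChannel ch) {
        // handlers added with an EventExecutorGroup are invoked on that group's
        // threads, leaving the NioEventLoop free for pure I/O work
        ch.pipeline().addLast(BUSINESS_GROUP, "perfStatsHandler", new PerfStatsHandler());
        ch.pipeline().addLast(BUSINESS_GROUP, "businessEncoder", new BusinessEncoder());
    }
}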

The Netty 3 business thread scheduling model is shown below: by taking advantage of multiple business threads encoding and running handlers in parallel, N business messages can be processed within a period T.

Figure 4-2 Netty 3 service scheduling performance model

After switching to Netty 4, the time-consuming business handlers are executed serially by the I/O thread, resulting in a significant performance decrease:

Figure 4-3 Netty 4 service scheduling performance model

4.3. Summary of issues

The root cause of this problem is the Netty 4 threading model change: after the threading model changes, not only can business functionality be affected, performance can also be strongly impacted.

A Netty upgrade needs to be considered from many angles, such as functionality, compatibility and performance. Do not stare only at the sesame seed of API changes while losing the watermelon of performance. API changes lead to compilation errors, but performance degradation hides out of sight, and a moment of carelessness is enough to fall into the trap.

For internet applications that emphasize fast delivery, agile development and gray releases, you need to be even more careful when upgrading.

5. Lost context after the Netty upgrade

5.1. Problem description

To improve the business's secondary customization capability and reduce intrusion into interfaces, the business uses thread-local variables to pass the message context, for example the message source address, message ID, session ID, and so on.

The business also uses a number of third-party open source containers that likewise provide thread-level variable contexts; the business obtains the containers' system variables through the container context.

After the upgrade to Netty 4, a null pointer exception occurred in the business ChannelHandler inherited from Netty: neither the business's own thread context nor the third-party container's thread context yielded the variable values that had been passed in.

5.2. Problem localization

First we checked the code to confirm that the business did set the relevant variables. Once confirmed, suspicion turned to the Netty version upgrade. Debugging showed that the thread context object obtained in the business ChannelHandler is not the same one into which the business had previously put the context; in other words, the thread executing the ChannelHandler is not the same thread that processed the business logic!

Checking the relevant doc on the Netty 4 threading model shows that Netty changed the outbound threading model, which affects thread context passing when business messages are sent and ultimately causes the thread-local variables to be lost.
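
One possible workaround, sketched below with a made-up key name: since the outbound ChannelHandler now runs on the NioEventLoop thread, a ThreadLocal set in the business thread is invisible to it. Attaching the context to the Channel via an AttributeKey, or carrying it on the message object itself, keeps it reachable from whichever thread runs the handler:

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.util.AttributeKey;

public class ContextAwareHandler extends ChannelDuplexHandler {
    // channel-scoped attribute used instead of a ThreadLocal
    public static final AttributeKey<String> SESSION_ID = AttributeKey.valueOf("sessionId");

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        // runs on the NioEventLoop thread, yet still sees the value the business
        // thread stored on the channel before calling writeAndFlush()
        String sessionId = ctx.channel().attr(SESSION_ID).get();
        // ... use sessionId for logging, statistics, message headers, etc. ...
        super.write(ctx, msg, promise);
    }
}

Note that a channel attribute is per connection, not per message; if several messages can be in flight on one channel at the same time, carrying the context on the message object itself is the safer choice.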

5.3. Summary of issues

Common threading models used by businesses include:

    1. The business defines its own thread pool/thread group to process business logic, for example using the ExecutorService introduced in JDK 1.5;
    2. Using the threading model that comes with a Java EE web container, such as the HTTP processing threads of JBoss or Tomcat;
    3. Implicitly using the threading model of another third-party framework, for example using an NIO framework for protocol processing, in which case the business code implicitly runs on the NIO framework's threads unless the business explicitly implements its own threading model.

In practice we find that many businesses use third-party frameworks but are only familiar with their APIs and features; they are unclear about the threading model and confused about which thread calls into which library. In order to pass variables conveniently, thread-local variables are used casually, which in effect creates a strong dependency on the threading model behind the third-party library. If the threading model changes after the container or library is upgraded, the existing functionality breaks.

For this reason, in practice try not to depend strongly on the threading model of a third-party library; if that is unavoidable, you must have a deep and clear understanding of its threading model. When the third-party library is upgraded, check whether its threading model has changed; if it has, the related code should be upgraded in step.

6. Netty 3.X vs. Netty 4.X threading models

By analyzing and summarizing the typical upgrade failure cases above, we find a common denominator: the threading model change is the culprit!

In the following sections we describe the I/O threading models of the Netty 3 and Netty 4 versions in detail, to make it easier for everyone to master the differences between the two and step on as few mines as possible during upgrade and use.

6.1 Netty 3.X version threading model

The I/O operation threading model of Netty 3.X is relatively complex. Its processing model consists of two parts:

    1. Inbound: mainly includes link establishment events, link activation events, read events, I/O exception events, link close events, and so on;
    2. Outbound: mainly includes write events, connect events, listen/bind events, flush events, and so on.

We first analyze the threading model for the inbound operation:

Figure 6-1 Netty 3 Inbound operation threading Model

As you can see, the main processing flow for the inbound operation is as follows:

    1. The I/O thread (work thread) reads the message from the TCP buffer into the SocketChannel receive buffer;
    2. The I/O thread is responsible for generating the corresponding event, triggering it and dispatching it upwards into the ChannelPipeline;
    3. The I/O thread schedules and executes the corresponding methods of the handler chain in the ChannelPipeline, up to the last handler implemented by the business;
    4. The last handler encapsulates the message into a Runnable and executes it in the business thread pool (a sketch of this hand-off follows this list); the I/O thread then returns and resumes I/O operations such as read/write;
    5. The business thread pool pops the messages from the task queue and executes the business logic concurrently.
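
A minimal sketch of how step 4 is typically realized in Netty 3 (the decoder and business handler class names are placeholders): an ExecutionHandler backed by a business thread pool is inserted into the pipeline, so the handlers behind it run on business threads while the I/O work thread returns:

import java.util.concurrent.Executor;

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

public class Netty3PipelineExample {
    // 16 threads, 1 MB per-channel and 64 MB total pending-memory limits (illustrative values)
    private static final Executor BUSINESS_POOL =
            new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 67108864);

    public static ChannelPipeline buildPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("decoder", new BusinessDecoder());                // runs on the I/O work thread
        pipeline.addLast("executor", new ExecutionHandler(BUSINESS_POOL)); // hands off to business threads
        pipeline.addLast("businessHandler", new BusinessHandler());        // runs on a business thread
        return pipeline;
    }
}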

From the analysis of Netty 3's inbound operations, we can see that the inbound handlers are executed by the Netty I/O work threads.

Here we continue to analyze the threading model of the outbound operation:

Figure 6-2 Netty 3 Outbound operation threading Model

As you can see, the main processing flow for the outbound operation is as follows:

A business thread initiates a channel write operation to send a message, and then:

    1. Netty encapsulates the write operation into a write event and triggers it to propagate downwards;
    2. The write event is dispatched to the ChannelPipeline, and the business thread serially invokes, along the handler chain, the ChannelHandlers that support downstream events;
    3. Execution reaches the last system ChannelHandler, which pushes the encoded message into the send queue, and the business thread returns;
    4. The Netty I/O thread takes the message out of the send queue and calls the SocketChannel write method to send it.
6.2 Netty 4.X Version threading model

Compared with the Netty 3.X series, the I/O operation threading model of Netty 4.X is relatively simple. Its schematic diagram is as follows:

Figure 6-3 Netty 4 Inbound and outbound operation threading Model

As you can see, the main processing flow for inbound and outbound operations is as follows:

    1. The I/O thread NioEventLoop reads the datagram from the SocketChannel and posts the ByteBuf to the ChannelPipeline, triggering the ChannelRead event;
    2. The I/O thread NioEventLoop calls the ChannelHandler chain until the message is posted to the business thread, then the I/O thread returns and continues with subsequent read and write operations;
    3. The business thread calls the ChannelHandlerContext.write(Object msg) method to send the message;
    4. If the write operation is initiated by a business thread, the ChannelHandlerInvoker encapsulates the message to be sent into a task and puts it into the task queue of the I/O thread NioEventLoop, where it is uniformly scheduled and executed by NioEventLoop in its loop; once the task is queued, the business thread returns;
    5. The I/O thread NioEventLoop calls the ChannelHandler chain to send the message and processes the outbound events until the message is placed into the send queue, then wakes up the Selector and performs the write operation.

Through the flow analysis we find that Netty 4 has modified the threading model: both inbound and outbound operations are scheduled and executed by the I/O thread NioEventLoop.

6.3. Threading Model Comparison

Before pitting the new and old threading models against each other, we first need to become familiar with the concept of serial (lock-free) design:

We know that frequent thread context switches at runtime bring additional performance loss. When a business flow is executed concurrently by multiple threads, business developers must also stay constantly alert to thread safety: which data can be modified concurrently, and how should it be protected? This not only reduces development efficiency but also introduces additional performance loss.

To solve these problems, Netty 4 adopts a serialization design: from message reading and encoding to the execution of the subsequent handlers, the I/O thread NioEventLoop is always responsible, so the whole flow never switches thread context and the data never faces the risk of concurrent modification. For the user it is not even necessary to know Netty's thread details, which is a really good design. Its working principle is as follows:

Figure 6-4 Netty 4 serialization design concept

A NioEventLoop aggregates one multiplexer (Selector), so it can handle hundreds or thousands of client connections. Netty's strategy is that, whenever a new client connects, the NioEventLoop thread group obtains an available NioEventLoop by round robin, wrapping back to 0 at the upper bound of the array; in this way the load on the NioEventLoops is essentially balanced. A client connection is registered to only one NioEventLoop, which prevents multiple I/O threads from operating on it concurrently.
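
A simplified sketch of the round-robin idea described above (not Netty's actual source, and ignoring further optimizations Netty applies): each new connection is bound to the next executor in the array, wrapping around at the end, so the load is spread evenly and each connection then stays with the NioEventLoop it was given:

import java.util.concurrent.atomic.AtomicInteger;

final class RoundRobinChooser<T> {
    private final T[] executors;
    private final AtomicInteger idx = new AtomicInteger();

    RoundRobinChooser(T[] executors) {
        this.executors = executors;
    }

    // returns executors[0], executors[1], ..., executors[n-1], executors[0], ...
    T next() {
        return executors[Math.abs(idx.getAndIncrement() % executors.length)];
    }
}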

Through the serialization design, Netty reduces the user's development difficulty and improves processing performance. Using thread groups allows multiple serialized threads to run in parallel without intersecting each other, which takes full advantage of multi-core parallel processing while avoiding the additional performance loss of thread context switching and concurrency protection.

Having understood the serialization design of Netty 4, let us continue to look at the problems of the Netty 3 threading model, which can be summarized as follows:

    1. Although inbound and outbound are both I/O related operations, their threading models are not unified, which brings extra learning and usage cost to users;
    2. Outbound operations are executed by business threads. Usually the business uses a thread pool to process messages in parallel, which means that at some point multiple business threads will operate on a ChannelHandler concurrently; we then need to protect the ChannelHandler against concurrent access, usually with locks. If the scope of the synchronized block is inappropriate, it can lead to serious performance bottlenecks. This places very high demands on developers and reduces development efficiency;
    3. If an outbound operation such as message encoding throws an exception, the exception is converted into an inbound exception and notified to the ChannelPipeline, which means a business thread has initiated an inbound operation! This breaks the model that inbound operations are performed by the I/O thread, and if the developer designed according to the constraint that inbound operations are only executed by an I/O thread, thread concurrency safety issues appear. The error is very subtle, because the scenario occurs only when that specific exception is thrown; once this type of thread concurrency problem appears in a production environment, the difficulty and cost of locating it are very high.

Having said so much, it may seem that Netty 4 completely beats the threading model of Netty 3. In fact it does not: Netty 3 may perform better in certain scenarios, as described in section 4 of this article, where encoding and other outbound operations are time-consuming and executed concurrently by multiple business threads, and throughput is certainly higher than that of a single NioEventLoop thread.

However, this performance advantage is not insurmountable. If we modify the business code, move the time-consuming handlers earlier in the flow, and do no complex business logic processing in the outbound path, performance does not lose to Netty 3; and once we factor in the memory pool optimization, the fact that event objects are no longer created repeatedly, and that there is no longer any need for Netty 3-specific workarounds such as locking handlers, the overall performance of the Netty 4 version will certainly be higher.

In a word, if users truly familiarize themselves with and master the Netty 4 threading model and class libraries, I believe that not only will development become simpler, performance will also be better!

6.4. Reflections

As far as Netty is concerned, mastering the threading model is as important as being familiar with its APIs and features. Many of the functional and performance problems encountered are caused by a lack of understanding of its threading model and principles, after which we jump to arbitrary conclusions, such as that Netty 4 is not as usable as Netty 3, and so on.

It is not the case that every new version of every open source project is necessarily better than the old one, but as far as Netty is concerned, I think Netty 4, compared with the old Netty 3, is indeed a great step forward.
