Netty: A non-blocking Client/Server framework.
Author: chszs; please credit the source when reprinting. Blog homepage: http://blog.csdn.net/chszs
Netty is an asynchronous, event-driven network application framework that has brought new vigor to Java network application development. It comprises protocol servers and clients and can be used to quickly develop maintainable, high-performance software. The framework and its tools simplify network programming and are maintained by the Netty community.
Netty is also classified as an NIO client/server framework: it enables quick and easy development of network applications and simplifies and streamlines network programming for TCP and UDP socket servers.
Its built-in HTTP protocol support includes WebSocket, and the framework can run inside a Servlet container. Recent versions of Netty support both non-blocking and blocking I/O communication.
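To make the non-blocking model concrete, here is a minimal sketch of the JDK NIO selector loop that frameworks like Netty build on. This is plain `java.nio`, not Netty's API; the `Pipe` stands in for a remote peer so the example runs self-contained.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// One thread multiplexes readiness events with a Selector instead of
// blocking on each connection -- the essence of non-blocking I/O.
public class SelectorDemo {
    public static String roundTrip(String msg) throws IOException {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);      // non-blocking read side
        try (Selector selector = Selector.open()) {
            pipe.source().register(selector, SelectionKey.OP_READ);
            // Write side: in a real server this would be a network peer.
            pipe.sink().write(ByteBuffer.wrap(msg.getBytes()));
            selector.select();                       // wait until a channel is ready
            ByteBuffer buf = ByteBuffer.allocate(64);
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isReadable()) {
                    pipe.source().read(buf);         // guaranteed not to block here
                }
            }
            buf.flip();
            return new String(buf.array(), 0, buf.limit());
        } finally {
            pipe.sink().close();
            pipe.source().close();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("hello"));
    }
}
```

Netty wraps this selector loop in its event loops and channel pipeline so application code never touches `Selector` directly.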
Features of Netty:
1. Transport services: sockets and datagrams, HTTP tunneling, and in-VM pipes.
2. Protocol support and extensions: HTTP, WebSocket, Google Protocol Buffers, SSL/StartTLS, large file transfer, RTSP, zlib/gzip compression, binary protocols, and legacy text protocols.
3. Core: an extensible event model, a unified communication API, and zero-copy-capable rich byte buffers.
Netty design:
Netty is designed around a single unified API covering multiple transport types, both blocking and non-blocking sockets. Its event model is extensible and cleanly separates concerns. Its thread model is flexible: you can choose a single thread or a thread pool in the style of SEDA, and threading is highly customizable. Datagram support enables truly connectionless communication. The combination of Netty's pipeline abstraction, thread safety, and the ability to modify the pipeline dynamically gives the framework strong extensibility.
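The pipeline abstraction can be illustrated with a toy, dependency-free model. Netty's real ChannelPipeline is far richer (bidirectional, event-typed, dynamically modifiable under concurrency); the class and method names below are invented for illustration only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Toy model of a handler pipeline: each inbound message flows through
// the chained handlers in order, each transforming it and passing it on.
public class MiniPipeline {
    private final List<UnaryOperator<String>> handlers = new ArrayList<>();

    public MiniPipeline addLast(UnaryOperator<String> handler) {
        handlers.add(handler);      // handlers can be added at runtime
        return this;
    }

    public String fireInbound(String msg) {
        for (UnaryOperator<String> h : handlers) {
            msg = h.apply(msg);     // each handler transforms and forwards
        }
        return msg;
    }

    public static void main(String[] args) {
        String out = new MiniPipeline()
                .addLast(String::trim)
                .addLast(String::toUpperCase)
                .fireInbound("  hi ");
        System.out.println(out);
    }
}
```

The point of the design is that protocol concerns (decoding, compression, SSL) become independent handlers that can be composed and swapped without touching the I/O layer.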
Note: SEDA stands for Staged Event-Driven Architecture. The idea is to split the work originally done by a single thread into several relatively independent stages; each stage is run by its own dedicated set of threads, and stages exchange work through queues. SEDA only pays off once concurrency rises high enough to become the system bottleneck; for a single request, the queues necessarily add latency.
For more information, see the paper "SEDA: An Architecture for Well-Conditioned, Scalable Internet Services".
SEDA is a high-performance Internet server architecture model developed at UC Berkeley. Its design goals are support for large-scale concurrency, simplified system development, support for processing monitoring, and support for system resource management.
Two widely used network server architecture models:
1) Threaded Server
Working principle: for each request, a dispatcher creates and assigns a thread that handles the request end to end. This approach is also called thread-per-request.
Advantage: the unit of execution is the complete processing flow, so the logic is clear and easy to develop.
Disadvantage: as requests accumulate, too many threads run concurrently. Excessive threads burden the system with scheduling overhead and resource contention, causing a sharp drop in performance and processing capacity.
Improvement: introduce a bounded thread pool.
The system creates only a fixed number of threads; once all of them are busy, new requests must either wait or be discarded.
Disadvantage: the unit of execution is still the complete processing flow, so it remains difficult to identify the root cause of a performance bottleneck and tune accordingly.
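The bounded thread pool improvement maps directly onto the JDK's `ThreadPoolExecutor`. A minimal sketch (pool size and queue depth are arbitrary illustration values):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// At most 4 worker threads and 8 queued requests; anything beyond that
// is rejected instead of exhausting the system -- the "wait or discard"
// behavior described above.
public class BoundedPoolDemo {
    public static ThreadPoolExecutor newBoundedPool() {
        return new ThreadPoolExecutor(
                4, 4,                                   // fixed pool of 4 threads
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(8),            // bounded request queue
                new ThreadPoolExecutor.AbortPolicy());  // reject when saturated
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = newBoundedPool();
        for (int i = 0; i < 20; i++) {
            try {
                pool.execute(() -> {
                    try { Thread.sleep(50); } catch (InterruptedException ignored) {}
                });
            } catch (RejectedExecutionException e) {
                System.out.println("request rejected");  // saturated: wait or discard
            }
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Swapping `AbortPolicy` for `CallerRunsPolicy` would make saturated requests wait (by running on the caller) instead of being discarded.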
2) Event-Driven Concurrency
The processing flow is divided into multiple steps, each implemented as a finite state machine (FSM).
Working principle: every request enters the system as an event, and a scheduler is responsible for delivering it to an FSM. The FSM's result is emitted back to the scheduler as another event, which the scheduler forwards to the next FSM, and so on until processing is complete.
Advantages:
1. As load increases, system throughput grows linearly, and once the system reaches its saturation capacity, throughput does not degrade.
2. Because each processing step is implemented independently, the system is easy to monitor and tune.
Disadvantages:
The scheduler is complex to design and implement, and must be re-implemented whenever the application or system logic changes.
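The scheduler-plus-FSM loop described above can be sketched in a few lines. The states and transition function here are invented purely to show the shape of the dispatch loop:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy event-driven model: a request moves through fixed states, and a
// scheduler loop forwards the event emitted by each FSM step to the
// next one until the request reaches DONE.
public class FsmScheduler {
    enum State { PARSE, HANDLE, RESPOND, DONE }

    // Each "FSM step" consumes a state and produces the next state.
    static State step(State s) {
        switch (s) {
            case PARSE:   return State.HANDLE;
            case HANDLE:  return State.RESPOND;
            default:      return State.DONE;
        }
    }

    // The scheduler: pulls events from a queue and re-enqueues the
    // output of each step until processing is complete.
    public static List<State> run(State initial) {
        List<State> trace = new ArrayList<>();
        Deque<State> events = new ArrayDeque<>();
        events.add(initial);
        while (!events.isEmpty()) {
            State s = events.poll();
            trace.add(s);
            if (s != State.DONE) {
                events.add(step(s));  // forward the result as a new event
            }
        }
        return trace;
    }

    public static void main(String[] args) {
        System.out.println(run(State.PARSE)); // [PARSE, HANDLE, RESPOND, DONE]
    }
}
```

The complexity criticism is visible even here: `step` hard-codes the application's flow, so any change to the processing logic means rewriting the scheduler's dispatch rules.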
SEDA Architecture
(Similar to event-driven concurrency, but without a central scheduler.) Each processing step is an independent Stage.
Stage Structure:
1) an event queue that accepts input;
2) an event handler written by the application developer;
3) a controller that governs execution, including the number of concurrent threads and the batch size;
4) a thread pool for concurrent processing.
A stage takes its input from its event queue and pushes its output, as events, onto the event queues of other stages. The wiring between stages is specified by the application developer.
Trade-off: although the event queues reduce coupling between modules, they increase response latency.
Performance and efficiency:
Netty provides not only good stability but also high throughput and low latency. It keeps memory copying to a minimum, and its zero-copy-capable rich byte buffers let the kernel handle DMA transfers, reducing the load on the CPU and the system bus and improving the framework's efficiency.
Scalability and integration:
Netty scales to thousands of connections without hitting a performance bottleneck while remaining efficient, and those connections are reliable rather than prone to being dropped. Netty is easy to extend and build on, and it integrates flexibly with many environments, including Linux and languages such as Java, C#, C++, and Python.
Security: Netty provides complete SSL/TLS and StartTLS support.
Netty provides extensive official guides, documentation, Javadoc, and examples for developers.
The latest stable version of Netty is 4.0.23.
Download: http://dl.bintray.com/netty/downloads/netty-4.0.23.Final.tar.bz2
Android client + server: technical options
Netty is an open-source Java framework from JBoss. It provides an asynchronous, event-driven network application framework and tools for rapidly developing high-performance, highly reliable network servers and clients. In other words, Netty is an NIO-based client/server programming framework: on top of sockets it adds further encapsulation for common application protocols, exposing more convenient interfaces. If you need to build a client/server framework quickly, Netty is the right choice.
Conversely, if you are still learning, you should first master basic socket programming and communication principles, so it is better to program directly against sockets while studying. Perhaps one day you will be inspired to build an even better and more powerful framework than Netty.
How to write an HTTP persistent-connection server with Netty
I read the Play framework's source code again. In Play's custom handler, messageReceived looks like this:

```java
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
        throws Exception {
    Invoker.invoke(new NettyInvokation(request, response, ctx, nettyRequest, e));
}
```

After messageReceived returns, the worker thread soon leaves this request (after some framework-scheduled final work), i.e. the request no longer ties up the worker. However,
Invoker.invoke submits a task to a thread pool inside the Play framework, which continues processing the request and performs the real business logic.
That is Play's approach. To summarize: in messageReceived you can start a new thread, or submit a task to a thread pool, to perform the business-logic part of the request. This way the persistent connection stays open and Netty's worker threads are not tied up. Once the business logic completes, a callback returns the result to the client using the HttpResponse that Netty provides.
Request -> Netty boss -> Netty worker -> (the worker can now handle other requests) a new thread runs the business logic -> Netty sends the response
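The hand-off pattern summarized above can be sketched with only the JDK: the I/O worker enqueues the business logic on a separate pool and returns at once, and a callback later writes the response. In real Netty code the callback would build an HttpResponse and write it back through the channel context; the names and the echo logic here are invented to keep the example dependency-free.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// The I/O thread must not block: business logic runs on its own pool,
// and the response is written by a callback when that work completes.
public class HandoffDemo {
    private static final ExecutorService businessPool = Executors.newFixedThreadPool(8);

    // Called from the I/O thread; returns immediately.
    public static CompletableFuture<Void> messageReceived(String request,
                                                          Consumer<String> writeResponse) {
        return CompletableFuture
                .supplyAsync(() -> handle(request), businessPool) // off the I/O thread
                .thenAccept(writeResponse);                       // callback: send the response
    }

    private static String handle(String request) {
        return "echo: " + request;  // stand-in for the real business logic
    }

    public static void main(String[] args) {
        messageReceived("ping", System.out::println).join();
        businessPool.shutdown();
    }
}
```

The worker thread that called messageReceived is free as soon as supplyAsync returns, which is exactly why the long-lived connection does not occupy a Netty worker.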
I haven't tested this yet; I'll take a closer look later.