Linux IO Model and Java Network Programming

Source: Internet
Author: User
Tags: connection pooling, epoll
I. Semantics of socket API operations in network programming
API | Blocking | Non-blocking
connect | Returns after the TCP three-way handshake completes successfully. | Returns immediately; another mechanism is needed to determine whether the connection eventually succeeded or failed.
send | Blocks until the outgoing data has been copied from user space into the kernel send buffer, then returns. | Returns immediately, regardless of whether the data was actually placed in the kernel send buffer.
recv | Blocks until data reaches the kernel receive buffer and has been copied from the kernel into user space, then returns the data. | Returns immediately, regardless of whether any data has arrived in the kernel receive buffer.
close | (omitted) | (omitted)
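As a rough illustration of the non-blocking connect semantics above, here is a minimal Java NIO sketch (the host and port are hypothetical). connect() returns immediately, and finishConnect() is the "other mechanism" used to learn whether the handshake completed:

```java
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class NonBlockingConnectDemo {
    public static void main(String[] args) throws Exception {
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);              // switch the socket to non-blocking mode

        // In non-blocking mode connect() returns immediately, usually before the
        // TCP three-way handshake has finished.
        boolean connected = channel.connect(new InetSocketAddress("example.com", 80));

        while (!connected) {
            // finishConnect() returns true once the handshake has completed,
            // or throws an IOException if the connection attempt failed.
            connected = channel.finishConnect();
            Thread.sleep(10);                          // naive polling, for illustration only
        }
        System.out.println("connected: " + channel.isConnected());
        channel.close();
    }
}
```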
II. Classification of IO models

1. Classification of Linux IO models

Linux IO models can be divided into: synchronous blocking IO, synchronous non-blocking IO, IO multiplexing, signal-driven IO, and asynchronous IO. The POSIX standard divides them into two categories: synchronous IO and asynchronous IO.
An IO operation consists of two steps: initiating the IO request and performing the actual IO operation.

Blocking/non-blocking describes whether the "initiate IO request" step blocks the requesting thread.

Synchronous/asynchronous describes whether the "actual IO read/write" step blocks the requesting thread.

Explanation of the difference:
1. Synchronous/asynchronous is about the mechanism of message notification, while blocking/non-blocking is about the state of the program (thread) while it waits for the notification.
2. In the synchronous case, the caller itself waits for the message and then handles it; in the asynchronous case, a notification mechanism tells the caller when the message is ready to be handled.
3. Blocking/non-blocking mainly looks at the thread's state while it waits for the notification. In a blocking call, the current thread is suspended until the result is returned, it cannot do any other work, and the function only returns once the result is available. In a non-blocking call, the function returns immediately even if the result is not yet available, without suspending the current thread.
IO model | Blocking? | Synchronous?
Synchronous blocking IO | Blocking | Synchronous
Synchronous non-blocking IO | Non-blocking | Synchronous
IO multiplexing | Blocking | Synchronous
Asynchronous IO (AIO) | Non-blocking | Asynchronous
2. Synchronous blocking IO (blocking IO)
Features: the user thread is blocked in both steps of the IO operation.

Advantages: data is obtained as soon as it becomes available, with no extra delay.
Disadvantages: the user thread blocks and waits, so performance is poor.
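A minimal sketch of this model, assuming a hypothetical host and port: the connect, write, and read calls all suspend the calling thread until the corresponding step completes.

```java
import java.io.InputStream;
import java.net.Socket;

public class BlockingIoDemo {
    public static void main(String[] args) throws Exception {
        // Blocking connect: returns only after the three-way handshake completes.
        try (Socket socket = new Socket("example.com", 80)) {
            socket.getOutputStream().write("GET / HTTP/1.0\r\n\r\n".getBytes());

            InputStream in = socket.getInputStream();
            byte[] buf = new byte[1024];
            // Blocking read: the thread is suspended here until data reaches the
            // kernel receive buffer and has been copied into 'buf'.
            int n = in.read(buf);
            System.out.println("read " + n + " bytes");
        }
    }
}
```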
3. Synchronous non-blocking IO (non-blocking IO)

Features:
1. After the user thread issues an IO request, if the IO condition is not yet satisfied it immediately receives a failure response, so the user thread is not blocked.
2. The user thread must keep issuing IO requests to ask the kernel whether the IO condition is now satisfied.
3. During the actual IO operation, while the data is being copied into user space, the user thread is still blocked.

Disadvantages:
1. The user thread has to poll continuously until the IO condition is satisfied, which wastes system resources.
2. The response time of each IO task increases, lowering the overall throughput of the system.
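A minimal sketch of the polling style, again with a hypothetical address: once the channel is switched to non-blocking mode, read() returns 0 immediately whenever no data has arrived yet, so the thread must keep asking.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class NonBlockingPollingDemo {
    public static void main(String[] args) throws Exception {
        // Blocking connect and request write, just to have something to read back.
        SocketChannel channel = SocketChannel.open(new InetSocketAddress("example.com", 80));
        channel.write(ByteBuffer.wrap("GET / HTTP/1.0\r\n\r\n".getBytes()));

        channel.configureBlocking(false);              // switch to non-blocking mode for the read
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        int n;
        // Poll the kernel: read() returns 0 while the receive buffer is still empty,
        // a positive count once data is ready, or -1 at end of stream.
        while ((n = channel.read(buffer)) == 0) {
            Thread.sleep(10);                          // naive polling, for illustration only
        }
        System.out.println("read " + n + " bytes");
        channel.close();
    }
}
```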
4. IO multiplexing

Features:
1. The select/poll/epoll call also blocks the user thread, but the thread is blocked inside the select/poll/epoll function rather than in the real IO system call.
2. A single thread can handle multiple network connections at the same time: select/poll/epoll continually polls all of the sockets it is watching and notifies the user thread when an event is ready on one of them.
3. When many client requests must be handled at the same time, either multithreading or IO multiplexing can be used. The biggest advantage of IO multiplexing is its low overhead: the system can serve multiple client requests within a single thread.
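A minimal Java sketch of the multiplexing loop (the port is hypothetical): the single thread blocks in Selector.select() on behalf of all registered channels, and on Linux the JDK Selector is typically backed by epoll.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class MultiplexingDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                          // the thread blocks here, not in read()/accept()
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {               // a new connection is ready to be accepted
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {          // data has arrived in the kernel buffer
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) == -1) {
                        client.close();                 // peer closed the connection
                    }
                }
            }
        }
    }
}
```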
5. Asynchronous non-blocking IO (asynchronous IO)

Features:
1. The user thread is not blocked in either step of the IO operation and can carry on with other business logic.
2. When the kernel has finished copying the data into user space, it sends a signal to the user thread or invokes a callback registered by the user thread, telling it that the IO operation has completed.
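A minimal sketch using the JDK's asynchronous channels (NIO.2), with a hypothetical address: the read is handed to the system, and the completion handler is invoked only after the data has already been copied into the user-space buffer.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

public class AioDemo {
    public static void main(String[] args) throws Exception {
        AsynchronousSocketChannel channel = AsynchronousSocketChannel.open();
        channel.connect(new InetSocketAddress("example.com", 80)).get();   // wait for the connect only
        channel.write(ByteBuffer.wrap("GET / HTTP/1.0\r\n\r\n".getBytes())).get();

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        // Hand the whole read to the system; the calling thread is not blocked.
        channel.read(buffer, buffer, new CompletionHandler<Integer, ByteBuffer>() {
            @Override
            public void completed(Integer bytesRead, ByteBuffer buf) {
                // Invoked after the data has already been copied into 'buf'.
                System.out.println("read completed, " + bytesRead + " bytes");
            }

            @Override
            public void failed(Throwable exc, ByteBuffer buf) {
                exc.printStackTrace();
            }
        });

        // The main thread is free to do other work here.
        Thread.sleep(5000);                              // keep the JVM alive for the demo
    }
}
```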
6. Comparison of the five IO models

The difference between non-blocking IO and AIO:
1. With non-blocking IO, the user thread still has to check the IO condition proactively, and once the data is ready it still has to make the system call itself to copy the data into user space.
2. AIO is completely different: the user thread hands the entire IO operation over to the kernel, and the kernel signals the user thread only after the IO operation has finished; during that time the user thread does not need to do anything.
III. Common server-side models

1. BIO: acceptor main thread + multithreaded workers
Communication process:
1. A single acceptor thread on the server is responsible for listening for client connections (socket.accept).
2. When it hears a client connection request, the acceptor creates a new worker thread for each client socket connection to handle that link.
3. After the worker thread finishes processing the client socket's request, it writes the response back to the client through the output stream, the socket connection is closed, and the worker thread is destroyed.

Model disadvantages:
1. BIO read and write operations are synchronous and blocking; how long they block depends on the processing speed of the peer's IO thread and on the network IO transmission speed, so reliability is poor.

2. The number of server threads and the number of concurrent client connections are in a 1:1 ratio.

3. As concurrent client traffic grows, the number of server threads grows linearly with it and system performance drops sharply.

Optimization Method:

1. The server handles multiple client requests through a thread pool, using the pool to cap the thread resources spent on serving clients. This establishes a fixed ratio between the number of concurrent client connections M and the maximum number of threads N in the server's pool (a palliative that relieves the symptom rather than curing the underlying problem), as sketched below.
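A minimal sketch of the thread-pool variant (the class name, port, pool size, and echo handler are all hypothetical; the article's full example lives in the netty-study project referenced below):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BioThreadPoolServer {
    public static void main(String[] args) throws IOException {
        // A fixed pool caps the number of worker threads at N, no matter how many clients connect (M).
        ExecutorService workers = Executors.newFixedThreadPool(50);

        try (ServerSocket serverSocket = new ServerSocket(8080)) {
            while (true) {
                Socket socket = serverSocket.accept();     // the acceptor thread blocks here
                workers.execute(() -> handle(socket));     // hand the connection to a pooled worker
            }
        }
    }

    private static void handle(Socket socket) {
        try (Socket s = socket;
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            String line = in.readLine();                   // blocking read
            out.println("echo: " + line);                  // blocking write
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```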

Sample code:

The netty-study project, com.zhangyiwen.study.bio.multi_thread_server package.

2. NIO multiplexing + Reactor pattern

The core idea of the Reactor pattern: divide and conquer + event-driven.

1) Divide and conquer

The complete handling of a network connection is generally divided into the steps accept, read, decode, process, encode, and send.

The Reactor pattern maps each of these steps to a task: the smallest logical unit a server thread executes is no longer a complete network request but a single task, and tasks are executed in a non-blocking manner.

2) Event-driven

Each task corresponds to a specific network event. When a task is ready, the reactor receives the corresponding network event notification and dispatches the task to the handler bound to that event.

Reactor model diagrams:

1) Single reactor thread


2) Multiple Reactor threads

With multiple reactors, each reactor runs in its own thread, so request events from many clients can be handled in parallel.

Netty uses a similar pattern: the boss thread pool acts as multiple main reactors, and the worker thread pool acts as multiple sub-reactors (see the sketch after the feature list below).


Model features:
1. A NIO Channel is full-duplex, so it maps better onto the API of the underlying operating system (in the UNIX network programming model, the operating system's channels are full-duplex and support read and write at the same time).

2. The connect operation initiated by the client is asynchronous, so the client no longer has to block on it as before.

3. On mainstream operating systems such as Linux, the JDK Selector is implemented with epoll, so it has no hard limit on the number of connection handles. A single reactor thread can therefore handle thousands of client connections at the same time, and performance does not degrade linearly as the number of connections grows, which makes it suitable for high-performance, high-load network servers.
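As a rough sketch of the Netty mapping mentioned above (boss group = main reactors accepting connections, worker group = sub-reactors handling IO): the port and the echo handler are hypothetical, and the example assumes Netty 4 on the classpath. The article's full reactor example is in the project referenced below.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyMultiReactorServer {
    public static void main(String[] args) throws InterruptedException {
        // Boss group: the main reactor(s), only accepts new connections.
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        // Worker group: the sub-reactors, handle read/write events on accepted connections.
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup)
                     .channel(NioServerSocketChannel.class)
                     .childHandler(new ChannelInitializer<SocketChannel>() {
                         @Override
                         protected void initChannel(SocketChannel ch) {
                             // Hypothetical echo handler; real pipelines add decoders/encoders here.
                             ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                                 @Override
                                 public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                     ctx.writeAndFlush(msg);   // echo the bytes back
                                 }
                             });
                         }
                     });
            ChannelFuture future = bootstrap.bind(8080).sync();
            future.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
```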

Sample code:

The netty-study project, com.zhangyiwen.study.nio.reactor_demo package.
IV. Common client-side models

1. Blocking send/recv mode

Model advantages: code written with blocking send/recv is simple, which makes it well suited to wrapping in a synchronous interface.
Model disadvantages: it is not suitable for asynchronous interfaces, and the thread cannot handle any other work while it is blocked in recv.

2. IO multiplexing mode

Model advantages:
1. It can be wrapped either in a synchronous interface or in an asynchronous callback interface, so it is more flexible.
2. With an asynchronous callback interface, multiple requests can be sent, and the data can be processed as it is received.

V. Concurrent calls from a client library

1. Blocking send/recv + thread pool
Model advantages:

A thread pool is convenient and easy to use.

Model disadvantages:

It depends on the number of threads: too few and concurrent processing capacity is weak; too many and thread switching becomes frequent.
2. NIO multiplexing + connection pool

Model advantages:

Frequent thread switching can be avoided.

Model disadvantages:

If the official client library does not provide this capability, you have to parse the protocol yourself, which is not as convenient and quick as simply using a thread pool.
3. Connection multiplexing

Model advantages:

Typically a custom binary protocol is used in which every request carries a unique serial number; the response (rsp) carries the same serial number, which is used to find the request (req) it belongs to.

This allows many requests to be sent, and their responses received, over a single connection, without needing either a thread pool or a connection pool. A sketch of this correlation scheme follows.
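A minimal sketch of the serial-number correlation, under the assumption of a hypothetical framing in which every response frame carries the serial number of the request it answers; pending requests are tracked in a map of futures keyed by that number.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical client-side correlation for a multiplexed connection:
// every request gets a unique serial number, and the response carrying the
// same serial number completes the matching future.
public class MultiplexedClient {
    private final AtomicLong nextId = new AtomicLong();
    private final Map<Long, CompletableFuture<byte[]>> pending = new ConcurrentHashMap<>();

    /** Called by business code; many callers can share one underlying connection. */
    public CompletableFuture<byte[]> send(byte[] payload) {
        long id = nextId.incrementAndGet();
        CompletableFuture<byte[]> future = new CompletableFuture<>();
        pending.put(id, future);
        writeFrame(id, payload);                 // serialize (id, payload) onto the single connection
        return future;
    }

    /** Called by the IO thread whenever a complete response frame arrives. */
    public void onResponse(long id, byte[] body) {
        CompletableFuture<byte[]> future = pending.remove(id);
        if (future != null) {
            future.complete(body);               // wake up whoever sent request 'id'
        }
    }

    private void writeFrame(long id, byte[] payload) {
        // Placeholder: the actual frame encoding and socket write are protocol-specific.
    }
}
```

The IO thread that reads the single shared connection calls onResponse(), so business threads need neither a dedicated connection nor a worker thread of their own.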