Asynchronous non-blocking

Source: Internet
Author: User
Tags: epoll

 

The advantages of using event-driven, asynchronous programming are discussed first:

It makes full use of system resources: the executing code does not need to block while waiting for some operation to complete, so limited resources can be devoted to other tasks. This makes it ideal for back-end network service programming.

In server development, handling concurrent requests is a big problem, and blocking function calls lead to wasted resources and time delays. With event registration and asynchronous functions, developers can increase resource utilization and improve performance. Both Nginx and Node.js handle concurrency in an event-driven, asynchronous, non-blocking mode: Nginx handles concurrency using epoll, poll, and kqueue, while Node.js uses libev, and both handle large-scale HTTP requests very well.

Blocking

This is how the Node.js Development Guide defines it: when a thread encounters an I/O operation during execution, such as disk read/write or network traffic, the operation usually takes a long time, so the operating system deprives the thread of CPU control, suspends its execution, and cedes the resources to other worker threads. This method of thread scheduling is called blocking. When the I/O operation completes, the operating system lifts the thread's blocked state and restores its control of the CPU so it can continue executing. This I/O pattern is the usual synchronous I/O, also called blocking I/O.


A blocking call means that the current thread is suspended until the call returns a result; the function returns only after the result has been obtained. One might equate blocking calls with synchronous calls, but in fact they are different. In a synchronous call, the current thread is often still active; it is only that, logically, the current function has not yet returned. For example, if we call the Receive function on a CSocket and there is no data in the buffer, the function waits until data arrives before returning, yet during that time the current thread continues to process all kinds of messages. If the main window and the calling function are in the same thread, the main interface can still refresh, unless you make the call inside a special interface action function. The socket function recv is, by contrast, an example of a blocking call: when the socket works in blocking mode and the function is called with no data available, the current thread is suspended until data arrives.
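As a minimal sketch (not from the original article), this is what a blocking receive looks like in C, assuming `fd` is an already-connected TCP socket left in its default blocking mode:

```c
/* Minimal sketch: a blocking read on a connected TCP socket.
 * Assumes `fd` is already connected and still in its default (blocking) mode. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

ssize_t read_blocking(int fd, char *buf, size_t len)
{
    /* With no data in the socket buffer, recv() suspends the calling thread
     * right here until data arrives, the peer closes, or an error occurs. */
    ssize_t n = recv(fd, buf, len, 0);
    if (n < 0)
        perror("recv");
    else if (n == 0)
        printf("peer closed the connection\n");
    return n;
}
```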

Non-blocking

Non-blocking is defined as follows: when a thread encounters an I/O operation, it does not wait in a blocking manner for the I/O operation to complete or for the data to be returned; instead, it simply sends the I/O request to the operating system and proceeds to the next statement. When the operating system completes the I/O operation, it notifies the issuing thread with an event, and the thread handles that event at a suitable time.
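As a hedged illustration, the sketch below switches a descriptor to non-blocking mode with fcntl(); a read that finds no data then returns immediately with EAGAIN/EWOULDBLOCK instead of suspending the thread:

```c
/* Minimal sketch: non-blocking mode via O_NONBLOCK. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

void try_read(int fd)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* No data yet: the call returns immediately, so the thread can go on
         * to other work and come back when it is notified (or when it polls). */
        return;
    }
    if (n < 0)
        perror("read");
    /* else: process n bytes in buf */
}
```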

Contrasting blocking and non-blocking

In blocking mode, a thread can only handle one task; to improve throughput, you must use multiple threads.

In non-blocking mode, a thread is always performing compute operations, so the CPU core this thread runs on stays fully utilized, and I/O completion is delivered as event notifications.

In blocking mode, multithreading tends to improve system throughput, because while one thread is blocked the other threads keep working, so CPU resources are not wasted by the blocked threads.

In non-blocking mode, threads are never blocked by I/O and are always using the CPU; the benefit of multithreading is simply to use more cores on a multicore CPU.
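To make the blocking-mode point above concrete, here is a hedged sketch (my own illustration, not from the article) of the classic thread-per-connection model in C: every blocking read ties up a whole thread, so serving more clients concurrently means creating more threads. The port number and the trivial echo logic are arbitrary choices; compile with -pthread.

```c
/* Minimal sketch: blocking I/O with one thread per connection. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

static void *handle_client(void *arg)
{
    int fd = *(int *)arg;
    free(arg);
    char buf[1024];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)  /* blocks this whole thread */
        write(fd, buf, (size_t)n);                /* echo the data back */
    close(fd);
    return NULL;
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                  /* arbitrary example port */
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);

    for (;;) {
        int *cfd = malloc(sizeof(int));
        *cfd = accept(lfd, NULL, NULL);           /* blocks until a client connects */
        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, cfd);
        pthread_detach(tid);                      /* one thread per connection */
    }
}
```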

Take a look at the interpretation of asynchronous I/O in the book "In-depth Node.js": in the operating system, a program's space is divided into kernel space and user space. The asynchronous I/O we often mention essentially means that a program in user space does not have to wait for the I/O operation in kernel space to actually complete before it moves on to its next task.

The interpretation of blocking and non-blocking I/O

Blocking-mode I/O causes the application to wait until the I/O completes. The operating system also supports setting I/O operations to non-blocking mode, but then the application's call may return immediately without obtaining the real data, which requires multiple calls to confirm that the I/O operation has completed.

Synchronous and asynchronous I/O are distinctions that appear at the application level. If the application makes blocking I/O calls, then while it waits for the call to complete it is in a synchronous state. Conversely, when its I/O is in non-blocking mode, the application is asynchronous.

Referring to the explanation of synchronization in the classic introductory Node.js text: synchronous code means that each time an operation is performed, code execution blocks and cannot move on to the next operation until the current one completes. In other words, execution stops until the function returns; the code does not continue until then.

In contrast, asynchronous means that a function's execution does not have to wait for the result of an operation before proceeding; the result is handled by a callback when the corresponding event occurs.

Advantages and disadvantages of asynchronous I/O

The advantage of synchronous I/O is that it keeps the program easy to debug, but its shortcoming is obvious: during execution, if the program enters a time-consuming I/O operation, it has to wait for that I/O to complete, and during the wait it cannot make full use of the CPU, leaving the CPU idle. To make full use of the CPU and run I/O in parallel with computation, there are two common approaches:

(1) Multiple threads in a single process

The idea of multithreading is to process tasks in parallel within a shared program address space, thereby making full use of the CPU.

Disadvantages of multithreading:

First, context switches between threads are expensive at execution time, and each thread needs roughly 2 MB of memory for its stack, so threads occupy considerable resources.

Second, state synchronization (locking) problems make programs more complex to write and to call.

(2) Single-threaded, multiple processes

To avoid the inconvenience that multithreading brings, some languages choose to stay single-threaded to keep calls simple, and instead launch multiple processes to make full use of the CPU and improve overall parallel processing capability. The disadvantage shows up when the business logic is complex (involving multiple I/O calls): because the business logic cannot be spread across the processes, the transaction path ends up much longer than in the multithreaded mode.
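A rough sketch of this single-threaded, multi-process approach (my own illustration, in the spirit of a pre-forking server rather than any particular implementation): the parent opens one listening socket and forks a few single-threaded workers that each accept connections independently. The worker count and port are arbitrary.

```c
/* Minimal sketch: single-threaded workers, multiple processes. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

static void worker_loop(int lfd)
{
    for (;;) {
        int cfd = accept(lfd, NULL, NULL);   /* each worker blocks here on its own */
        if (cfd < 0)
            continue;
        const char msg[] = "hello\n";
        write(cfd, msg, sizeof(msg) - 1);
        close(cfd);
    }
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);             /* arbitrary example port */
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);

    for (int i = 0; i < 4; i++) {            /* roughly one worker per core */
        if (fork() == 0) {                   /* child becomes a worker */
            worker_loop(lfd);
            _exit(0);
        }
    }
    for (;;)
        wait(NULL);                          /* parent only reaps workers */
}
```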

Asynchronous I/O and polling technology

When a non-blocking I/O call is made, the application needs to poll repeatedly in order to be sure the data has been read completely before taking the next step. The disadvantage of polling is that the application has to make these calls actively, which consumes extra CPU time slices and lowers performance. Existing polling techniques include: read, select, poll, epoll, pselect, and kqueue.

read is one of the lowest-performing of these: it checks the I/O status by calling read repeatedly until the data has finally been read in full.
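A minimal sketch of that lowest-performing approach, assuming `fd` has already been set to non-blocking mode:

```c
/* Minimal sketch: polling by calling read() over and over. */
#include <errno.h>
#include <unistd.h>

ssize_t read_by_polling(int fd, char *buf, size_t len)
{
    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0)
            return n;                                /* data read (or EOF) */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;                               /* a real error */
        /* No data yet: spin and ask again, burning CPU time slices. */
    }
}
```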

select is an improved scheme: it makes the judgment based on the event status of file descriptors.
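A hedged sketch of that improvement: select() lets the kernel put the caller to sleep until the descriptor is readable, instead of the program spinning on read():

```c
/* Minimal sketch: waiting for readability with select() before reading. */
#include <sys/select.h>
#include <unistd.h>

ssize_t read_with_select(int fd, char *buf, size_t len)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);

    /* Sleep until fd becomes readable (no timeout, so wait indefinitely). */
    if (select(fd + 1, &readfds, NULL, NULL, NULL) < 0)
        return -1;

    return read(fd, buf, len);   /* fd was reported ready, so data (or EOF/error)
                                    should be waiting for this read */
}
```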

The operating system also provides multiplexing techniques such as poll and epoll to improve performance further.

Polling does satisfy asynchronous I/O's requirement that the complete data be obtained, but for the application it still counts as synchronous, because the application must actively check the I/O status and still spends a great deal of CPU time waiting. The read-based approach above repeatedly calls read to poll until it finally succeeds, so the user program consumes extra CPU and performs poorly. The operating system's select call replaces this repeated read polling for status checks: internally, select determines whether the data has been fully read by examining the event status on the file descriptors. But for the application this is still synchronous, because the application still has to actively determine the I/O status and still spends a lot of CPU time waiting; select is itself just a form of polling.
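Since the article is tagged epoll and promises an epoll example later, here is a hedged minimal sketch of the Linux epoll pattern (setup of the non-blocking listening socket `lfd` is omitted for brevity): descriptors are registered once with epoll_ctl(), and the event loop then only touches descriptors that epoll_wait() reports as ready.

```c
/* Minimal sketch: an epoll-based event loop (Linux only). */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

void event_loop(int lfd)                      /* lfd: non-blocking listening socket */
{
    int epfd = epoll_create1(0);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, lfd, &ev); /* register the listener once */

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);  /* sleep until events */
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == lfd) {                               /* new connection */
                int cfd = accept(lfd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = cfd };
                epoll_ctl(epfd, EPOLL_CTL_ADD, cfd, &cev);
            } else {
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof(buf));    /* reported ready */
                if (r <= 0) {                              /* error or peer closed */
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                }
                /* else: process r bytes of data in buf */
            }
        }
    }
}
```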

An ideal asynchronous I/O model

The ideal asynchronous I/O would let an application initiate an asynchronous call and, without any polling, move straight on to its next task; after the I/O completes, the data would simply be passed to the application via a signal or a callback.
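One way to approximate this ideal on POSIX systems is the aio_* family, where completion is delivered to a notification callback instead of being polled for. The sketch below only illustrates the shape of that model (glibc actually services POSIX AIO with helper threads, so it approximates rather than perfectly realizes the ideal); the file path is arbitrary, and older glibc needs -lrt when linking:

```c
/* Minimal sketch: "submit, keep working, get a callback on completion"
 * using POSIX AIO with SIGEV_THREAD notification. */
#include <aio.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static char buf[4096];

static void on_read_done(union sigval sv)
{
    struct aiocb *cb = sv.sival_ptr;
    ssize_t n = aio_return(cb);                 /* result, now that it is done */
    printf("async read completed: %zd bytes\n", n);
}

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);   /* arbitrary readable file */
    if (fd < 0) { perror("open"); return 1; }

    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_sigevent.sigev_notify = SIGEV_THREAD;          /* notify via callback */
    cb.aio_sigevent.sigev_notify_function = on_read_done;
    cb.aio_sigevent.sigev_value.sival_ptr = &cb;

    aio_read(&cb);                              /* submit and return immediately */

    /* ...the program is free to do other work here... */
    sleep(1);                                   /* crude wait so the demo exits cleanly */
    close(fd);
    return 0;
}
```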

That is all I will tidy up for now; I feel I have forgotten a lot of what I have read. Later I will write up a detailed example of using epoll; that example has been tested to handle 20,000 concurrent connections. Alas, I am not in good form today and this is not well written. I intended to add some material of my own, but ended up mostly referring to others' work. If there are errors, please correct me, thank you.

Reference: http://blog.csdn.net/feitianxuxue

http://www.cnblogs.com/linjiqin/p/4472367.html
