[Reprinted] The best explanation of the epoll model I have read

Source: Internet
Author: User
Tags epoll

Reprinted from: http://blog.csdn.net/mango_song/article/details/42643971

First, let's define the concept of a stream: a stream is a kernel object on which I/O operations can be performed, such as a file, a socket, or a pipe. Whether it is a file, a socket, or a pipe, we can treat them all as streams.

Next, I/O: with read we can read data from a stream, and with write we can write data to it. Now suppose we need to read from a stream, but the stream has no data yet (a typical example: a client wants to read from a socket, but the server has not sent anything back).

Blocking: what does blocking mean? Suppose you are waiting for a courier, but you do not know when the courier will arrive, and you have nothing else to do (or rather, everything you want to do next depends on the delivery). You can simply go to sleep, because you know the courier will call you when the package arrives (assuming the call is enough to wake you up).

Non-blocking busy polling: continuing the courier example, with busy polling you would need the courier's phone number and call him every minute: "Are you there yet?"

Obviously, almost nobody uses the second approach: it is mindless, it wastes your phone bill, and it takes up a great deal of the courier's time. Most programs avoid it for the same reason: the first approach is economical and simple. "Economical" means it consumes very little CPU time; when a thread sleeps, it drops out of the scheduler's run queue and temporarily gives up its share of precious CPU time slices.

To understand how blocking works, we first need to talk about buffers, then kernel buffers, and finally I/O events. Buffers were introduced to reduce frequent system calls and frequent I/O operations (which, as you know, are slow); when you operate on a stream, data is moved in buffer-sized chunks rather than byte by byte from user space.
The kernel also maintains buffers. Suppose there is a pipe: process A is the writing end and process B is the reading end. Assume the kernel buffer starts out empty, so B blocks as the reader. Then A writes into the pipe; the kernel buffer goes from empty to non-empty, and the kernel generates an event that wakes B up. Call this event "buffer non-empty".

Suppose that after the "buffer non-empty" event notifies B, B still has not read the data. The kernel has promised not to discard data written into the pipe, so A's data piles up in the kernel buffer. If A keeps writing and B still does not start reading, the kernel buffer eventually fills up. At that point the kernel produces another I/O event that tells process A: you must wait (block). Call this event "buffer full".

Suppose B finally starts reading, so the kernel buffer is no longer full. The kernel tells A: there is room in the buffer again, you can wake up from your sleep and continue writing. Call this event "buffer not full".

Perhaps the "buffer not full" event has already notified A, but A has no data to write, while B keeps reading until the kernel buffer is empty. Now the kernel tells B: you need to block! Call this event "buffer empty".

These four scenarios cover the four I/O events: buffer full, buffer empty, buffer non-empty, and buffer not full (note that these are kernel buffers, and the four names are ones I made up purely to explain the principle). These four I/O events are the foundation of blocking synchronization. (If you do not know what "synchronization" means, read up on locks, semaphores, condition variables, and other task-synchronization mechanisms in operating systems.)

Now let's talk about the drawback of blocking I/O: in blocking mode, one thread can only handle the I/O events of one stream.
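The pipe scenario above can be sketched in a few lines. This is a minimal illustration, not part of the original article: the function name `blocking_pipe_demo` is my own, and Python's `os.pipe`/`os.read` are used as thin wrappers over the same kernel pipe and read(2) call the article describes.

```python
import os
import threading

def blocking_pipe_demo():
    """The reader blocks until the writer makes the kernel buffer non-empty."""
    r, w = os.pipe()
    received = []

    def reader():
        # os.read() blocks here while the kernel pipe buffer is empty;
        # the kernel's "buffer non-empty" event is what wakes this thread.
        received.append(os.read(r, 64))

    t = threading.Thread(target=reader)
    t.start()                 # reader starts and blocks: the buffer is empty
    os.write(w, b"hello")     # buffer goes empty -> non-empty, reader wakes
    t.join()
    os.close(r)
    os.close(w)
    return received[0]
```

While the reader sleeps inside `os.read()`, it consumes no CPU time at all; the kernel parks it until the write arrives.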
If you want to handle multiple streams at the same time, you must use either multiple processes (fork) or multiple threads (pthread_create); unfortunately, neither of these is very efficient. So consider non-blocking, busy-polling I/O instead; with it, one thread can handle multiple streams (how to switch a stream from blocking mode to non-blocking mode is not covered here):
while true {
    for i in stream[] {
        if i has data
            read until unavailable
    }
}
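One sweep of that loop can be made concrete as follows. This is a hedged sketch of my own, not the article's code: `busy_poll_once` is a hypothetical name, and `os.set_blocking` / `BlockingIOError` are Python's way of expressing O_NONBLOCK and the EAGAIN error the article mentions later.

```python
import os

def busy_poll_once(read_fds):
    """One indiscriminate sweep: try to read from every stream right now."""
    ready = {}
    for fd in read_fds:
        try:
            data = os.read(fd, 4096)   # non-blocking: raises instead of sleeping
        except BlockingIOError:        # kernel buffer empty -> errno EAGAIN
            continue
        ready[fd] = data
    return ready

if __name__ == "__main__":
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    for fd in (r1, r2):
        os.set_blocking(fd, False)     # switch the streams to non-blocking mode
    os.write(w1, b"ping")              # only the first pipe has data

    # The busy-polling program calls this sweep over and over in a while-true
    # loop, burning CPU even when no stream has any data.
    print(busy_poll_once([r1, r2]))
```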
All we do here is sweep over all the streams from beginning to end, then start over again. This lets one thread handle multiple streams, but it is clearly not a good idea: if none of the streams have data, the loop just wastes CPU. To add a point: in blocking mode, the kernel handles I/O events by blocking and waking threads; in non-blocking mode, the I/O events are handed over to some other mechanism (the select and epoll described below), or even ignored outright. To avoid spinning the CPU, we can introduce an agent (at first there was an agent called select, and later another called poll, but the two are essentially the same). This agent is quite capable: it can watch the I/O events of many streams at once. When all streams are idle, it blocks the current thread; as soon as one or more streams have an I/O event, it wakes up from the blocked state, and our program then polls all the streams (so we can drop the word "busy" from "busy polling"). The code looks like this:
while true {
    select(streams[])
    for i in streams[] {
        if i has data
            read until unavailable
    }
}
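Here is the same pattern as runnable code, a sketch of my own using Python's `select.select`, which wraps the select(2) call the article is describing. The helper name `wait_then_sweep` is hypothetical.

```python
import os
import select

def wait_then_sweep(read_fds, timeout=1.0):
    """Block in select() until some stream has data, then poll all of them."""
    # select blocks the thread until at least one fd is readable (or timeout
    # expires); it only reports *that* something is ready, so the caller still
    # has to check every stream it cares about.
    readable, _, _ = select.select(read_fds, [], [], timeout)
    ready = {}
    for fd in read_fds:            # the O(n) undifferentiated sweep
        if fd in readable:
            ready[fd] = os.read(fd, 4096)
    return ready

if __name__ == "__main__":
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    os.write(w2, b"pong")          # only the second pipe has data
    print(wait_then_sweep([r1, r2]))
```

Note that unlike busy polling, the thread sleeps inside `select.select` while nothing is ready, so no CPU is wasted on idle streams.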
Thus, if there is no I/O event, our program blocks inside select. But there is still a problem: select only tells us that some I/O event occurred, not which stream it occurred on (it may be one, several, or even all of them). So we can only poll every stream indiscriminately, find the streams that are ready to read or write, and operate on them. With select we have O(n) undifferentiated polling complexity: the more streams we handle, the longer each round of polling takes.

After all this, we can finally explain epoll. epoll can be understood as "event poll". Unlike busy polling and undifferentiated polling, epoll tells us which I/O event happened on which stream, so our operations on those streams become targeted and meaningful (the complexity drops to O(1)). Before discussing epoll's implementation details, here are the epoll-related operations:
epoll_create: create an epoll object; typically epollfd = epoll_create().

epoll_ctl (EPOLL_CTL_ADD / EPOLL_CTL_DEL): add to, or delete from, the epoll object an event of some stream, for example:

    epoll_ctl(epollfd, EPOLL_CTL_ADD, socket, EPOLLIN);  // register the "buffer non-empty" event, i.e. data has arrived and can be read
    epoll_ctl(epollfd, EPOLL_CTL_DEL, socket, EPOLLOUT); // unregister the "buffer not full" event, i.e. the stream can be written to

epoll_wait(epollfd, ...): wait until any registered event occurs.

(Note: when a non-blocking stream hits "buffer full" on write or "buffer empty" on read, write/read returns -1 and sets errno = EAGAIN. epoll only cares about the "buffer not full" and "buffer non-empty" events.)
The code for an epoll-based event loop might look like this:
while true {
    active_stream[] = epoll_wait(epollfd)
    for i in active_stream[] {
        read or write till unavailable
    }
}
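The loop above can be made concrete with Python's `select.epoll`, which (on Linux) wraps the very epoll_create/epoll_ctl/epoll_wait calls listed earlier. The demo function name and the pipe setup are my own illustration, not the article's code.

```python
import os
import select

def epoll_demo():
    """epoll reports exactly which streams are active, so there is no sweep."""
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    ep = select.epoll()                # wraps epoll_create()
    ep.register(r1, select.EPOLLIN)    # epoll_ctl(ADD): watch "buffer non-empty"
    ep.register(r2, select.EPOLLIN)
    os.write(w1, b"ping")              # only the first pipe becomes active

    results = []
    # ep.poll() (epoll_wait) returns only the (fd, events) pairs that actually
    # fired, so we touch just the active streams instead of polling them all.
    for fd, events in ep.poll(timeout=1.0):
        if events & select.EPOLLIN:
            results.append(os.read(fd, 4096))

    ep.close()
    for fd in (r1, w1, r2, w2):
        os.close(fd)
    return results

if __name__ == "__main__":
    print(epoll_demo())   # only the stream that received data shows up
```

Compare this with the select version: there, the kernel told us "something is ready" and we searched for it; here, the kernel hands us the ready list directly.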
Limited by space, I have only said this much: enough to reveal the underlying principles. For the details of using epoll, consult the man pages and Google; for the implementation details, see the Linux kernel source.

