IO event loop and IO event parsing in libev


1. Basic data structure of IO events: ev_io

The struct ev_io is an IO watcher. Every event type in libev has its own struct, such as ev_timer and ev_io.

The base class ev_watcher is defined as follows:

typedef struct ev_watcher
{
    int active;
    int pending;
    int priority;
    void *data;
    void (*cb)(struct ev_loop *loop, struct ev_watcher *w, int revents);
} ev_watcher;

In the base class, active indicates whether the watcher has been started, pending indicates whether it is in the pending state, priority is its priority, and cb is the callback function invoked when the watcher's trigger action fires.

There is also a linked-list version of the base watcher, used to chain watchers together:

typedef struct ev_watcher_list
{
    int active;
    int pending;
    int priority;
    void *data;
    void (*cb)(struct ev_loop *loop, struct ev_watcher_list *w, int revents);
    struct ev_watcher_list *next;
} ev_watcher_list;

ev_io is the basic structure for watching an IO event. It is defined as:

typedef struct ev_io
{
    int active;
    int pending;
    int priority;
    void *data;
    void (*cb)(struct ev_loop *loop, struct ev_io *w, int revents);
    struct ev_watcher_list *next;
    int fd;     /* private members of the derived class: the file descriptor being watched... */
    int events; /* ...and the watched events (readable and/or writable) */
} ev_io;

In the source code, ev_io is defined in ev.h; the original definition nests the base class through macros, and it is written out in full here for readability. Note that the private members of the derived class are placed after the common part. Because of this layout, a pointer to a struct ev_io object can be forcibly cast to a pointer to the base class ev_watcher, and p->active through that base pointer p accesses the same active member as in the derived class.
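As a minimal standalone sketch of why this cast works (illustrative only; these are trimmed-down structs, not the real libev definitions, but they share the same leading members just as libev's do):

#include <stdio.h>

/* trimmed-down structs sharing the same leading members */
struct base    { int active; int pending; };
struct derived { int active; int pending; int fd; int events; };

int main (void)
{
    struct derived io = { 1, 0, 5, 3 };
    struct base *p = (struct base *)&io; /* treat the derived object as its base */

    printf ("%d\n", p->active);          /* prints 1: the same memory as io.active */
    return 0;
}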

2. Initialization and setting of IO events

Initialization and setup are simple, as follows:

#define ev_io_init(ev,cb,fd,events) \
    do { ev_init ((ev), (cb)); ev_io_set ((ev), (fd), (events)); } while (0)

#define ev_io_set(ev,fd_,events_) \
    do { (ev)->fd = (fd_); (ev)->events = (events_) | EV__IOFDSET; } while (0)

To initialize an IO watcher, you only need to call the ev_io_init() macro: ev is the ev_io pointer, cb is the callback invoked when the event fires, fd is the file descriptor to watch, and events is the set of events to watch for.
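A usage sketch (essentially the canonical example from the libev documentation, watching stdin for readability; compile with -lev):

#include <ev.h>
#include <stdio.h>

/* invoked whenever stdin becomes readable */
static void
stdin_cb (struct ev_loop *loop, ev_io *w, int revents)
{
    puts ("stdin is readable");
    ev_io_stop (loop, w);         /* stop watching stdin */
    ev_break (loop, EVBREAK_ALL); /* and leave the event loop */
}

int
main (void)
{
    struct ev_loop *loop = EV_DEFAULT;
    ev_io stdin_watcher;

    ev_io_init (&stdin_watcher, stdin_cb, /* fd = */ 0, EV_READ);
    ev_io_start (loop, &stdin_watcher);

    ev_run (loop, 0);
    return 0;
}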

3. Registration of IO events

First, look at struct ANFD. An ANFD holds the basic information for watching one file descriptor fd in an event loop. It is defined as follows:

typedef struct
{
    WL head;              /* linked list of watchers (ev_watcher_list) mounted on this fd */
    unsigned char events; /* the events being watched */
    unsigned char reify;  /* flag marking this ANFD for re-instantiation (EV_ANFD_REIFY, EV__IOFDSET) */
    unsigned char emask;  /* the epoll backend stores the actual kernel mask in here */
    unsigned char unused;
    unsigned int egen;    /* generation counter to counter epoll bugs */
} ANFD;                   /* emask/egen are for the epoll backend; the Windows IOCP backend adds members of its own */

The first member is head, the linked list of base-class watchers. An ANFD describes the watching of one file descriptor, and every watcher interested in that descriptor (readable, writable) is chained on this list; since the watch conditions on a file are essentially just readable and writable, the list rarely grows beyond three entries. anfds is a dynamic array of ANFD indexed directly by the file descriptor fd, so given an fd all of its watchers can be found immediately. Taken together, the anfds array describes all of the loop's IO watching, and the actual waiting is finally done with epoll_wait().
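Mounting a watcher on a descriptor is plain head insertion into this list. The helper is wlist_add(), which is essentially the following (WL is the ev_watcher_list pointer type):

inline_size void
wlist_add (WL *head, WL elem)
{
    elem->next = *head; /* the new watcher points at the old head */
    *head = elem;       /* and becomes the new head */
}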

Every time a new IO watcher on fd is added, wlist_add() is called to insert it at the head of the anfds[fd] list. But when the watch condition of an anfds element changes, how is that change recorded? Since the anfds subscript is the fd itself, libev keeps another dynamic array, fdchanges, whose elements are the fds of newly added watchers or of watchers whose watch condition was modified. The function that records these fds is fd_change():

inline_size void
fd_change (EV_P_ int fd, int flags)
{
    unsigned char reify = anfds [fd].reify;
    anfds [fd].reify |= flags;            /* mark the fd's watch condition as modified */

    if (expect_true (!reify))             /* reify was clear: this fd is not yet queued */
    {
        ++fdchangecnt;                    /* bump the counter and grow the array if needed */
        array_needsize (int, fdchanges, fdchangemax, fdchangecnt, EMPTY2);
        fdchanges [fdchangecnt - 1] = fd; /* append fd to the fdchanges array */
    }
    /* otherwise fd is already in fdchanges from an earlier change */
}

All fds whose watch conditions changed are collected in the fdchanges array. When ev_run() executes, fd_reify() is called to traverse fdchanges, and for each fd whose effective watch condition really changed it calls epoll_ctl() to update the kernel's watch state. In other words, fdchanges records the file descriptors whose watcher set in anfds may have been modified, and the system-level watch conditions are then updated by calling epoll_ctl() (or the equivalent call of another backend) as appropriate.

Note the optimization: if a read watcher is already registered on an fd and we then add another read watcher with a different callback, the combined event set of the fd has not changed, so no system call such as epoll_ctl() should be issued (reducing system-call overhead). To make this possible, anfds[fd] carries an events field that is the OR of the events of all watchers on that fd, and descriptors are re-submitted to the kernel's epoll only before the next iteration starts polling. When fd_reify() scans fdchanges and reaches the corresponding ANFD, it recomputes the OR over all current watchers; only if the result differs from the previous events value does it call epoll_ctl() to make the change. Otherwise nothing is done. The principle: check thoroughly before entering the kernel, so that no unnecessary system call is ever made. fd_reify() is defined as follows:

inline_size void
fd_reify (EV_P)
{
    int i;

    for (i = 0; i < fdchangecnt; ++i)
    {
        int fd = fdchanges [i];  /* an fd whose watch condition may have changed */
        ANFD *anfd = anfds + fd; /* its slot in anfds */
        ev_io *w;                /* pointer to the first ev_io on the list */

        unsigned char o_events = anfd->events;
        unsigned char o_reify  = anfd->reify;

        anfd->reify = 0;

        /* if (expect_true (o_reify & EV_ANFD_REIFY)) probably a deoptimisation */
        {
            anfd->events = 0;

            /* walk the watcher list (note the forced cast through WL) and OR together
               the new event set for this fd, storing it in the events member */
            for (w = (ev_io *)anfd->head; w; w = (ev_io *)((WL)w)->next)
                anfd->events |= (unsigned char)w->events;

            if (o_events != anfd->events)
                o_reify = EV__IOFDSET; /* actually |= */ /* the effective watch condition changed */
        }

        if (o_reify & EV__IOFDSET)
            /* the watch condition changed: backend_modify, i.e. epoll_ctl(), updates it */
            backend_modify (EV_A_ fd, o_events, anfd->events);
    }

    fdchangecnt = 0; /* the fdchanges array has been consumed */
}

So, to sum up, registration works like this: from the IO watcher (an instance of ev_io) we take the monitored file descriptor fd, find the corresponding structure anfds[fd] in anfds, and mount the watcher onto that structure's head list with wlist_add(). Because the watch condition of fd has now changed, fd is also recorded in the fdchanges array, and a later step calls the system interface to update the fd's watch condition.
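A condensed sketch of ev_io_start() ties these steps together (based on libev's ev.c, with the asserts and bookkeeping macros omitted; treat it as an outline, not the verbatim source):

void
ev_io_start (EV_P_ ev_io *w)
{
    int fd = w->fd;

    if (ev_is_active (w)) /* already started: nothing to do */
        return;

    ev_start (EV_A_ (W)w, 1);                                       /* mark the watcher active */
    array_needsize (ANFD, anfds, anfdmax, fd + 1, array_init_zero); /* grow anfds to cover fd */
    wlist_add (&anfds [fd].head, (WL)w);                            /* mount the watcher on anfds[fd] */

    fd_change (EV_A_ fd, w->events & EV__IOFDSET | EV_ANFD_REIFY);  /* record fd in fdchanges */
    w->events &= ~EV__IOFDSET;
}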

4. Triggering of IO events

The actual waiting is done by the backend; taking epoll as the example:

eventcnt = epoll_wait (backend_fd, epoll_events, epoll_eventmax, timeout * 1e3);

On success, the number of ready events is returned, and fd_event() is executed for each ready descriptor.
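Each kernel event first has to be translated from epoll's EPOLLIN/EPOLLOUT bits back into libev's EV_READ/EV_WRITE before being handed to fd_event(). A simplified sketch of that dispatch loop inside the epoll backend (a fragment: eventcnt and epoll_events come from the epoll_wait() call above; the generation-counter check against stale fds and the error handling are omitted here):

int i;

for (i = 0; i < eventcnt; ++i)
{
    struct epoll_event *ev = epoll_events + i;

    int fd  = (uint32_t)ev->data.u64; /* libev packs the fd into the low bits of data.u64 */
    int got = (ev->events & (EPOLLOUT | EPOLLERR | EPOLLHUP) ? EV_WRITE : 0)
            | (ev->events & (EPOLLIN  | EPOLLERR | EPOLLHUP) ? EV_READ  : 0);

    fd_event (EV_A_ fd, got); /* hand the translated event set to fd_event() */
}

fd_event() itself is defined as follows: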

inline_speed void
fd_event (EV_P_ int fd, int revents)
{
    /* do not submit kernel events for fds that have reify set */
    /* because that means they changed while we were polling for new events */
    ANFD *anfd = anfds + fd;

    if (expect_true (!anfd->reify)) /* a non-zero reify means new events were just added to this fd */
        fd_event_nocheck (EV_A_ fd, revents);
}

fd_event_nocheck is as follows:

inline_speed void
fd_event_nocheck (EV_P_ int fd, int revents)
{
    ANFD *anfd = anfds + fd;
    ev_io *w;

    /* check each watcher on fd in turn */
    for (w = (ev_io *)anfd->head; w; w = (ev_io *)((WL)w)->next)
    {
        int ev = w->events & revents; /* events this watcher asked for that actually fired */

        if (ev) /* the pending condition is met: queue the watcher on the pendings array */
            ev_feed_event (EV_A_ (W)w, ev);
    }
}

ev_feed_event places the watcher into the pendings array for its priority:

void noinline
ev_feed_event (EV_P_ void *w, int revents) EV_THROW
{
    W w_ = (W)w;
    int pri = ABSPRI (w_);

    if (expect_false (w_->pending))
        pendings [pri][w_->pending - 1].events |= revents;
    else
    {
        w_->pending = ++pendingcnt [pri];
        array_needsize (ANPENDING, pendings [pri], pendingmax [pri], w_->pending, EMPTY2);
        pendings [pri][w_->pending - 1].w      = w_;
        pendings [pri][w_->pending - 1].events = revents;
    }

    pendingpri = NUMPRI - 1;
}

Taking epoll as an example: when epoll_wait() reports an event on an fd, we can locate that fd's watcher list directly (its length generally does not exceed 3). The returned event set is ANDed in turn with the events registered by each watcher on the list; whenever the result is non-zero, the corresponding watcher is appended to the pendings queue of watchers awaiting processing. (When watcher priorities are enabled, pendings is a two-dimensional array; here only the normal single-priority case is considered.)

This brings in a new data structure: a watcher in pending state is one whose watch condition has been met but whose trigger action has not yet been executed.

typedef struct
{
    W w;        /* base-class pointer to the pending watcher */
    int events; /* the pending event set for the given watcher */
} ANPENDING;

Here W is the base-class pointer introduced earlier. pendings is a two-dimensional array of ANPENDING. Its first-level subscript is the watcher priority (libev supports watcher priorities, represented by this one-dimensional index); the second-level subscript counts the pending watchers at that priority (for example, if read and write watchers on an fd both become pending, they occupy subscripts 0 and 1), and the pending field of the corresponding watcher stores this second-level subscript plus one. The array is declared as ANPENDING *pendings [NUMPRI]; like anfds, each second-dimension array of ANPENDING is dynamically resized. This whole sequence can be seen as the follow-up to fd_event: the purpose of the xxx_reify steps is to move pending watchers into the pendings two-dimensional array, and the same holds for the other watcher types, which will be expanded when those types are analyzed.

Finally, the loop executes the macro EV_INVOKE_PENDING. This simply calls loop->invoke_cb, which, unless it has been customized (it usually is not), is ev_invoke_pending. That function traverses the two-dimensional pendings array in order and executes the trigger-action callback of every pending watcher.
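For reference, a simplified form of that traversal (close to what older libev releases do; current versions add pendingpri bookkeeping on top):

void
ev_invoke_pending (EV_P)
{
    int pri;

    for (pri = NUMPRI; pri--; )             /* highest priority first */
        while (pendingcnt [pri])
        {
            ANPENDING *p = pendings [pri] + --pendingcnt [pri];

            p->w->pending = 0;              /* the watcher is no longer pending */
            EV_CB_INVOKE (p->w, p->events); /* run its registered callback */
        }
}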

So far, the I/O triggering process is complete.

5. Summary

In libev, the watcher is the most critical data structure, and the whole logic revolves around watchers. Internally, libev maintains a base class ev_watcher and derived classes ev_xxx for the concrete watcher types. In use, a concrete watcher instance is created first and its trigger condition is set through the private members of the derived object. These watchers are then managed through anfds (or through the min-heap, for timers). Next, libev computes the pending watchers using the backend poll and the time-heap management, and appends them to a two-dimensional array whose first dimension is the priority. Finally, at the appropriate time, the trigger-action callbacks registered on the pending watchers are invoked in priority order, which is how the "only-for-ordering" priority model is realized.

This post is mainly a learning record, and it certainly contains many mistakes. I read a lot of blog posts while studying IO events; they were very helpful, I learned a great deal from their expert authors, and I have also borrowed many pictures and examples from them. If you find any problems, please let me know.

http://my.oschina.net/u/917596/blog/177030

https://cnodejs.org/topic/4f16442ccae1f4aa270010a3

