Please indicate the source and the author's contact information when reprinting.
Article Source: http://blog.csdn.net/jack0106
Author contact: Feng Yu fengjian0106@yahoo.com.cn
I have been developing Linux programs for some time and have used several UI libraries, including GTK, Qt, and Clutter. The most mysterious part of all of them is the so-called "main event loop": in Qt it is QApplication, in GTK it is gtk_main(), and in Clutter it is clutter_main(). These event-loop objects are encapsulated very tightly, so the code that uses them is very simple. When writing an application we usually only need to override a widget's event-handling functions (or handle the signals corresponding to the events); how the events are generated and delivered remains a mystery.
Recently I have had plenty of free time, so I studied the event loop carefully, using GMainLoop in GLib as the reference code; gtk_main() and clutter_main() are both built on top of GMainLoop. The event-loop concept is not only used in UI programming but also in network programming, which makes the event loop a basic programming model. Unfortunately, this concept never appears in university textbooks, and it is not needed when programming a bare microcontroller; an event loop only shows up in an operating-system environment.
The code of an event loop is built on the concept of I/O multiplexing. There are currently three common API interfaces: select, poll, and epoll. GLib is a cross-platform library: on Linux it uses the poll function, and on Windows it uses select. The epoll interface was officially introduced in Linux 2.6; it is more efficient than the other two and is widely used in network programming. Essentially, the three functions do the same thing.
If you are not familiar with I/O multiplexing, please learn about it first (Google is your friend). The following sample code shows the basic model of using the poll interface.
...
struct pollfd fds[2];
int timeout_msecs = 500;
int ret;
int i;

/* Open STREAMS devices. */
fds[0].fd = open("/dev/dev0", ...);
fds[1].fd = open("/dev/dev1", ...);
fds[0].events = POLLOUT | POLLWRBAND;
fds[1].events = POLLOUT | POLLWRBAND;

while (1) {
    ret = poll(fds, 2, timeout_msecs);
    if (ret > 0) {
        /* An event on one of the fds has occurred. */
        for (i = 0; i < 2; i++) {
            if (fds[i].revents & POLLWRBAND) {
                /* Priority data may be written on device number i. */
                ...
            }
            if (fds[i].revents & POLLOUT) {
                /* Data may be written on device number i. */
                ...
            }
            if (fds[i].revents & POLLHUP) {
                /* A hangup has occurred on device number i. */
                ...
            }
        }
    }
    ...
The code above can be split into three parts:
1. Prepare the set of files to be watched (not simply a set of file descriptors, but a set of struct pollfd structures, each holding a file descriptor plus the events to be monitored, such as readable, writable, or others).
2. Call poll and wait either for an event to occur (a file descriptor becomes readable, writable, or ready for some other operation) or for the call to time out.
3. Traverse the file set (the set of struct pollfd structures), determine which files have "events" and which events they are, and then perform the corresponding operation as needed (in the code above, the corresponding operations are represented by "...").
The code for parts 2 and 3 sits inside a while loop. The "corresponding operation" in part 3 can also be an "exit" operation that breaks out of the while loop, so the whole process has a chance to terminate normally.
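To make that exit path concrete, here is a small self-contained sketch of my own (not part of the original poll example; it watches standard input instead of the device files above). The "corresponding operation" simply clears a running flag, and clearing it is what ends the while loop.

#include <poll.h>
#include <unistd.h>

int main(void)
{
    struct pollfd fds[1];
    int running = 1;

    /* Part 1: prepare the set of files to watch. */
    fds[0].fd = STDIN_FILENO;
    fds[0].events = POLLIN;

    while (running) {
        int ret = poll(fds, 1, 500);            /* part 2: wait, 500 ms timeout */
        if (ret > 0 && (fds[0].revents & POLLIN)) {
            char buf[128];
            ssize_t n = read(fds[0].fd, buf, sizeof buf);
            if (n <= 0 || buf[0] == 'q')        /* part 3: the operation may decide to quit */
                running = 0;
        }
    }
    return 0;
}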
Let me remind you again: please make sure you understand the code above first; hands-on experience will make the rest much easier to follow.
Now for the key point. The code above is just a demo, so it is very simple. Seen from another angle, though, this snippet is rather rigid, and newcomers, especially readers with no I/O multiplexing experience, are easily "boxed in" by this code model. Can it become more flexible? How? Before explaining that in detail, let me raise a few small questions.
1. The code above opens only two files and hands them to poll. What if we want to dynamically add or remove files monitored by poll while the program is running?
2. The timeout value in the code above is fixed. Suppose that at some point 100 files need to be monitored, and each of those 100 files wants a different timeout. What do we do?
3. In the code above, when poll returns we traverse the file set, check each entry, and execute the "corresponding operation". Suppose 100 files are monitored and all 100 satisfy their condition when poll returns. Suppose further that the "corresponding operations" of 50 of them are time-consuming but not urgent (they could be handled later, for example when the next round of poll returns), while the operations of the other 50 must run immediately because new events will arrive soon (satisfying the condition again at the next poll). What do we do?
For question 1, it is clear that all the files (struct pollfd) need to be managed in one place, with functions for adding and deleting them. From an object-oriented point of view this is a class; call it Class A.
For question 2, we need finer control over each monitored file (struct pollfd). We can wrap each monitored file in another class that manages it: the object contains the struct pollfd structure and also provides the timeout that this particular file wants. Call it Class B.
For question 3, we can give each monitored file a priority and then execute the more "urgent" corresponding operations first, according to priority. This priority information can also live in Class B. Once Class B is designed, Class A no longer manages files directly; instead it manages Class B objects, so Class A can be viewed as a container of Class B (see the sketch below).
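To make the two roles concrete, here is a rough C sketch of what Class B (one watched file with its own timeout and priority) and Class A (the container managing a set of Class B objects) might look like. The names and fields are invented for illustration only; GLib's real types, introduced below, are organized differently.

#include <poll.h>

/* Hypothetical sketch only -- the names and fields are invented for
 * illustration and do not match GLib's real data structures. */
typedef struct _Watch {              /* "Class B": one monitored file            */
    struct pollfd  pfd;              /* fd + events to monitor + returned events */
    int            timeout_ms;       /* the timeout this particular file wants   */
    int            priority;         /* smaller value = more urgent              */
    void         (*on_event)(struct _Watch *self);  /* the "corresponding operation" */
    struct _Watch *next;
} Watch;

typedef struct {                     /* "Class A": container of Class B objects  */
    Watch *watches;                  /* linked list of all watches               */
} WatchSet;

/* Adding/removing answers question 1; timeout_ms answers question 2; */
/* priority answers question 3.                                       */
void watch_set_add(WatchSet *set, Watch *w)
{
    w->next = set->watches;
    set->watches = w;
}

void watch_set_remove(WatchSet *set, Watch *w)
{
    for (Watch **p = &set->watches; *p != NULL; p = &(*p)->next) {
        if (*p == w) {
            *p = w->next;
            return;
        }
    }
}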
With these three answers in mind, you can rework the earlier code snippet and reassemble it into something much more flexible. GMainLoop in GLib does exactly this, and beyond what the three answers cover, it even holds a few more "surprises".
:-) One more reminder. The following describes GMainLoop, so it is best to have actually used GMainLoop first, including g_timeout_source_new (guint interval), g_idle_source_new (void) and g_child_watch_source_new (GPid pid). By the way, the best way to learn programming is to read code, and high-quality code at that.
What follows introduces the implementation mechanism of GMainLoop from the point of view of principles rather than a scenario-by-scenario analysis of the code. To read the code in detail you still have to practice honestly; the introduction below is only meant to help you understand the source code better.
The main event loop framework of GLib is implemented by three classes: GMainLoop, GMainContext, and GSource. GMainLoop is only a shell around GMainContext; the important ones are GMainContext and GSource. GMainContext corresponds to Class A above, and GSource corresponds to Class B. In principle, the internal implementation of g_main_loop_run (GMainLoop *loop) is consistent with the while loop in the earlier code snippet. (Note that in a multi-threaded environment the GMainLoop code becomes complicated. To make learning easier you can ignore the thread-related code in GMainLoop; the overall structure is then consistent with the earlier snippet. The explanations and code snippets below all omit the thread-related code, which does not affect the understanding of the event loop.)
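Before looking at the internals, here is a minimal usage example of the public API (a sketch I added for orientation; it is not from the original article). A single timeout source drives a GMainLoop built on the default GMainContext:

#include <glib.h>

/* Dispatched by the main loop each time the timeout source fires. */
static gboolean on_tick(gpointer user_data)
{
    static int count = 0;
    g_print("tick %d\n", ++count);
    if (count == 3) {
        g_main_loop_quit(user_data);   /* makes g_main_loop_run() return */
        return FALSE;                  /* remove this source             */
    }
    return TRUE;                       /* keep this source installed     */
}

int main(void)
{
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);  /* NULL = default GMainContext */
    g_timeout_add(500, on_tick, loop);               /* attach a timeout GSource    */
    g_main_loop_run(loop);                           /* the event loop analyzed below */
    g_main_loop_unref(loop);
    return 0;
}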
1. GSource ---- GSource corresponds to Class B above: it stores the priority information, and it manages the corresponding files (it holds pointers to struct pollfd structures, kept in a linked list). Moreover, the relationship between a GSource and the files it manages is not 1-to-1 but 1-to-N, and N can even be 0 (this is one of the "surprises", explained in more detail later). A GSource must also provide three important functions (from an object-oriented perspective, GSource is an abstract class with three important pure virtual functions that subclasses must implement). The three functions are:
gboolean (*prepare)  (GSource *source, gint *timeout_);
gboolean (*check)    (GSource *source);
gboolean (*dispatch) (GSource *source, GSourceFunc callback, gpointer user_data);
Compare this with the three parts of the earlier code snippet: the prepare function is called in part 1, while the check and dispatch functions are called in part 3. One difference is that the prepare function must also sit inside the while loop rather than outside it (because the files monitored by poll need to be added and removed dynamically).
The prepare function is called before poll runs. Whether the struct pollfd structures in this GSource should be monitored by poll is indicated by the return value of prepare; at the same time, the timeout this GSource wants is returned through the timeout_ parameter.
The check function is called after poll returns. Whether an event occurred on the struct pollfd structures in this GSource is reported by the return value of check (inside check, the returned event information (revents) in the struct pollfd structures can be examined).
The dispatch function is called after poll and check have run, and only when the corresponding check function returned TRUE; it is equivalent to the "corresponding operation". A simplified sketch of a custom GSource follows.
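As an illustration, here is a simplified sketch of my own (not GLib's code) of a custom GSource that watches a single file descriptor and implements the three virtual functions. The names FdSource, fd_prepare, fd_check and fd_dispatch are invented for this example; the GLib calls (g_source_new, g_source_add_poll, g_source_set_callback, g_source_attach) are the real public API.

#include <glib.h>

typedef struct {
    GSource source;      /* must be the first member             */
    GPollFD pfd;         /* the file descriptor handed to poll() */
} FdSource;

/* Called before poll(): no private timeout (-1), and we are not
 * ready without polling, so return FALSE. */
static gboolean fd_prepare(GSource *source, gint *timeout_)
{
    *timeout_ = -1;
    return FALSE;
}

/* Called after poll(): ready if our fd reported one of the watched events. */
static gboolean fd_check(GSource *source)
{
    FdSource *fd_source = (FdSource *) source;
    return (fd_source->pfd.revents & fd_source->pfd.events) != 0;
}

/* Called only when check() returned TRUE: the "corresponding operation". */
static gboolean fd_dispatch(GSource *source, GSourceFunc callback, gpointer user_data)
{
    return callback ? callback(user_data) : FALSE;
}

static GSourceFuncs fd_funcs = { fd_prepare, fd_check, fd_dispatch, NULL };

/* Usage sketch:
 *   FdSource *s = (FdSource *) g_source_new(&fd_funcs, sizeof(FdSource));
 *   s->pfd.fd = some_fd;
 *   s->pfd.events = G_IO_IN;
 *   g_source_add_poll((GSource *) s, &s->pfd);
 *   g_source_set_callback((GSource *) s, on_fd_ready, data, NULL);
 *   g_source_attach((GSource *) s, NULL);    (NULL = default GMainContext)
 */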
2. GMainContext ---- GMainContext is the container of GSources. A GSource can be added to a GMainContext (its struct pollfd structures are thereby also added, indirectly, to the GMainContext), and a GSource can be removed from a GMainContext (its struct pollfd structures are indirectly removed as well). GMainContext can traverse its GSources and therefore has the opportunity to call each GSource's prepare/check/dispatch functions. Based on the return value of each GSource's prepare function, it can decide whether the files managed by that GSource should be monitored by the poll function; of course, it can also sort the GSources by priority. After poll returns, it decides, based on the return value of each GSource's check function, whether to call the corresponding dispatch function. A small usage example follows.
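For example (a sketch I added; all calls shown are real GLib API), a GSource can be attached to a private GMainContext and the context can be driven by hand. Each call to g_main_context_iteration() performs one prepare/query/poll/check/dispatch pass of the kind analyzed below.

#include <glib.h>

static gboolean say_hello(gpointer user_data)
{
    g_print("hello from a source\n");
    return FALSE;                             /* one-shot: remove the source */
}

int main(void)
{
    GMainContext *context = g_main_context_new();
    GSource *source = g_timeout_source_new(100);      /* a 100 ms timeout source */

    g_source_set_priority(source, G_PRIORITY_DEFAULT);
    g_source_set_callback(source, say_hello, NULL, NULL);
    g_source_attach(source, context);                 /* add the GSource to the GMainContext */
    g_source_unref(source);                           /* the context now holds a reference   */

    /* One iteration = one prepare/query/poll/check/dispatch pass. */
    g_main_context_iteration(context, TRUE);          /* TRUE = may block in poll() */

    g_main_context_unref(context);
    return 0;
}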
Below is a key code snippet. The g_main_context_iterate() function corresponds to the body of the loop in the earlier snippet; whether the loop keeps running is controlled by the flag variable loop->is_running.
void
g_main_loop_run (GMainLoop *loop)
{
  GThread *self = g_thread_self ();

  g_return_if_fail (loop != NULL);
  g_return_if_fail (g_atomic_int_get (&loop->ref_count) > 0);

  /* ... thread-related code omitted ... */

  g_atomic_int_inc (&loop->ref_count);
  loop->is_running = TRUE;
  while (loop->is_running)
    g_main_context_iterate (loop->context, TRUE, TRUE, self);

  UNLOCK_CONTEXT (loop->context);

  g_main_loop_unref (loop);
}

static gboolean
g_main_context_iterate (GMainContext *context,
                        gboolean      block,
                        gboolean      dispatch,
                        GThread      *self)
{
  gint max_priority;
  gint timeout;
  gboolean some_ready;
  gint nfds, allocated_nfds;
  GPollFD *fds = NULL;

  UNLOCK_CONTEXT (context);

  /* ... thread-related code omitted ... */

  if (!context->cached_poll_array)
    {
      context->cached_poll_array_size = context->n_poll_records;
      context->cached_poll_array = g_new (GPollFD, context->n_poll_records);
    }

  allocated_nfds = context->cached_poll_array_size;
  fds = context->cached_poll_array;

  UNLOCK_CONTEXT (context);

  g_main_context_prepare (context, &max_priority);

  while ((nfds = g_main_context_query (context, max_priority, &timeout, fds,
                                       allocated_nfds)) > allocated_nfds)
    {
      LOCK_CONTEXT (context);
      g_free (fds);
      context->cached_poll_array_size = allocated_nfds = nfds;
      context->cached_poll_array = fds = g_new (GPollFD, nfds);
      UNLOCK_CONTEXT (context);
    }

  if (!block)
    timeout = 0;

  g_main_context_poll (context, timeout, max_priority, fds, nfds);

  some_ready = g_main_context_check (context, max_priority, fds, nfds);

  if (dispatch)
    g_main_context_dispatch (context);

  LOCK_CONTEXT (context);

  return some_ready;
}
Look closely at g_main_context_iterate(): it divides into three parts, which correspond to the three parts of the earlier code snippet.
1. Part 1: prepare the set of files to be watched.
g_main_context_prepare (context, &max_priority);

while ((nfds = g_main_context_query (context, max_priority, &timeout, fds,
                                     allocated_nfds)) > allocated_nfds)
  {
    LOCK_CONTEXT (context);
    g_free (fds);
    context->cached_poll_array_size = allocated_nfds = nfds;
    context->cached_poll_array = fds = g_new (GPollFD, nfds);
    UNLOCK_CONTEXT (context);
  }
The first step is to call g_main_context_prepare (context, &max_priority). This traverses every GSource, calls each GSource's prepare function, and selects the highest priority as max_priority; the function also computes a minimum timeout value.
Then g_main_context_query is called. This traverses every GSource again and adds the struct pollfd structures of the GSources at the max_priority level to the monitoring set of poll.
This priority mechanism is another "surprise". The intuitive approach would be to put a file into the monitoring set as soon as it needs to be monitored; with the concept of priority, however, we can have "hidden background tasks", of which g_idle_source_new (void) is the most typical example (see the sketch below).
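For example (a sketch I added, using the public convenience API rather than g_idle_source_new directly): an idle callback registered with g_idle_add() runs at G_PRIORITY_DEFAULT_IDLE, so it is dispatched only in iterations where no higher-priority source is ready, which is exactly the "hidden background task" effect.

#include <glib.h>

/* Dispatched only when no higher-priority source is ready. */
static gboolean do_background_work(gpointer user_data)
{
    int *chunks = user_data;
    (*chunks)++;                      /* one small slice of background work */
    return TRUE;                      /* keep the idle source installed     */
}

static gboolean stop(gpointer user_data)
{
    g_main_loop_quit(user_data);
    return FALSE;                     /* remove the timeout source */
}

int main(void)
{
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    int chunks = 0;

    g_idle_add(do_background_work, &chunks);   /* priority G_PRIORITY_DEFAULT_IDLE */
    g_timeout_add(200, stop, loop);            /* priority G_PRIORITY_DEFAULT      */
    g_main_loop_run(loop);

    g_print("background work ran %d times in 200 ms\n", chunks);
    g_main_loop_unref(loop);
    return 0;
}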
2. Part 2: execute poll and wait for events to occur.
if (!block)
  timeout = 0;

g_main_context_poll (context, timeout, max_priority, fds, nfds);
This simply calls g_main_context_poll (context, timeout, max_priority, fds, nfds); g_main_context_poll is just a thin wrapper around the poll function.
3. Part 3: traverse the file set (the set of struct pollfd structures) and perform the corresponding operations.
some_ready = g_main_context_check (context, max_priority, fds, nfds);

if (dispatch)
  g_main_context_dispatch (context);
The general idea could be written as the following pseudo-code (a form that also matches the earlier code snippet):
foreach (gsource in all_gsources) {
    if (gsource->check()) {
        gsource->dispatch();
    }
}
What GLib actually does is traverse all GSources inside g_main_context_check (context, max_priority, fds, nfds), call each GSource's check function, and append the GSources that satisfy the condition (those whose check function returns TRUE) to an internal linked list.
Then g_main_context_dispatch (context) is executed: it traverses the GSources in that prepared internal linked list and calls each one's dispatch function.
OK, the analysis is done. To sum up: first make sure you know how to use the poll function and build up the concept of I/O multiplexing; then read the source code of GMainContext to deepen your understanding.