Spice Working Principle and Code Analysis: The Network Event Processing Model of Spice


Source: http://www.cnblogs.com/D-Tec/archive/2013/03/21/2973339.html


0. Overview

Network event processing is the most critical part of the libspice design; it can be called the skeleton of Spice, supporting the operation of the whole system, and it is one of the entry points for understanding how Spice works (the VDI interface is another entry point for reading the code). Communication between the Spice server and client relies on three frameworks:

1. Polling network events in QEMU's main function, using a non-blocking select model

2. A dedicated thread in libspice, which monitors network events using a non-blocking epoll model

3. Network data transmission driven by the timer mechanism in QEMU

1. Select Model Processing

The most basic network event handling in Spice uses the select model, which means that most network events are caught in QEMU's main function. Look directly at the code:

void main_loop_wait(int nonblocking)
{
    IOHandlerRecord *ioh;
    fd_set rfds, wfds, xfds;
    int ret, nfds;
    struct timeval tv;   /* timeout, set up from the function's argument in the full code */

    nfds = -1;
    FD_ZERO(&rfds);
    FD_ZERO(&wfds);
    FD_ZERO(&xfds);

    /* FD_SET every node in the io_handlers queue */
    QLIST_FOREACH(ioh, &io_handlers, next) {
        if (ioh->deleted)
            continue;
        FD_SET(ioh->fd, &rfds);
        FD_SET(ioh->fd, &wfds);
        if (ioh->fd > nfds)
            nfds = ioh->fd;   /* track the highest descriptor for select() */
    }

    /* select */
    ret = select(nfds + 1, &rfds, &wfds, &xfds, &tv);

    /* call each node's callback to handle its network events */
    if (ret > 0) {
        IOHandlerRecord *pioh;

        QLIST_FOREACH_SAFE(ioh, &io_handlers, next, pioh) {
            if (ioh->fd_read && FD_ISSET(ioh->fd, &rfds)) {
                ioh->fd_read(ioh->opaque);
            }
            if (ioh->fd_write && FD_ISSET(ioh->fd, &wfds)) {
                ioh->fd_write(ioh->opaque);
            }
        }
    }

    qemu_run_all_timers();
}

The above code follows the basic processing steps of the select model: FD_SET, select, process, so it is easy to understand. What is special about this implementation is its support for dynamic management of network connections. The idea is simple: a global list of network connections, io_handlers, is maintained, and before each select the list is traversed to collect the sockets that need to be polled. Each list element also records the read and write handler functions for its socket. The element type is declared as follows:

typedef void IOReadHandler(void *opaque, const uint8_t *buf, int size);

typedef int IOCanReadHandler(void *opaque);

typedef void IOHandler(void *opaque);

typedef struct IOHandlerRecord {
    int fd;                        /* socket descriptor */
    IOCanReadHandler *fd_read_poll;
    IOHandler *fd_read;            /* read-event handling callback */
    IOHandler *fd_write;           /* write-event handling callback */
    int deleted;                   /* deletion flag */
    void *opaque;
    struct pollfd *ufd;
    QLIST_ENTRY(IOHandlerRecord) next;  /* linked-list link */
} IOHandlerRecord;

io_handlers is the head of the linked list whose elements are of type IOHandlerRecord.
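
For reference, the list head can be declared with the same queue macros. A minimal sketch, assuming QEMU's qemu-queue.h; the exact declaration in the source may differ:

/* global list head for all registered I/O handlers */
static QLIST_HEAD(, IOHandlerRecord) io_handlers =
    QLIST_HEAD_INITIALIZER(io_handlers);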

When a new network connection is established, one only needs to initialize an IOHandlerRecord object and insert it into the list. QEMU implements a common function that initializes the new connection object and inserts it into the queue:

int qemu_set_fd_handler2(int fd, IOCanReadHandler *fd_read_poll,
                         IOHandler *fd_read, IOHandler *fd_write, void *opaque)
{
    /* create a new node object and insert it into the list */
    IOHandlerRecord *ioh;

    ioh = qemu_mallocz(sizeof(IOHandlerRecord));
    QLIST_INSERT_HEAD(&io_handlers, ioh, next);
    ioh->fd = fd;
    ioh->fd_read_poll = fd_read_poll;
    ioh->fd_read = fd_read;
    ioh->fd_write = fd_write;
    ioh->opaque = opaque;
    ioh->deleted = 0;
    return 0;
}

With the above encapsulation, the management of network sockets is separated from the handling of network events. The management part follows the uniform procedure described above and does not change when the specific business changes; in Spice, for example, QEMU is only responsible for monitoring network events, while the concrete event handling is left to whoever registered the event.
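
As an illustration, here is a hypothetical registration of a listening socket. accept_cb and register_listener are invented for this sketch; only qemu_set_fd_handler2 comes from the code above:

/* hypothetical example: register a listening socket so that accept_cb
   is invoked whenever a new connection is ready to be accepted */
static void accept_cb(void *opaque)
{
    int listen_fd = *(int *)opaque;
    int client = accept(listen_fd, NULL, NULL);
    /* ... hand the new connection over to the business logic ... */
}

static void register_listener(int *listen_fd)
{
    qemu_set_fd_handler2(*listen_fd, NULL /* no read-poll gate */,
                         accept_cb, NULL /* no write handler */, listen_fd);
}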

The registration of network events goes through one more layer of encapsulation: during the initialization of the core interface, the function below is assigned to the core->watch_add function pointer. It is implemented as follows:

static SpiceWatch *watch_add(int fd, int event_mask, SpiceWatchFunc func, void *opaque)
{
    SpiceWatch *watch;

    watch = qemu_mallocz(sizeof(*watch));
    watch->fd = fd;
    watch->func = func;
    watch->opaque = opaque;
    QTAILQ_INSERT_TAIL(&watches, watch, next);

    {
        IOHandler *on_read = NULL;
        IOHandler *on_write = NULL;

        watch->event_mask = event_mask;
        if (watch->event_mask & SPICE_WATCH_EVENT_READ) {
            on_read = watch_read;    /* internally calls func(..., SPICE_WATCH_EVENT_READ, ...) */
        }
        if (watch->event_mask & SPICE_WATCH_EVENT_WRITE) {
            on_write = watch_write;  /* internally calls func(..., SPICE_WATCH_EVENT_WRITE, ...) */
        }
        /* the following function is actually a wrapper around qemu_set_fd_handler2 */
        qemu_set_fd_handler(watch->fd, on_read, on_write, watch);
    }

    return watch;
}

With this encapsulation, libspice can concentrate on its own business and does not need to worry about how network events are delivered to it. To add a new feature, such as remote USB device support, one only needs to implement the handler functions in libspice; when the client's USB module initiates a network connection, libspice calls the watch_add callback of the core interface to register the connection and its handler functions with QEMU.
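
For example, a channel inside libspice might register its socket roughly like this. This is only a sketch: channel_register and channel_event_handler are invented names, though the watch_add signature matches the one shown above:

/* illustrative sketch: a libspice channel registering its socket
   through the core interface supplied by QEMU */
static void channel_event_handler(int fd, int event, void *opaque);

static SpiceWatch *channel_register(SpiceCoreInterface *core,
                                    int channel_socket, void *channel)
{
    return core->watch_add(channel_socket, SPICE_WATCH_EVENT_READ,
                           channel_event_handler, channel);
}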

In addition, when porting Spice to another platform, if the libspice code is to be reused unchanged, the network-processing part of QEMU must be ported as well. The encapsulation described above makes that port very simple.

2. Epoll Model Processing

This model is used only in the display-processing thread, to handle network messages within the process. As mentioned several times, display processing in libspice is implemented by a separate thread, which raises the problem of communication between threads. Spice creates a communication pipe inside the process through a socket pair: one end of the pair is exposed to the modules that need to communicate with the thread, including QEMU's virtual graphics device and libspice's message dispatcher, while the other end is kept by the thread itself for sending and receiving data.
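
The in-process channel can be pictured with a standard socketpair call. A minimal sketch, assuming AF_UNIX; the function name is illustrative:

#include <sys/socket.h>

static int create_channel(int sv[2])
{
    /* sv[0] is handed to the dispatcher / virtual graphics device;
       sv[1] stays with the worker thread for sending and receiving */
    return socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
}

The implementation framework of the worker thread is as follows: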

void *red_worker_main(void *arg)
{
    for (;;) {
        struct epoll_event events[MAX_EPOLL_SOURCES];
        int num_events;
        struct epoll_event *event;
        struct epoll_event *end;

        /* wait for network events */
        num_events = epoll_wait(worker.epoll, events, MAX_EPOLL_SOURCES, worker.epoll_timeout);
        worker.epoll_timeout = INF_EPOLL_WAIT;

        /* handle all the events */
        for (event = events, end = event + num_events; event < end; event++) {
            EventListener *evt_listener = (EventListener *)event->data.ptr;

            if (evt_listener->refs > 1) {
                evt_listener->action(evt_listener, event->events);
                if (--evt_listener->refs) {
                    continue;
                }
            }
            free(evt_listener);  /* refs == 0, release it */
        }

        if (worker.running) {
            int ring_is_empty;
            red_process_cursor(&worker, MAX_PIPE_SIZE, &ring_is_empty);
            red_process_commands(&worker, MAX_PIPE_SIZE, &ring_is_empty);
        }
        red_push(&worker);
    }
    red_printf("exit");
    return 0;
}
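
New event sources enter this loop through epoll_ctl. Here is a hedged sketch of how a listener might be registered with the worker's epoll instance; the EventListener and worker fields mirror the loop above, but this registration helper is illustrative, not copied from red_worker.c:

/* add an event source to the worker's epoll set; data.ptr carries the
   listener so the loop above can recover it from event->data.ptr */
static void add_listener(RedWorker *worker, EventListener *listener, int fd)
{
    struct epoll_event ev;

    ev.events = EPOLLIN | EPOLLOUT | EPOLLET;
    ev.data.ptr = listener;
    if (epoll_ctl(worker->epoll, EPOLL_CTL_ADD, fd, &ev) == -1) {
        red_printf("epoll_ctl failed");
    }
}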

3. The Timer Mechanism

Timers are another key event-triggering mechanism in QEMU, and one of the scourges that complicates code reading. Going back to the main_loop_wait function above, its last statement is qemu_run_all_timers(); this function traverses all timers in the system and executes the trigger function of each expired one. main_loop_wait itself is wrapped by the following main_loop function:

static void main_loop(void)
{
    for (;;) {
        do {
            bool nonblocking = false;
            main_loop_wait(nonblocking);
        } while (vm_can_run());
        ......
    }
}

In other words, the system keeps calling main_loop_wait in a loop, polling network events and timers. That covers QEMU's timer-triggering mechanism; next, let us look at how timers are implemented and used.

QEMU's qemu-timer.c is dedicated to the timer implementation. It maintains a global array, active_timers, which stores the list-head pointers of the various timer types in the system, somewhat like a hash table. Each timer list is sorted by activation time, which reduces query time and maximizes the punctuality of timer execution. The timer node structure in the list is defined as follows:

struct QEMUTimer {
    QEMUClock *clock;     /* timer state and clock type */
    int64_t expire_time;  /* activation time */
    QEMUTimerCB *cb;      /* callback executed when the timer fires */
    void *opaque;         /* user data, passed to the callback */
    struct QEMUTimer *next;
};

A new timer is created via the qemu_new_timer interface, but creation alone does not insert it into the global array; the timer is actually inserted into its list only when qemu_mod_timer is called. A timer registered this way normally fires only once; to implement a periodic timer, the timer's callback simply re-arms the timer itself. The other set of function pointers in the core interface also concerns timers. Timers of this kind are presumably not very efficient, but they place few demands on the platform.

After certain network connections are established, data transmission is paced by timers; the most typical case is the production of audio data and its push to the client. Once the audio device is initialized, a periodic timer is registered, and the audio data is sent to the client over the network connection in a loop.
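
A periodic audio timer along these lines might look like the sketch below. The 10 ms interval and the names audio_timer_cb, audio_init_timer, and send_audio_frame are invented for illustration; only qemu_new_timer and qemu_mod_timer come from the mechanism described above:

static QEMUTimer *audio_timer;

/* fires, pushes one chunk of audio, then re-arms itself; re-arming in
   the callback is what turns QEMU's one-shot timer into a periodic one */
static void audio_timer_cb(void *opaque)
{
    send_audio_frame(opaque);  /* placeholder for the real work */
    qemu_mod_timer(audio_timer, qemu_get_clock(rt_clock) + 10);
}

static void audio_init_timer(void *dev)
{
    audio_timer = qemu_new_timer(rt_clock, audio_timer_cb, dev);
    qemu_mod_timer(audio_timer, qemu_get_clock(rt_clock) + 10);
}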
