QEMU Thread Pool Introduction

Sometimes we want to execute part of the work asynchronously by creating a thread, so that we can continue with other tasks while that work runs. However, if this happens frequently, repeatedly creating and destroying threads causes significant performance loss. This overhead can be avoided by creating one or more threads once and then reusing them.

The QEMU Implementation

QEMU implements its thread pool after the model of GLib's thread pools. The thread pool is currently used mainly to support raw files: when Linux AIO is not available, the AIO mechanism is implemented with threads, much as glibc implements POSIX AIO. This is also visible in the data structures: ThreadPoolElement, which represents a work item queued in the thread pool, embeds the BlockAIOCB structure used to describe an AIO request.

First look at the data structures associated with the thread pool:

typedef struct AIOCBInfo {
    void (*cancel_async)(BlockAIOCB *acb);
    AioContext *(*get_aio_context)(BlockAIOCB *acb);
    size_t aiocb_size;
} AIOCBInfo;

struct BlockAIOCB {
    const AIOCBInfo *aiocb_info;
    BlockDriverState *bs;
    BlockCompletionFunc *cb;
    void *opaque;
    int refcnt;
};

struct ThreadPoolElement {
    BlockAIOCB common;
    ThreadPool *pool;
    ThreadPoolFunc *func;
    void *arg;

    /* Moving state out of THREAD_QUEUED is protected by lock.  After
     * that, only the worker thread can write to it.  Reads and writes
     * of state and ret are ordered with memory barriers.
     */
    enum ThreadState state;
    int ret;

    /* Access to this list is protected by lock.  */
    QTAILQ_ENTRY(ThreadPoolElement) reqs;

    /* Access to this list is protected by the global mutex.  */
    QLIST_ENTRY(ThreadPoolElement) all;
};

struct ThreadPool {
    AioContext *ctx;
    QEMUBH *completion_bh;
    QemuMutex lock;
    QemuCond worker_stopped;
    QemuSemaphore sem;
    int max_threads;
    QEMUBH *new_thread_bh;

    /* The following variables are only accessed from one AioContext. */
    QLIST_HEAD(, ThreadPoolElement) head;

    /* The following variables are protected by lock.  */
    QTAILQ_HEAD(, ThreadPoolElement) request_list;
    int cur_threads;
    int idle_threads;
    int new_threads;     /* backlog of threads we need to create */
    int pending_threads; /* threads created but not running yet */
    bool stopping;
};

The ThreadPool structure is responsible for managing the threads in the pool. Threads are created through a bottom half (BH), and ThreadPool keeps five counters that track threads in different states: max_threads is the maximum number of threads the pool is allowed to create; new_threads is the backlog of threads that still need to be created; pending_threads counts threads that have been created but are not yet running; idle_threads counts idle threads; and cur_threads counts the threads currently belonging to the pool. Note that cur_threads also counts the not-yet-created threads tracked by new_threads.
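
To see how these counters interact, here is a simplified sketch of spawn_thread, adapted from QEMU's thread-pool.c (exact code varies between versions): cur_threads and new_threads are incremented immediately, while the actual thread creation is deferred to the new_thread_bh bottom half.

static void spawn_thread(ThreadPool *pool)
{
    pool->cur_threads++;
    pool->new_threads++;
    /* If a thread is already being created, it will work off the backlog
     * itself; otherwise schedule the bottom half that creates the thread
     * in the main loop context.
     */
    if (!pool->pending_threads) {
        qemu_bh_schedule(pool->new_thread_bh);
    }
}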

Thread Pool Creation

The first step is to create a new thread pool for a particular AioContext instance through the thread_pool_new function. Every member of the ThreadPool structure is initialized there, including new_thread_bh, the bottom half responsible for creating new worker threads, and completion_bh, the bottom half used to schedule the task's completion callback after a worker has finished executing it.
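
A rough sketch of this initialization, based on thread_pool_init_one in thread-pool.c (field names and the default max_threads value of 64 may differ between QEMU versions):

static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
{
    if (!ctx) {
        ctx = qemu_get_aio_context();
    }

    memset(pool, 0, sizeof(*pool));
    pool->ctx = ctx;
    /* BH that runs completion callbacks in the AioContext */
    pool->completion_bh = aio_bh_new(ctx, thread_pool_completion_bh, pool);
    qemu_mutex_init(&pool->lock);
    qemu_cond_init(&pool->worker_stopped);
    qemu_sem_init(&pool->sem, 0);
    pool->max_threads = 64;
    /* BH that actually creates worker threads */
    pool->new_thread_bh = aio_bh_new(ctx, spawn_thread_bh_fn, pool);

    QLIST_INIT(&pool->head);
    QTAILQ_INIT(&pool->request_list);
}

ThreadPool *thread_pool_new(AioContext *ctx)
{
    ThreadPool *pool = g_malloc(sizeof(*pool));
    thread_pool_init_one(pool, ctx);
    return pool;
}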

Task Submission

We submit a task by invoking the thread_pool_submit_aio function, whose prototype is:

BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
                                   ThreadPoolFunc *func, void *arg,
                                   BlockCompletionFunc *cb, void *opaque)

This function creates a ThreadPoolElement instance for the submitted task and adds it to the ThreadPool; if no idle thread is available (and the pool has not reached max_threads), it calls spawn_thread to request a new worker. spawn_thread in turn arranges for a QEMU thread to be created by scheduling pool->new_thread_bh.
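
A condensed sketch of the submission path, adapted from thread_pool_submit_aio (tracing and some details omitted):

BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
                                   ThreadPoolFunc *func, void *arg,
                                   BlockCompletionFunc *cb, void *opaque)
{
    ThreadPoolElement *req;

    /* Allocate the element that embeds the BlockAIOCB and queue it */
    req = qemu_aio_get(&thread_pool_aiocb_info, NULL, cb, opaque);
    req->func = func;
    req->arg = arg;
    req->state = THREAD_QUEUED;
    req->pool = pool;

    QLIST_INSERT_HEAD(&pool->head, req, all);

    qemu_mutex_lock(&pool->lock);
    if (pool->idle_threads == 0 && pool->cur_threads < pool->max_threads) {
        spawn_thread(pool);
    }
    QTAILQ_INSERT_TAIL(&pool->request_list, req, reqs);
    qemu_mutex_unlock(&pool->lock);

    /* Wake up one worker waiting on the semaphore */
    qemu_sem_post(&pool->sem);
    return &req->common;
}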

Thread Execution

The created thread runs the worker_thread function, which removes the first ThreadPoolElement node from the pool->request_list queue, executes its task function, and then schedules pool->completion_bh. This bottom half iterates over the pool->head list and, depending on the state of each ThreadPoolElement, decides whether to invoke the completion callback registered for that instance.
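
A trimmed sketch of that bottom half, based on thread_pool_completion_bh (the real code also re-schedules itself before invoking a callback, in case the callback re-enters aio_poll; tracing is omitted here):

static void thread_pool_completion_bh(void *opaque)
{
    ThreadPool *pool = opaque;
    ThreadPoolElement *elem, *next;

    QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
        if (elem->state != THREAD_DONE) {
            continue;           /* still queued or running */
        }
        QLIST_REMOVE(elem, all);
        if (elem->common.cb) {
            smp_rmb();          /* read state before ret */
            elem->common.cb(elem->common.opaque, elem->ret);
        }
        qemu_aio_unref(elem);
    }
}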

After a thread finishes a task, that is, after a ThreadPoolElement instance has been processed, the thread becomes idle and waits for the next submission. Synchronization between task submission and thread execution is done through pool->sem: thread_pool_submit_aio calls qemu_sem_post(&pool->sem) after queuing the task to raise the semaphore count, and worker_thread, after waking up on pool->sem, takes the next ThreadPoolElement node to execute from pool->request_list.
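
Putting it together, a condensed sketch of the worker loop, based on worker_thread in thread-pool.c (thread back-log handling and newer atomic accessors are omitted, so details differ across versions):

static void *worker_thread(void *opaque)
{
    ThreadPool *pool = opaque;

    qemu_mutex_lock(&pool->lock);
    pool->pending_threads--;

    while (!pool->stopping) {
        ThreadPoolElement *req;
        int ret;

        /* Wait (with a timeout) for a submission to post pool->sem */
        do {
            pool->idle_threads++;
            qemu_mutex_unlock(&pool->lock);
            ret = qemu_sem_timedwait(&pool->sem, 10000);
            qemu_mutex_lock(&pool->lock);
            pool->idle_threads--;
        } while (ret == -1 && !QTAILQ_EMPTY(&pool->request_list));
        if (ret == -1 || pool->stopping) {
            break;              /* idle for too long, or pool shutting down */
        }

        /* Dequeue the oldest request and run its task function */
        req = QTAILQ_FIRST(&pool->request_list);
        QTAILQ_REMOVE(&pool->request_list, req, reqs);
        req->state = THREAD_ACTIVE;
        qemu_mutex_unlock(&pool->lock);

        ret = req->func(req->arg);

        req->ret = ret;
        smp_wmb();              /* write ret before state */
        req->state = THREAD_DONE;

        qemu_mutex_lock(&pool->lock);
        /* Let the AioContext run the completion callbacks */
        qemu_bh_schedule(pool->completion_bh);
    }

    pool->cur_threads--;
    qemu_cond_signal(&pool->worker_stopped);
    qemu_mutex_unlock(&pool->lock);
    return NULL;
}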

Summary

The thread pool provides the ability to reuse threads, which improves performance when QEMU performs many I/O operations, and it also provides an AIO implementation for cases where Linux AIO is not available.

Reference:

https://developer.gnome.org/glib/2.46/glib-Thread-Pools.html
