ZMQ Note II: IO Threads and poller_t


int major, minor, patch;
zmq_version (&major, &minor, &patch);   // 4.2.0

This article mainly analyzes the code, to make it easier to refer back to in the future.

=========================================

The previous article showed that the io_thread_t thread loop ultimately calls the loop () member function of the I/O multiplexing mechanism preferred on each platform (select_t/poll_t/epoll_t/kqueue_t).

Which I/O multiplexing mechanism is used is decided by preprocessor macros; see the poller.hpp header file.

This article analyzes the Windows platform, where select_t is chosen (not IOCP).

1. I/O threads

io_thread_t has three member variables:

    //  I/O thread accesses incoming commands via this mailbox.
    mailbox_t mailbox;                   // Receives command messages. Mailbox details will be covered in a
                                         // later article; to communicate with an io_thread_t, send a
                                         // command_t to its mailbox.

    //  Handle associated with mailbox' file descriptor.
    poller_t::handle_t mailbox_handle;   // The handle the mailbox fd is bound to.

    //  I/O multiplexing is performed using a poller object.
    poller_t *poller;                    // The selected I/O multiplexing object.

The io_thread_t class is very concise. Its main operations are: starting the thread, stopping the thread, and processing the command messages queued in the mailbox (the in_event function). When the mailbox has a message to process, io_thread_t is notified by the mailbox's fd becoming readable; that fd corresponds to the member variable mailbox_handle.

zmq::io_thread_t::io_thread_t (ctx_t *ctx_, uint32_t tid_) :
    object_t (ctx_, tid_)
{
    poller = new (std::nothrow) poller_t (*ctx_);
    alloc_assert (poller);
    mailbox_handle = poller->add_fd (mailbox.get_fd (), this);
    poller->set_pollin (mailbox_handle);   // During initialization the mailbox fd is added to poller_t's readable set.
}
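When the mailbox fd becomes readable, the poller calls in_event () on the io_thread_t, which drains the pending commands. A sketch of that function, close to the 4.2-era source (treat it as illustrative rather than an exact copy):

void zmq::io_thread_t::in_event ()
{
    //  Drain the mailbox: receive and dispatch commands until it is empty.
    command_t cmd;
    int rc = mailbox.recv (&cmd, 0);

    while (rc == 0 || errno == EINTR) {
        if (rc == 0)
            cmd.destination->process_command (cmd);
        rc = mailbox.recv (&cmd, 0);
    }

    errno_assert (rc != 0 && errno == EAGAIN);
}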

2. poller_t

poller_t is actually just a typedef:

typedef select_t poller_t;

typedef epoll_t poller_t;

...

It maps to the preferred I/O multiplexing mechanism on each platform (select_t/poll_t/epoll_t/kqueue_t). This article only analyzes select_t.
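For reference, the dispatch in poller.hpp looks roughly like the following; the ZMQ_USE_* macro names are recalled from the 4.2-era build system and may differ slightly between versions:

//  poller.hpp (simplified sketch): pick the poller implementation by build macro.
#if defined ZMQ_USE_KQUEUE
#include "kqueue.hpp"
#elif defined ZMQ_USE_EPOLL
#include "epoll.hpp"
#elif defined ZMQ_USE_DEVPOLL
#include "devpoll.hpp"
#elif defined ZMQ_USE_POLL
#include "poll.hpp"
#elif defined ZMQ_USE_SELECT
#include "select.hpp"
#endif

Each included header then supplies the matching typedef, e.g. typedef select_t poller_t; in select.hpp.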

For select_t, the socket collections are managed differently on Windows and on Linux, but the principle is similar. select_t defines two platform-independent, generic structures:

//  Internal state.
struct fds_set_t                       // select_t's management of the event sets for the fds of interest
{
    fds_set_t ();
    fds_set_t (const fds_set_t &other_);
    fds_set_t &operator= (const fds_set_t &other_);
    //  Convenient method to remove a descriptor from all sets.
    void remove_fd (const fd_t &fd_);

    fd_set read;
    fd_set write;
    fd_set error;
};

struct fd_entry_t                      // An fd paired with its event-handling object, i.e. an object that
                                       // implements the i_poll_events interface and needs an I/O thread to
                                       // handle its events.
{
    fd_t fd;
    zmq::i_poll_events *events;
};
typedef std::vector<fd_entry_t> fd_entries_t;

#if defined ZMQ_HAVE_WINDOWS
...
#else
fd_entries_t fd_entries;               // The set of event objects, one entry per fd.
fds_set_t fds_set;                     // All fd sets of interest, passed to the select system call.
fd_t maxfd;                            // The maximum fd value, required by the select system call.
bool retired;                          // Whether some fd's event object needs to be removed; if true, the
                                       // entries marked retired are erased from fd_entries.
#endif

poller_t also has a thread_t member variable, worker, a wrapper around the system thread (io_thread_t.poller->worker). When the worker thread is started, it actually executes the poller_t::loop () function.

    //  Handle of the physical thread doing the I/O work.
    thread_t worker;
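The start path is short; a sketch under the assumption that it follows the usual libzmq pattern (the exact thread-launch call varies a little between versions):

void zmq::select_t::start ()
{
    //  Launch the worker thread; it will run worker_routine with this poller as argument.
    worker.start (worker_routine, this);
}

void zmq::select_t::worker_routine (void *arg_)
{
    //  The thread entry point simply enters the event loop of the owning poller.
    ((select_t *) arg_)->loop ();
}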

add_fd registers an event object (an implementation of the i_poll_events interface) together with its fd in the fd_entries_t set. At the same time, the fd is added to fds_set.error, so its error events are monitored.

zmq::select_t::handle_t zmq::select_t::add_fd (fd_t fd_, i_poll_events *events_)
{
    fd_entry_t fd_entry;
    fd_entry.fd = fd_;
    fd_entry.events = events_;
#if defined ZMQ_HAVE_WINDOWS
    ...
#else
    fd_entries.push_back (fd_entry);
    FD_SET (fd_, &fds_set.error);
    if (fd_ > maxfd)
        maxfd = fd_;
#endif
    adjust_load (1);   // Adjust the fd count (an atomic increment/decrement).
    return fd_;
}

Note that add_fd does not put the fd into fds_set.read or fds_set.write; in other words, an fd registered only through add_fd will not have its read and write events monitored by select.

When an fd is removed, all the information associated with it is cleaned out of poller_t:

void zmq::select_t::rm_fd (handle_t handle_)
{
#if defined ZMQ_HAVE_WINDOWS
    ...
#else
    fd_entries_t::iterator fd_entry_it;
    for (fd_entry_it = fd_entries.begin ();
          fd_entry_it != fd_entries.end (); ++fd_entry_it)
        if (fd_entry_it->fd == handle_)       // Walk the set looking for the target element.
            break;
    zmq_assert (fd_entry_it != fd_entries.end ());

    //  Mark the entry as retired. It is not removed from the vector immediately; retired
    //  entries are erased in one pass after the select call completes.
    fd_entry_it->fd = retired_fd;
    fds_set.remove_fd (handle_);              // Remove the fd from the select sets.

    if (handle_ == maxfd) {                   // Update the maximum fd value.
        maxfd = retired_fd;
        for (fd_entry_it = fd_entries.begin (); fd_entry_it != fd_entries.end ();
              ++fd_entry_it)
            if (fd_entry_it->fd > maxfd)
                maxfd = fd_entry_it->fd;
    }

    retired = true;                           // Mark that a removal is pending.
#endif
    adjust_load (-1);                         // Adjust the fd count.
}
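The fds_set_t::remove_fd helper used above presumably just clears the fd from all three sets; a minimal sketch:

void zmq::select_t::fds_set_t::remove_fd (const fd_t &fd_)
{
    //  Drop the descriptor from the read, write and error sets alike.
    FD_CLR (fd_, &read);
    FD_CLR (fd_, &write);
    FD_CLR (fd_, &error);
}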

Read and write monitoring of an fd in poller_t is controlled by these functions:

    void set_pollin (handle_t handle_);      // Start monitoring the fd for readability.
    void reset_pollin (handle_t handle_);    // Stop monitoring the fd for readability.
    void set_pollout (handle_t handle_);     // Start monitoring the fd for writability.
    void reset_pollout (handle_t handle_);   // Stop monitoring the fd for writability.
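On the non-Windows path these are thin wrappers around FD_SET/FD_CLR on the corresponding set; a sketch for the read direction (the write direction is analogous):

void zmq::select_t::set_pollin (handle_t handle_)
{
    FD_SET (handle_, &fds_set.read);     // Start watching the fd for readability.
}

void zmq::select_t::reset_pollin (handle_t handle_)
{
    FD_CLR (handle_, &fds_set.read);     // Stop watching the fd for readability.
}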

poller_t inherits from poller_base_t, which holds the timer set:

    //  Clock instance private to this I/O thread.
    clock_t clock;

    //  List of active timers.
    struct timer_info_t
    {
        zmq::i_poll_events *sink;
        int id;
    };
    typedef std::multimap<uint64_t, timer_info_t> timers_t;
    timers_t timers;
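Timers are registered through poller_base_t::add_timer and removed with cancel_timer; sketches close to the source, keyed by absolute expiry time in milliseconds:

void zmq::poller_base_t::add_timer (int timeout_, i_poll_events *sink_, int id_)
{
    //  Key the timer by its absolute expiry time so the multimap keeps entries ordered.
    uint64_t expiration = clock.now_ms () + timeout_;
    timer_info_t info = {sink_, id_};
    timers.insert (timers_t::value_type (expiration, info));
}

void zmq::poller_base_t::cancel_timer (i_poll_events *sink_, int id_)
{
    //  Linear scan; cancellation is assumed to be rare.
    for (timers_t::iterator it = timers.begin (); it != timers.end (); ++it)
        if (it->second.sink == sink_ && it->second.id == id_) {
            timers.erase (it);
            return;
        }
}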

The timer set timers_t uses a std::multimap, which allows duplicate keys and keeps the timers ordered by expiry time. Handling timer events is also concise: starting from the entry with the smallest expiry time, each key is compared with the current timestamp; any entry whose key is not later than the current time is due. Each due timer's event handler is executed, and the entry is removed from the timer set after processing.

uint64_t zmq::poller_base_t::execute_timers ()
{
    //  Fast track.
    if (timers.empty ())
        return 0;

    //  Get the current time.
    uint64_t current = clock.now_ms ();

    //  Execute the timers that are already due.
    timers_t::iterator it = timers.begin ();
    while (it != timers.end ()) {
        if (it->first > current)
            return it->first - current;

        //  Trigger the timer.
        it->second.sink->timer_event (it->second.id);

        //  Remove it from the list of active timers.
        timers_t::iterator o = it;
        ++it;
        timers.erase (o);
    }

    //  There are no more timers.
    return 0;
}

3. The I/O thread's loop function

The loop does three things:

1. Execute any registered timers that are due.

2. Call select on the read/write/error sets in fds_set and handle the events that occur on each fd.

3. Erase the event objects that have been marked retired_fd from the fd_entries collection.

void zmq::select_t::loop ()
{
    while (!stopping) {
        //  Execute any due timers.
        int timeout = (int) execute_timers ();
        int rc = 0;
#if defined ZMQ_HAVE_WINDOWS
        ...
#else
        fds_set_t local_fds_set = fds_set;
        rc = select (maxfd + 1, &local_fds_set.read, &local_fds_set.write,
            &local_fds_set.error, timeout ? &tv : NULL);
        if (rc == -1) {
            errno_assert (errno == EINTR);
            continue;
        }

        //  Size is cached to avoid iteration through just added descriptors.
        for (fd_entries_t::size_type i = 0, size = fd_entries.size ();
              i < size && rc > 0; ++i) {
            fd_entry_t &fd_entry = fd_entries [i];
            ...
            if (FD_ISSET (fd_entry.fd, &local_fds_set.read)) {
                fd_entry.events->in_event ();
                --rc;
            }
            ...
            if (FD_ISSET (fd_entry.fd, &local_fds_set.write)) {
                fd_entry.events->out_event ();
                --rc;
            }
            ...
            if (FD_ISSET (fd_entry.fd, &local_fds_set.error)) {
                fd_entry.events->in_event ();
                --rc;
            }
        }

        //  After select returns and the fd events have been processed, erase in one pass
        //  the elements marked retired_fd from the fd_entries collection.
        if (retired) {
            retired = false;
            fd_entries.erase (std::remove_if (fd_entries.begin (),
                fd_entries.end (), is_retired_fd), fd_entries.end ());
        }
#endif
    }
}
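The is_retired_fd predicate passed to std::remove_if is presumably just a check against the retired marker; a minimal sketch:

bool zmq::select_t::is_retired_fd (const fd_entry_t &entry)
{
    //  An entry whose fd was set to retired_fd by rm_fd is due for removal.
    return (entry.fd == retired_fd);
}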

  
