Architecture and mechanism of BIND9 Note 1

Source: Internet
Author: User
Tags: epoll

BIND9 works through an event-driven mechanism, and the source of its events is I/O. On Linux it uses epoll as the I/O multiplexing mechanism.

This article is about the epoll path in BIND9. The discussion below follows the code path taken when a watcher thread is created (controlled by the USE_WATCHER_THREAD macro); in fact, even without a watcher thread the logic works the same way. See the setup_watcher function in lib/isc/socket.c (all code excerpts show only the epoll branch, since the file also contains kqueue, /dev/poll, and select implementations, which would be too much to quote):

```c
#elif defined(USE_EPOLL)
	manager->nevents = ISC_SOCKET_MAXEVENTS;
	manager->events = isc_mem_get(mctx, sizeof(struct epoll_event) *
				      manager->nevents);
	if (manager->events == NULL)
		return (ISC_R_NOMEMORY);
	manager->epoll_fd = epoll_create(manager->nevents);
	if (manager->epoll_fd == -1) {
		result = isc__errno2result(errno);
		isc__strerror(errno, strbuf, sizeof(strbuf));
		UNEXPECTED_ERROR(__FILE__, __LINE__,
				 "epoll_create %s: %s",
				 isc_msgcat_get(isc_msgcat, ISC_MSGSET_GENERAL,
						ISC_MSG_FAILED, "failed"),
				 strbuf);
		isc_mem_put(mctx, manager->events,
			    sizeof(struct epoll_event) * manager->nevents);
		return (result);
	}
#ifdef USE_WATCHER_THREAD
	result = watch_fd(manager, manager->pipe_fds[0], SELECT_POKE_READ);
	if (result != ISC_R_SUCCESS) {
		close(manager->epoll_fd);
		isc_mem_put(mctx, manager->events,
			    sizeof(struct epoll_event) * manager->nevents);
		return (result);
	}
#endif	/* USE_WATCHER_THREAD */
```

setup_watcher first allocates an array of struct epoll_event entries sized to the maximum number of socket fds to monitor (manager->nevents), then calls epoll_create to create an epoll fd, passing that maximum as the size argument.

My kernel version is 3.13, and man epoll_create says: "epoll_create() creates an epoll(7) instance. Since Linux 2.6.8, the size argument is ignored, but must be greater than zero." That is, since kernel 2.6.8 the size parameter is ignored, but the value passed must still be greater than 0. I later found a blog post that explains this point clearly: http://www.cnblogs.com/apprentice89/p/3234677.html. Continuing on: before the watcher thread is actually created, watch_fd registers pipe_fds[0], a pipe descriptor that acts as a readable stream (the socket fds mentioned above can likewise be treated as streams). The implementation of watch_fd:

```c
#elif defined(USE_EPOLL)
	struct epoll_event event;

	if (msg == SELECT_POKE_READ)
		event.events = EPOLLIN;
	else
		event.events = EPOLLOUT;
	memset(&event.data, 0, sizeof(event.data));
	event.data.fd = fd;
	if (epoll_ctl(manager->epoll_fd, EPOLL_CTL_ADD, fd, &event) == -1 &&
	    errno != EEXIST) {
		result = isc__errno2result(errno);
	}

	return (result);
```

This adds pipe_fds[0] to the epoll_fd listening queue; EPOLL_CTL_ADD is the operation type, which registers the fd with epoll_fd. The purpose of this pipe is to receive messages from the manager thread, such as a request for the watcher thread to exit.

Now let's look inside the watcher thread:

```c
static isc_threadresult_t
watcher(void *uap) {
	isc__socketmgr_t *manager = uap;
	isc_boolean_t done;
	int ctlfd;
	int cc;
#ifdef USE_KQUEUE
	const char *fnname = "kevent()";
#elif defined(USE_EPOLL)
	const char *fnname = "epoll_wait()";
#elif defined(USE_DEVPOLL)
	const char *fnname = "ioctl(DP_POLL)";
	struct dvpoll dvp;
#elif defined(USE_SELECT)
	const char *fnname = "select()";
	int maxfd;
#endif
	char strbuf[ISC_STRERRORSIZE];
#ifdef ISC_SOCKET_USE_POLLWATCH
	pollstate_t pollstate = poll_idle;
#endif

	/*
	 * Get the control fd here.  This will never change.
	 */
	ctlfd = manager->pipe_fds[0];
	done = ISC_FALSE;
	while (!done) {
		do {
#ifdef USE_KQUEUE
			cc = kevent(manager->kqueue_fd, NULL, 0,
				    manager->events, manager->nevents, NULL);
#elif defined(USE_EPOLL)
			cc = epoll_wait(manager->epoll_fd, manager->events,
					manager->nevents, -1);
#elif defined(USE_DEVPOLL)
			dvp.dp_fds = manager->events;
			dvp.dp_nfds = manager->nevents;
#ifndef ISC_SOCKET_USE_POLLWATCH
			dvp.dp_timeout = -1;
#else
			if (pollstate == poll_idle)
				dvp.dp_timeout = -1;
			else
				dvp.dp_timeout = ISC_SOCKET_POLLWATCH_TIMEOUT;
#endif	/* ISC_SOCKET_USE_POLLWATCH */
			cc = ioctl(manager->devpoll_fd, DP_POLL, &dvp);
#elif defined(USE_SELECT)
			LOCK(&manager->lock);
			memcpy(manager->read_fds_copy, manager->read_fds,
			       manager->fd_bufsize);
			memcpy(manager->write_fds_copy, manager->write_fds,
			       manager->fd_bufsize);
			maxfd = manager->maxfd + 1;
			UNLOCK(&manager->lock);

			cc = select(maxfd, manager->read_fds_copy,
				    manager->write_fds_copy, NULL, NULL);
#endif	/* USE_KQUEUE */

			if (cc < 0 && !SOFT_ERROR(errno)) {
				isc__strerror(errno, strbuf, sizeof(strbuf));
				FATAL_ERROR(__FILE__, __LINE__,
					    "%s %s: %s", fnname,
					    isc_msgcat_get(isc_msgcat,
							   ISC_MSGSET_GENERAL,
							   ISC_MSG_FAILED,
							   "failed"),
					    strbuf);
			}

#if defined(USE_DEVPOLL) && defined(ISC_SOCKET_USE_POLLWATCH)
			if (cc == 0) {
				if (pollstate == poll_active)
					pollstate = poll_checking;
				else if (pollstate == poll_checking)
					pollstate = poll_idle;
			} else if (cc > 0) {
				if (pollstate == poll_checking) {
					/*
					 * XXX: we'd like to use a more
					 * verbose log level as it's actually an
					 * unexpected event, but the kernel bug
					 * reportedly happens pretty frequently
					 * (and it can also be a false positive)
					 * so it would be just too noisy.
					 */
					manager_log(manager,
						    ISC_LOGCATEGORY_GENERAL,
						    ISC_LOGMODULE_SOCKET,
						    ISC_LOG_DEBUG(1),
						    "unexpected POLL timeout");
				}
				pollstate = poll_active;
			}
#endif
		} while (cc < 0);

#if defined(USE_KQUEUE) || defined(USE_EPOLL) || defined(USE_DEVPOLL)
		done = process_fds(manager, manager->events, cc);
#elif defined(USE_SELECT)
		process_fds(manager, maxfd, manager->read_fds_copy,
			    manager->write_fds_copy);

		/*
		 * Process reads on internal, control fd.
		 */
		if (FD_ISSET(ctlfd, manager->read_fds_copy))
			done = process_ctlfd(manager);
#endif
	}

	manager_log(manager, TRACE, "%s",
		    isc_msgcat_get(isc_msgcat, ISC_MSGSET_GENERAL,
				   ISC_MSG_EXITING, "watcher exiting"));

	return ((isc_threadresult_t)0);
}
```

The watcher runs an infinite loop. When I/O events occur on the descriptors registered with epoll_fd, epoll_wait returns, placing each ready socket fd and its event mask into the events array; the return value cc is the number of ready descriptors.

process_fds iterates through the array and, for each entry, checks whether the fd is the thread-control pipe. Events on the other socket fds are handled first; the control message in the pipe is processed afterwards.

```c
static isc_boolean_t
process_fds(isc__socketmgr_t *manager, struct epoll_event *events,
	    int nevents)
{
	int i;
	isc_boolean_t done = ISC_FALSE;
#ifdef USE_WATCHER_THREAD
	isc_boolean_t have_ctlevent = ISC_FALSE;
#endif

	if (nevents == manager->nevents) {
		manager_log(manager, ISC_LOGCATEGORY_GENERAL,
			    ISC_LOGMODULE_SOCKET, ISC_LOG_INFO,
			    "maximum number of FD events (%d) received",
			    nevents);
	}

	for (i = 0; i < nevents; i++) {
		REQUIRE(events[i].data.fd < (int)manager->maxsocks);
#ifdef USE_WATCHER_THREAD
		if (events[i].data.fd == manager->pipe_fds[0]) {
			have_ctlevent = ISC_TRUE;
			continue;
		}
#endif
		if ((events[i].events & EPOLLERR) != 0 ||
		    (events[i].events & EPOLLHUP) != 0) {
			/*
			 * epoll does not set IN/OUT bits on an erroneous
			 * condition, so we need to try both anyway.  This is a
			 * bit inefficient, but should be okay for such rare
			 * events.  Note also that the read or write attempt
			 * won't block because we use non-blocking sockets.
			 */
			events[i].events |= (EPOLLIN | EPOLLOUT);
		}
		process_fd(manager, events[i].data.fd,
			   (events[i].events & EPOLLIN) != 0,
			   (events[i].events & EPOLLOUT) != 0);
	}

#ifdef USE_WATCHER_THREAD
	if (have_ctlevent)
		done = process_ctlfd(manager);
#endif

	return (done);
}
```
