The Engine struct has an eventsMonitor member:

```go
type Engine struct {
	......
	eventsMonitor *EventsMonitor
}
```
The EventsMonitor structure is defined as follows:

```go
// EventsMonitor monitors events
type EventsMonitor struct {
	stopChan chan struct{}
	cli      client.APIClient
	handler  func(msg events.Message) error
}
```
Here stopChan is used to signal the event-receiving loop to stop, cli is the underlying API client connection, and handler is the function that handles each received event.
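How these three pieces cooperate can be sketched with a much smaller monitor (names here are made up; a string channel stands in for the Docker API client stream, and a string payload for events.Message):

```go
package main

import "fmt"

// miniMonitor mirrors the shape of EventsMonitor: a stop channel,
// an event source standing in for the Docker API client, and a
// handler callback.
type miniMonitor struct {
	stopChan chan struct{}
	events   chan string
	handler  func(msg string) error
}

// start launches the receive loop and reports its exit on ec,
// analogous to EventsMonitor.Start.
func (m *miniMonitor) start(ec chan error) {
	m.stopChan = make(chan struct{})
	go func() {
		for {
			select {
			case <-m.stopChan:
				ec <- nil
				return
			case msg := <-m.events:
				if err := m.handler(msg); err != nil {
					ec <- err
					return
				}
			}
		}
	}()
}

// runMini feeds two events through the monitor, stops it, and
// returns what the handler saw.
func runMini() []string {
	var got []string
	m := &miniMonitor{
		events: make(chan string), // unbuffered: send completes only after receive
		handler: func(msg string) error {
			got = append(got, msg)
			return nil
		},
	}
	ec := make(chan error)
	m.start(ec)
	m.events <- "create"
	m.events <- "start"
	close(m.stopChan) // signal the loop to stop
	if err := <-ec; err != nil {
		panic(err)
	}
	return got
}

func main() {
	fmt.Println(runMini()) // [create start]
}
```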
The Engine.ConnectWithClient method assigns the eventsMonitor member:

```go
// ConnectWithClient is exported
func (e *Engine) ConnectWithClient(client dockerclient.Client, apiClient engineapi.APIClient) error {
	e.client = client
	e.apiClient = apiClient
	e.eventsMonitor = NewEventsMonitor(e.apiClient, e.handler)

	// Fetch the engine labels.
	if err := e.updateSpecs(); err != nil {
		return err
	}

	e.StartMonitorEvents()

	// Force a state update before returning.
	if err := e.RefreshContainers(true); err != nil {
		return err
	}

	if err := e.RefreshImages(); err != nil {
		return err
	}

	// Do not check error as older daemon doesn't support this call.
	e.RefreshVolumes()
	e.RefreshNetworks()

	e.emitEvent("engine_connect")

	return nil
}
```
The Engine.StartMonitorEvents code is as follows:

```go
// StartMonitorEvents monitors events from the engine
func (e *Engine) StartMonitorEvents() {
	log.WithFields(log.Fields{"name": e.Name, "id": e.ID}).Debug("Start monitoring events")
	ec := make(chan error)
	e.eventsMonitor.Start(ec)

	go func() {
		if err := <-ec; err != nil {
			if !strings.Contains(err.Error(), "EOF") {
				// failing node reconnect should use back-off strategy
				<-e.refreshDelayer.Wait(e.getFailureCount())
			}
			e.StartMonitorEvents()
		}
		close(ec)
	}()
}
```
Engine.StartMonitorEvents starts the monitor and then waits on the ec channel in a goroutine: if an error is received, it waits out a back-off delay (skipped when the error contains "EOF") and then calls Engine.StartMonitorEvents again, restarting the monitoring loop.
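This restart-on-failure pattern can be sketched as a self-contained loop (the doubling delay here is a hypothetical stand-in for Swarm's refreshDelayer back-off, and a fake flaky session replaces the real monitor):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// monitorOnce simulates one monitoring session against a flaky
// engine: it fails *failuresLeft times before succeeding.
func monitorOnce(failuresLeft *int) error {
	if *failuresLeft > 0 {
		*failuresLeft--
		return errors.New("connection reset")
	}
	return nil
}

// startMonitor restarts the session after each failure, doubling a
// (hypothetical) back-off delay, and returns how many restarts it took.
func startMonitor(failuresLeft *int) int {
	restarts := 0
	delay := time.Millisecond
	for {
		if err := monitorOnce(failuresLeft); err != nil {
			restarts++
			time.Sleep(delay) // back off before reconnecting
			delay *= 2
			continue
		}
		return restarts
	}
}

func main() {
	failures := 3
	fmt.Println(startMonitor(&failures)) // 3
}
```

Swarm does the restart by recursive call rather than a loop, but the effect is the same: monitoring resumes after every failure.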
The EventsMonitor.Start function code is as follows:

```go
// Start starts the EventsMonitor
func (em *EventsMonitor) Start(ec chan error) {
	em.stopChan = make(chan struct{})

	responseBody, err := em.cli.Events(context.Background(), types.EventsOptions{})
	if err != nil {
		ec <- err
		return
	}

	resultChan := make(chan decodingResult)

	go func() {
		dec := json.NewDecoder(responseBody)
		for {
			var result decodingResult
			result.err = dec.Decode(&result.msg)
			resultChan <- result
			if result.err == io.EOF {
				break
			}
		}
		close(resultChan)
	}()

	go func() {
		defer responseBody.Close()
		for {
			select {
			case <-em.stopChan:
				ec <- nil
				return
			case result := <-resultChan:
				if result.err != nil {
					ec <- result.err
					return
				}
				if err := em.handler(result.msg); err != nil {
					ec <- err
					return
				}
			}
		}
	}()
}
```
The logic here essentially issues an HTTP GET /events request to the Docker Engine and waits for the streamed response. Because this request keeps its underlying connection occupied for the lifetime of the event stream, subsequent HTTP interactions with the engine will establish a new HTTP connection. The documentation of net/http's Response explains why:
```go
type Response struct {
	......
	// Body represents the response body.
	//
	// The http Client and Transport guarantee that Body is always
	// non-nil, even on responses without a body or responses with
	// a zero-length body. It is the caller's responsibility to
	// close Body. The default HTTP client's Transport does not
	// attempt to reuse HTTP/1.0 or HTTP/1.1 TCP connections
	// ("keep-alive") unless the Body is read to completion and is
	// closed.
	//
	// The Body is automatically dechunked if the server replied
	// with a "chunked" Transfer-Encoding.
	Body io.ReadCloser
	......
}
```
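Such a long-lived streaming response can be demonstrated with net/http/httptest (the /events path and "event-N" payloads are invented for this sketch):

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// streamDemo starts a server that flushes two lines one at a time,
// reads them off a single long-lived response body, and returns them.
func streamDemo() []string {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		f := w.(http.Flusher)
		fmt.Fprintln(w, "event-1")
		f.Flush() // each flush reaches the client before the body ends
		fmt.Fprintln(w, "event-2")
		f.Flush()
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/events")
	if err != nil {
		panic(err)
	}
	// The connection is dedicated to this stream until Body is read
	// to completion and closed, as the Response documentation notes.
	defer resp.Body.Close()

	var lines []string
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		lines = append(lines, scanner.Text())
	}
	return lines
}

func main() {
	fmt.Println(streamDemo()) // [event-1 event-2]
}
```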
If you want to stop this EventsMonitor, you can use the EventsMonitor.Stop method:

```go
// Stop stops the EventsMonitor
func (em *EventsMonitor) Stop() {
	if em.stopChan == nil {
		return
	}
	close(em.stopChan)
}
```
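Stop works by closing stopChan rather than sending on it: closing a channel wakes every goroutine receiving from it at once, and the nil check guards against Stop being called before Start ever ran. A minimal sketch of this close-to-broadcast idiom (all names invented):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// stopAll launches n workers that all block on one stop channel,
// closes it once, and returns how many workers were released.
func stopAll(n int) int {
	stopChan := make(chan struct{})
	var released int32
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-stopChan // a closed channel unblocks every receiver
			atomic.AddInt32(&released, 1)
		}()
	}
	close(stopChan) // one close releases all n workers at once
	wg.Wait()
	return int(atomic.LoadInt32(&released))
}

func main() {
	fmt.Println(stopAll(3)) // 3
}
```

A send on the channel would wake only one receiver; close is what makes the stop signal a broadcast.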