manager.localserver is a gRPC server created on a local Unix socket; it waits for and handles the command requests sent by swarmctl (source code in the manager/controlapi directory). The code in the Manager.Run() function relevant to localserver is as follows:
baseControlAPI := controlapi.NewServer(m.RaftNode.MemoryStore(), m.RaftNode, m.config.SecurityConfig.RootCA())
......
proxyOpts := []grpc.DialOption{
	grpc.WithBackoffMaxDelay(time.Second),
	grpc.WithTransportCredentials(m.config.SecurityConfig.ClientTLSCreds),
}
cs := raftpicker.NewConnSelector(m.RaftNode, proxyOpts...)
m.connSelector = cs
......
// localProxyControlAPI is a special kind of proxy. It is only wired up
// to receive requests from a trusted local socket, and these requests
// don't use TLS, therefore the requests it handles locally should
// bypass authorization. When it proxies, it sends them as requests from
// this manager rather than forwarded requests (it has no TLS
// information to put in the metadata map).
forwardAsOwnRequest := func(ctx context.Context) (context.Context, error) { return ctx, nil }
localProxyControlAPI := api.NewRaftProxyControlServer(baseControlAPI, cs, m.RaftNode, forwardAsOwnRequest)
......
api.RegisterControlServer(m.localserver, localProxyControlAPI)
(1) First, look at the definitions of controlapi.Server and controlapi.NewServer():
// Server is the Cluster API gRPC server.
type Server struct {
	store  *store.MemoryStore
	raft   *raft.Node
	rootCA *ca.RootCA
}

// NewServer creates a Cluster API server.
func NewServer(store *store.MemoryStore, raft *raft.Node, rootCA *ca.RootCA) *Server {
	return &Server{
		store:  store,
		raft:   raft,
		rootCA: rootCA,
	}
}
The controlapi.NewServer() function creates the control server that responds to the command requests issued by swarmctl.
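To get a feel for what such a control server method looks like, here is a simplified sketch modeled on the GetService handler in manager/controlapi, as it would appear inside that package (imports such as google.golang.org/grpc/codes omitted, error messages abbreviated); treat it as illustrative rather than the exact source:

// Simplified sketch of a Control API handler, modeled on the
// GetService handler in manager/controlapi; not the exact source.
func (s *Server) GetService(ctx context.Context, request *api.GetServiceRequest) (*api.GetServiceResponse, error) {
	if request.ServiceID == "" {
		return nil, grpc.Errorf(codes.InvalidArgument, "service ID must be provided")
	}

	// Read the service from the in-memory store inside a read transaction.
	var service *api.Service
	s.store.View(func(tx store.ReadTx) {
		service = store.GetService(tx, request.ServiceID)
	})
	if service == nil {
		return nil, grpc.Errorf(codes.NotFound, "service %s not found", request.ServiceID)
	}
	return &api.GetServiceResponse{Service: service}, nil
}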
store.MemoryStore is a very important structure:
// MemoryStore is a concurrency-safe, in-memory implementation of the Store
// interface.
type MemoryStore struct {
	// updateLock must be held during an update transaction.
	updateLock sync.Mutex
	memDB      *memdb.MemDB
	queue      *watch.Queue
	proposer   state.Proposer
}
And the watch.Queue definition is as follows:
// Queue is the structure used to publish events and watch for them.
type Queue struct {
	broadcast *events.Broadcaster
}
......
// Watch returns a channel which will receive all items published to the
// queue from this point, until cancel is called.
func (q *Queue) Watch() (eventq chan events.Event, cancel func()) {
	return q.CallbackWatch(nil)
}
......
// Publish adds an item to the queue.
func (q *Queue) Publish(item events.Event) {
	q.broadcast.Write(item)
}
Simply put, when Server.store changes, the data is written to memDB and, at the same time, an event is published to queue, so that the manager goroutines listening on the watch channels can receive the event and handle it accordingly.
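A minimal sketch of this publish/watch pattern follows. The NewQueue constructor call and the import path are assumptions (the real constructor may take a buffer size, and real events are typed structs rather than strings):

package main

import (
	"fmt"

	"github.com/docker/swarmkit/watch" // import path assumed
)

func main() {
	// Assumed constructor; the actual signature may differ.
	q := watch.NewQueue()

	// Subscribe first, so the watcher sees everything published from now on.
	eventq, cancel := q.Watch()
	defer cancel()

	// In MemoryStore, a write transaction first updates memDB, then
	// publishes the matching event. events.Event is an empty interface,
	// so a plain string stands in for a typed event struct here.
	go q.Publish("service created")

	// A manager goroutine listening on the channel receives the event
	// and reacts to it (e.g. the orchestrator schedules tasks).
	fmt.Println("received:", <-eventq)
}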
The following code fills the newly created controlapi.Server variable with the current cluster's information:
baseControlAPI := controlapi.NewServer(m.RaftNode.MemoryStore(), m.RaftNode, m.config.SecurityConfig.RootCA())
(2)
proxyOpts := []grpc.DialOption{
	grpc.WithBackoffMaxDelay(time.Second),
	grpc.WithTransportCredentials(m.config.SecurityConfig.ClientTLSCreds),
}
cs := raftpicker.NewConnSelector(m.RaftNode, proxyOpts...)
m.connSelector = cs
......
// localProxyControlAPI is a special kind of proxy. It is only wired up
// to receive requests from a trusted local socket, and these requests
// don't use TLS, therefore the requests it handles locally should
// bypass authorization. When it proxies, it sends them as requests from
// this manager rather than forwarded requests (it has no TLS
// information to put in the metadata map).
forwardAsOwnRequest := func(ctx context.Context) (context.Context, error) { return ctx, nil }
localProxyControlAPI := api.NewRaftProxyControlServer(baseControlAPI, cs, m.RaftNode, forwardAsOwnRequest)
The code above creates a variable of type raftProxyControlServer:
type raftProxyControlServer struct {
	local        ControlServer
	connSelector *raftpicker.ConnSelector
	cluster      raftpicker.RaftCluster
	ctxMods      []func(context.Context) (context.Context, error)
}
The implication of localProxyControlAPI is: when a manager receives a request from swarmctl (swarmctl and that manager are, of course, on the same machine), the request is handled locally if this manager is the leader; otherwise it is forwarded to the cluster's leader.
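Every generated proxy method follows the same pattern: serve locally when this node is the leader, otherwise rewrite the context through ctxMods (forwardAsOwnRequest here) and forward the call over a connection to the leader. Below is an abridged sketch of one such method, so details may differ slightly from the generated code:

// Abridged sketch of a generated raftProxyControlServer method.
func (p *raftProxyControlServer) GetService(ctx context.Context, r *GetServiceRequest) (*GetServiceResponse, error) {
	if p.cluster.IsLeader() {
		// This manager is the leader: serve the request locally.
		return p.local.GetService(ctx, r)
	}
	// Not the leader: apply the context modifiers (forwardAsOwnRequest),
	// then forward the request to the leader via the connection selector.
	ctx, err := p.runCtxMods(ctx)
	if err != nil {
		return nil, err
	}
	conn, err := p.connSelector.Conn()
	if err != nil {
		return nil, err
	}
	return NewControlClient(conn).GetService(ctx, r)
}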
(3)
api.RegisterControlServer(m.localserver, localProxyControlAPI)
The code above binds the Unix socket behind localserver to the raftProxyControlServer, so Control API requests arriving on the local socket are dispatched to it.
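Putting it together, the local server ends up serving on the Unix socket roughly as sketched below. The socket path and the client-side dialing snippet are illustrative assumptions, not the exact SwarmKit code:

// Manager side: bind m.localserver (a *grpc.Server) to a local Unix
// socket. The path is illustrative.
lis, err := net.Listen("unix", "/var/run/swarm/control.sock")
if err != nil {
	log.Fatal(err)
}
go m.localserver.Serve(lis)

// Client side: swarmctl dials the same socket. No TLS is used, which is
// why localProxyControlAPI bypasses authorization for these requests.
conn, err := grpc.Dial("/var/run/swarm/control.sock",
	grpc.WithInsecure(),
	grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
		return net.DialTimeout("unix", addr, timeout)
	}))
if err != nil {
	log.Fatal(err)
}
client := api.NewControlClient(conn) // issue Control API RPCs through client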