Statement:
Reposting from this blog is welcome, but please keep the original author information!
Blog address: http://blog.csdn.net/halcyonbaby
Sina Weibo: @Looking for miracles
The content here is my own study, research, and summary; if it resembles other work, it is an honor.

Introduction and Use of Networking after Docker 1.9
Starting with Docker 1.9, networking was promoted from an experimental feature to an official one.
The command line now offers the following new commands:
[root@localhost system]# docker network --help
Usage: docker network [OPTIONS] COMMAND [OPTIONS]

Commands:
  create      Create a network
  connect     Connect container to a network
  disconnect  Disconnect container from a network
  inspect     Display detailed network information
  ls          List all networks
  rm          Remove a network

Run 'docker network COMMAND --help' for more information on a command.

  --help=false  Print usage
After the Docker daemon starts, you can see that three networks have been created by default,
using the three built-in network drivers bridge, null, and host respectively.
[root@localhost system]# docker network ls
NETWORK ID      NAME      DRIVER
f280d6a13422    bridge    bridge
f5d11bed22a2    none      null
18642f53648f    host      host
Let's take a closer look at the details of these three networks:
Name is the network's name, which the user can define arbitrarily.
Id is the network's internal UUID, globally unique.
Scope currently has two values, "local" and "global", indicating whether the network spans a single host or multiple hosts.
Driver is the name of the network driver.
IPAM holds the driver name and configuration used for IP address management (visible in the bridge network below).
Containers records information about the containers using this network.
Options records the various configuration items required by the driver.
[root@localhost temp]# docker network inspect none
[{
    "Name": "none",
    "Id": "1abfa4750ada3be20927c3c168468f9a64efd10705d3be8958ae1eef784b28ef",
    "Scope": "local",
    "Driver": "null",
    "IPAM": {"Driver": "default", "Config": []},
    "Containers": {},
    "Options": {}
}]
[root@localhost temp]# docker network inspect host
[{
    "Name": "host",
    "Id": "001c9c9047d90efff0b64bf80e49ff7ec33421374b2c895169a0f9e096eb791d",
    "Scope": "local",
    "Driver": "host",
    "IPAM": {"Driver": "default", "Config": []},
    "Containers": {},
    "Options": {}
}]
[root@localhost temp]# docker network inspect bridge
[{
    "Name": "bridge",
    "Id": "201fbcb64b75977889f5d9c1e88c756308a090eb611397dbd0bb5c824d429276",
    "Scope": "local",
    "Driver": "bridge",
    "IPAM": {
        "Driver": "default",
        "Config": [{"Subnet": "172.17.42.1/16", "Gateway": "172.17.42.1"}]
    },
    "Containers": {
        "4d4d37853115562080613393c6f605a9ec2b06c3660dfa0ca4e27f2da266773d": {
            "EndpointID": "09e332644c539cec8a9852a11d402893bc76a5559356817192657b5840fe2de3",
            "MacAddress": "02:42:ac:11:00:01",
            "IPv4Address": "172.17.0.1/16",
            "IPv6Address": ""
        }
    },
    "Options": {
        "com.docker.network.bridge.default_bridge": "true",
        "com.docker.network.bridge.enable_icc": "true",
        "com.docker.network.bridge.enable_ip_masquerade": "true",
        "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
        "com.docker.network.bridge.name": "docker0",
        "com.docker.network.driver.mtu": "1500"
    }
}]
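Since the inspect output is plain JSON, it can be decoded with standard tooling. Here is a minimal sketch in Go; the struct `networkInspect` and the helper `parseNetworks` are illustrative names of my own, not types that Docker exports.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// networkInspect is a trimmed-down view of one entry in the
// `docker network inspect` output shown above.
type networkInspect struct {
	Name   string
	ID     string `json:"Id"`
	Scope  string
	Driver string
	IPAM   struct {
		Driver string
		Config []struct {
			Subnet  string
			Gateway string
		}
	}
}

// parseNetworks decodes the JSON array printed by `docker network inspect`.
func parseNetworks(data []byte) ([]networkInspect, error) {
	var networks []networkInspect
	err := json.Unmarshal(data, &networks)
	return networks, err
}

func main() {
	// Abridged sample of the bridge network's inspect output.
	sample := []byte(`[{
		"Name": "bridge",
		"Id": "201fbcb64b75",
		"Scope": "local",
		"Driver": "bridge",
		"IPAM": {
			"Driver": "default",
			"Config": [{"Subnet": "172.17.42.1/16", "Gateway": "172.17.42.1"}]
		}
	}]`)

	networks, err := parseNetworks(sample)
	if err != nil {
		panic(err)
	}
	n := networks[0]
	fmt.Printf("%s scope=%s driver=%s subnet=%s\n",
		n.Name, n.Scope, n.Driver, n.IPAM.Config[0].Subnet)
	// → bridge scope=local driver=bridge subnet=172.17.42.1/16
}
```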
Various operations on the container network:
-> create / connect / disconnect / remove a network
[root@localhost temp]# docker network create -d bridge --ip-range=192.168.1.0/24 --gateway=192.168.1.1 --subnet=192.168.1.0/24 bridge2
b18f4fb74ebd32b9f67631fd3fd842d09b97c30440efebe254a786d26811cf66
[root@localhost temp]# docker network ls
NETWORK ID      NAME      DRIVER
1abfa4750ada    none      null
001c9c9047d9    host      host
b18f4fb74ebd    bridge2   bridge
201fbcb64b75    bridge    bridge
[root@localhost temp]# docker exec vim ip addr
Driver Plugin Mechanism and Driver Plugin Implementation

Brief Introduction
Docker plugin list:
http://docs.docker.com/engine/extend/plugins/
Docker's plugins adopt an out-of-process approach.
This has two advantages: plugins are easy to extend and can be added or removed dynamically, and their code is completely decoupled from Docker.
A plugin is a process running on the Docker host. It registers itself with Docker by placing a file in the plugin directory,
where it is found by Docker's discovery mechanism.
A short lowercase word is recommended for the plugin name. A plugin can run inside or outside a container; running it outside a container is recommended.

Plugin Directory
There are three kinds of files in the plugin directory:
.sock files are UNIX domain sockets.
.spec files are text files containing a URL, such as unix:///other.sock.
.json files are text files containing a full JSON specification for the plugin.
.sock files are generally placed under /run/docker/plugins; .spec/.json files are generally placed under /etc/docker/plugins
or /usr/lib/docker/plugins.
JSON file example:
{
  "Name": "plugin-example",
  "Addr": "https://example.com/docker/plugin",
  "TLSConfig": {
    "InsecureSkipVerify": false,
    "CAFile": "/usr/shared/docker/certs/example-ca.pem",
    "CertFile": "/usr/shared/docker/certs/example-cert.pem",
    "KeyFile": "/usr/shared/docker/certs/example-key.pem"
  }
}
Additional Notes on Plugins
A plugin needs to be started before Docker starts. When upgrading a plugin, stop the Docker daemon first, upgrade the plugin, and then start the Docker daemon again.
A plugin is activated the first time it is used: Docker searches the plugin directory for the specified plugin name. (I feel Docker should add an interface for querying the list of local plugins.)
Docker and the plugin communicate using HTTP RPC messages with JSON payloads; the request method is POST.
Handshake message:
/Plugin.Activate
Request: empty body
Response:
{
    "Implements": ["NetworkDriver"]
}
Plugin Implementation
A network plugin mainly needs to implement the following messages:
/Plugin.Activate
/NetworkDriver.GetCapabilities
/NetworkDriver.CreateNetwork
/NetworkDriver.DeleteNetwork
/NetworkDriver.CreateEndpoint
/NetworkDriver.EndpointOperInfo
/NetworkDriver.DeleteEndpoint
/NetworkDriver.Join
/NetworkDriver.Leave
/NetworkDriver.DiscoverNew
/NetworkDriver.DiscoverDelete
For details, see:
https://github.com/docker/libnetwork/blob/master/docs/remote.md

The call relationship between libnetwork and Docker:
Docker daemon --> libnetwork --> network plugin

CNM Introduction
https://github.com/docker/libnetwork/blob/master/docs/design.md
CNM stands for Container Network Model; it defines libnetwork's network model. There are three main concepts:

Network
A group of endpoints that can communicate with each other directly. Usually implemented with a Linux bridge, OVS, and so on.

Sandbox
A sandbox contains a container's network stack, generally including its interfaces, routes, DNS settings, and so on. Usually implemented with a network namespace.
A sandbox can contain multiple endpoints belonging to different networks.

Endpoint
An endpoint connects a sandbox to a network.
Usually implemented with a Linux bridge veth pair, an OVS internal port, or similar technology.

The main objects of CNM are:

NetworkController
Mainly responsible for managing drivers; provides the interface for creating networks.

Driver
Provides the network/sandbox/endpoint implementation.

Network, Endpoint, Sandbox
As described above.

Code Analysis
The Docker daemon creates the three default networks during daemon initialization.
In daemon.go, function NewDaemon:

func NewDaemon(config *Config, registryService *registry.Service) (daemon *Daemon, err error) {
	...
	d.netController, err = d.initNetworkController(config)
	if err != nil {
		return nil, fmt.Errorf("Error initializing network controller: %v", err)
	}
	...
}
The initNetworkController function lives in daemon_unix.go / daemon_windows.go; taking the Unix version as an example,
it mainly does the following:
initializes the controller and initializes the three built-in networks null/host/bridge.

func (daemon *Daemon) initNetworkController(config *Config) (libnetwork.NetworkController, error) {
	netOptions, err := daemon.networkOptions(config)
	if err != nil {
		return nil, err
	}

	controller, err := libnetwork.New(netOptions...)
	if err != nil {
		return nil, fmt.Errorf("error obtaining controller instance: %v", err)
	}

	// Initialize default network on "null"
	if _, err := controller.NewNetwork("null", "none", libnetwork.NetworkOptionPersist(false)); err != nil {
		return nil, fmt.Errorf("Error creating default \"null\" network: %v", err)
	}

	// Initialize default network on "host"
	if _, err := controller.NewNetwork("host", "host", libnetwork.NetworkOptionPersist(false)); err != nil {
		return nil, fmt.Errorf("Error creating default \"host\" network: %v", err)
	}

	if !config.DisableBridge {
		// Initialize default driver "bridge"
		if err := initBridgeDriver(controller, config); err != nil {
			return nil, err
		}
	}

	return controller, nil
}
Creating a network when starting a container

In api/server/router/container/container.go:
local.NewPostRoute("/containers/{name:.*}/start", r.postContainersStart)

postContainersStart calls the ContainerStart function.

daemon/start.go:
func (daemon *Daemon) ContainerStart(container *Container) (err error) {
	...
	if err := daemon.initializeNetworking(container); err != nil {
		return err
	}
	...
}

daemon/container_unix.go:
func (daemon *Daemon) initializeNetworking(container *Container) error {
	...
	if err := daemon.allocateNetwork(container); err != nil {
		return err
	}
	...
}

func (daemon *Daemon) allocateNetwork(container *Container) error {
	...
	for n := range container.NetworkSettings.Networks {
		if err := daemon.connectToNetwork(container, n, updateSettings); err != nil {
			return err
		}
	}
	...
}

func (daemon *Daemon) connectToNetwork(container *Container, idOrName string, updateSettings bool) (err error) {
	...
	// create endpoint
	ep, err = n.CreateEndpoint(endpointName, createOptions...)
	...
	// get sandbox
	sb := daemon.getNetworkSandbox(container)
	...
	// join sandbox
	if err := ep.Join(sb); err != nil {
		return err
	}
	...
}
Create Network
This actually calls libnetwork's NewNetwork interface.
All network APIs are routed to functions in daemon/network.go.
daemon/network.go:
// CreateNetwork creates a network with the given name, driver and other optional parameters
func (daemon *Daemon) CreateNetwork(name, driver string, ipam network.IPAM, options map[string]string) (libnetwork.Network, error) {
	c := daemon.netController
	if driver == "" {
		driver = c.Config().Daemon.DefaultDriver
	}

	nwOptions := []libnetwork.NetworkOption{}

	v4Conf, v6Conf, err := getIpamConfig(ipam.Config)
	if err != nil {
		return nil, err
	}

	nwOptions = append(nwOptions, libnetwork.NetworkOptionIpam(ipam.Driver, "", v4Conf, v6Conf))
	nwOptions = append(nwOptions, libnetwork.NetworkOptionDriverOpts(options))
	return c.NewNetwork(driver, name, nwOptions...)
}
Connect Container to Network
Let's look at connecting a container to a network.
You can see that the libnetwork interface is called in the end:
it creates an endpoint and joins it to the sandbox.
// ConnectContainerToNetwork connects the given container to the given
// network. If either cannot be found, an err is returned. If the
// network cannot be set up, an err is returned.
func (daemon *Daemon) ConnectContainerToNetwork(containerName, networkName string) error {
	container, err := daemon.Get(containerName)
	if err != nil {
		return err
	}
	return daemon.ConnectToNetwork(container, networkName)
}

// ConnectToNetwork connects a container to a network
func (daemon *Daemon) ConnectToNetwork(container *Container, idOrName string) error {
	if !container.Running {
		return derr.ErrorCodeNotRunning.WithArgs(container.ID)
	}
	if err := daemon.connectToNetwork(container, idOrName, true); err != nil {
		return err
	}
	if err := container.ToDiskLocking(); err != nil {
		return fmt.Errorf("Error saving container to disk: %v", err)
	}
	return nil
}

func (daemon *Daemon) connectToNetwork(container *Container, idOrName string, updateSettings bool) (err error) {
	if container.hostConfig.NetworkMode.IsContainer() {
		return runconfig.ErrConflictSharedNetwork
	}

	if runconfig.NetworkMode(idOrName).IsBridge() && daemon.configStore.DisableBridge {
		container.Config.NetworkDisabled = true
		return nil
	}

	controller := daemon.netController

	n, err := daemon.FindNetwork(idOrName)
	if err != nil {
		return err
	}

	if updateSettings {
		if err := daemon.updateNetworkSettings(container, n); err != nil {
			return err
		}
	}

	ep, err := container.getEndpointInNetwork(n)
	if err == nil {
		return fmt.Errorf("container already connected to network %s", idOrName)
	}
	if _, ok := err.(libnetwork.ErrNoSuchEndpoint); !ok {
		return err
	}

	createOptions, err := container.buildCreateEndpointOptions(n)
	if err != nil {
		return err
	}

	endpointName := strings.TrimPrefix(container.Name, "/")
	ep, err = n.CreateEndpoint(endpointName, createOptions...)
	if err != nil {
		return err
	}
	defer func() {
		if err != nil {
			if e := ep.Delete(); e != nil {
				logrus.Warnf("Could not rollback container connection to network %s", idOrName)
			}
		}
	}()

	if err := daemon.updateEndpointNetworkSettings(container, n, ep); err != nil {
		return err
	}

	sb := daemon.getNetworkSandbox(container)
	if sb == nil {
		options, err := daemon.buildSandboxOptions(container, n)
		if err != nil {
			return err
		}
		sb, err = controller.NewSandbox(container.ID, options...)
		if err != nil {
			return err
		}
		container.updateSandboxNetworkSettings(sb)
	}

	if err := ep.Join(sb); err != nil {
		return err
	}

	if err := container.updateJoinInfo(n, ep); err != nil {
		return derr.ErrorCodeJoinInfo.WithArgs(err)
	}

	return nil
}
Other features are similar to the analysis above.

libnetwork
The libnetwork code has several built-in drivers (bridge, null, host, overlay, remote, windows), of which bridge, null, and host are the common local drivers.
overlay is Docker's new multi-host networking solution. remote communicates with third-party custom driver plugins.
See the functions in libnetwork/drivers/remote/driver.go; ultimately they communicate with the plugin by sending REST messages.
...
func (d *driver) CreateNetwork(id string, options map[string]interface{}, ipV4Data, ipV6Data []driverapi.IPAMData) error {
	create := &api.CreateNetworkRequest{
		NetworkID: id,
		Options:   options,
		IPv4Data:  ipV4Data,
		IPv6Data:  ipV6Data,
	}
	return d.call("CreateNetwork", create, &api.CreateNetworkResponse{})
}

func (d *driver) DeleteNetwork(nid string) error {
	delete := &api.DeleteNetworkRequest{NetworkID: nid}
	return d.call("DeleteNetwork", delete, &api.DeleteNetworkResponse{})
}
...