2.2, Docker 1.12 Simple Configuration and Storage Driver Introduction [Three]


In the previous article we installed Docker. In this chapter we will walk through a simple configuration of the Docker daemon's startup parameters.

Let's start with a brief introduction to Docker storage drivers. Docker supports several storage drivers: devicemapper, aufs, overlay, btrfs, and so on. On Ubuntu the default storage driver is aufs, which is reasonably suitable for production. On the CentOS family the default is devicemapper, which in its default configuration is not recommended for production, because it stores data on loopback virtual devices (/dev/loop*), which can lead to unstable data and blocks future scaling (when using devicemapper, the official recommendation is direct-lvm).
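You can check which driver your installation is actually using before going any further; a quick check (output abbreviated, and the loop-file lines only appear in the default loopback devicemapper mode):

[root@centos7.2~]# docker info | grep -i 'storage driver'
Storage Driver: devicemapper
[root@centos7.2~]# docker info | grep -i 'loop file'
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata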

/*********** Evil dividing line ************/

Here is a comparison of the storage drivers:

AUFS:

Layered into multiple read-only image layers plus a single read-write layer.

Modifying an existing file incurs write latency, because the entire file must first be copied up to the read-write layer.

File-read performance can be poor, because a file may have to be searched for across many layers.

Deleting a file only places a whiteout file in the read-write layer; the file in the image layer is not actually deleted.


Performance

Supports page caching, so memory usage is efficient.

The underlying mechanics of how AUFS shares files between image layers and containers uses the system's page cache very efficiently.

Modifying large files has latency, because the whole file must be copied to the read-write layer before it can be changed.
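The whiteout behavior is easy to observe on an aufs host. A rough sketch, assuming an Ubuntu machine already on the aufs driver; <container-layer-id> is a placeholder for the real directory under /var/lib/docker/aufs/diff/, and the output is abbreviated:

[root@ubuntu~]# docker run --name wh-demo ubuntu rm /etc/fstab # delete a file that lives in the image layer
[root@ubuntu~]# ls -a /var/lib/docker/aufs/diff/<container-layer-id>/etc/
.wh.fstab # the deletion is recorded as a whiteout file; the image layer itself is untouched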

OverlayFS:

Split into two layers: the lower layer is the image layer, and the upper layer is the read-write layer.

Modifying an existing file (one that lives in the image layer) has little impact for small files but noticeable write latency for large ones (small or large, any modification first copies the whole file up to the read-write layer).

File-read performance is better than AUFS: with only two layers, file lookup is cheaper.

Deleting a file only requires a whiteout file of the same name in the read-write layer; the image-layer file is not deleted.

Performance

Supports page caching, and multiple containers accessing the same file can share a single cached copy.

A single page cache entry is shared, making it efficient with memory and a good option for PaaS and other high-density use cases.

Modifying an existing file still means copying the entire file from the image layer to the read-write layer, so large files see write latency, but overall performance beats AUFS because lookups only traverse two layers.

Heavy inode consumption can exhaust the filesystem's inodes (mitigated by allocating extra inodes when the filesystem is formatted), as shown in the sketch below.
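The two-layer structure is simply the kernel's overlayfs mount underneath. A minimal sketch of such a mount outside of Docker, assuming a kernel of 3.18 or newer (the directory names are made up for illustration):

[root@centos7.2~]# mkdir -p /tmp/lower /tmp/upper /tmp/work /tmp/merged
[root@centos7.2~]# echo "from the image layer" > /tmp/lower/file
[root@centos7.2~]# mount -t overlay overlay -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work /tmp/merged
[root@centos7.2~]# echo "modified" > /tmp/merged/file # copy-up: the whole file lands in the upper (read-write) layer
[root@centos7.2~]# ls /tmp/upper
file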

Device Mapper:

The top layer is a snapshot; the snapshot holds pointers to the real data underneath.

Adding a file allocates a 64KB block and then writes the data into it, so workloads that write many small files run into performance problems.

Modifying a file is copy-on-write at block granularity: only the modified blocks are copied to the snapshot layer, not the whole file as with AUFS/OverlayFS; this is an advantage.

Deleting a file simply sets the snapshot layer's pointer to the real data to NULL.

Performance

Small-file writes take a hit: every write allocates a 64KB block before the data is written into it.

Copy-on-write performance is on average better than AUFS and OverlayFS, but devicemapper does not support the shared page caching that OverlayFS and AUFS do: when multiple containers access the same file, it is copied into memory multiple times, so memory usage is inefficient and it is a poor fit for high-density PaaS scenarios.

Devicemapper is not the most memory-efficient Docker storage driver. Launching n copies of the same container loads n copies of its files into memory. This can have a memory impact on your Docker host. As a result, the devicemapper storage driver may not be the best choice for PaaS and other high-density use cases.
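Since the loopback mode is discouraged, here is a rough sketch of the officially recommended direct-lvm alternative, assuming a spare block device /dev/sdb (the device name and pool sizes are assumptions; adapt them to your host):

[root@centos7.2~]# pvcreate /dev/sdb
[root@centos7.2~]# vgcreate docker /dev/sdb
[root@centos7.2~]# lvcreate --wipesignatures y -n thinpool docker -l 95%VG
[root@centos7.2~]# lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
[root@centos7.2~]# lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
# then point the daemon at the thin pool:
# dockerd --storage-driver=devicemapper --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool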

The official documentation on storage drivers is here:

https://docs.docker.com/engine/userguide/storagedriver/selectadriver/#select-a-storage-driver

(Performance comparison chart from the original post; image not reproduced here.)


/*********** Evil dividing line ************/

We can manage the Docker service through systemctl.

[root@centos7.2~]# systemctl enable docker # enable start on boot

[root@centos7.2~]# systemctl disable docker # disable start on boot

[root@centos7.2~]# systemctl daemon-reload # reload docker.service configuration

[root@centos7.2~]# systemctl start docker # start docker

[root@centos7.2~]# systemctl stop docker # stop docker

[root@centos7.2~]# systemctl restart docker # restart docker (not recommended: it can behave unpredictably when the daemon is busy and cannot be killed; stopping and then starting is safer)

[root@centos7.2~]# systemctl status docker # show docker status

Next, let's take a look at the startup parameters of Docker 1.12.

[root@centos7.2~]# vi /lib/systemd/system/docker.service

/*********** Evil dividing line ************/

[Unit]

Description=Docker Application Container Engine

Documentation=https://docs.docker.com

After=network.target

[Service]

Type=notify

# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker

ExecStart=/usr/bin/dockerd # the command executed at startup

ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity

LimitNPROC=infinity

LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this option.

#TasksMax=infinity

TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

[Install]

WantedBy=multi-user.target

/*********** Evil dividing line ************/

When we run systemctl start docker, what actually gets executed is the ExecStart line, so that is the only entry we normally need to configure.

In Docker 1.12 the default daemon binary changed from /usr/bin/docker to /usr/bin/dockerd, so don't be alarmed if you are on 1.12.
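Incidentally, instead of editing the packaged unit file under /lib/systemd/system directly (a package upgrade may overwrite it), a systemd drop-in is a safer place for custom startup options. A minimal sketch, assuming you only want to change ExecStart (the empty ExecStart= line is required to clear the packaged value):

[root@centos7.2~]# mkdir -p /etc/systemd/system/docker.service.d
[root@centos7.2~]# cat > /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --graph=/server/docker
EOF
[root@centos7.2~]# systemctl daemon-reload && systemctl restart docker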

Now let's go through the configuration parameters in detail:

/*********** Evil dividing line ************/

[root@centos7.2~]# dockerd --help # (dockerd is equivalent to the old docker -d)

-b, --bridge # network bridge for containers to use; defaults to docker0, other interfaces can be specified

--bip # bridge IP address, defining the container private network

--cgroup-parent # set the parent cgroup for all containers

--cluster-advertise # address or name to advertise for the cluster

--cluster-store # URL of the distributed storage backend

--cluster-store-opt=map[] # set cluster store options

--config-file=/etc/docker/daemon.json # specify the configuration file

-D # enable debug mode

--default-gateway # default IPv4 gateway for containers (--default-gateway-v6 for IPv6)

--dns=[] # set DNS servers

--dns-opt=[] # set DNS options

--dns-search=[] # set DNS search domains

--exec-opt=[] # additional runtime options

--exec-root=/var/run/docker # directory for runtime state files

--fixed-cidr # IPv4 subnet for fixed IP assignment

-G, --group=docker # group for the docker runtime (socket ownership)

-g, --graph=/var/lib/docker # home directory of the docker runtime

-H, --host=[] # socket(s) the docker daemon listens on after startup

--icc=true # allows containers to communicate with each other; disable it if your environment requires containers to be isolated from one another

--insecure-registry=[] # addresses of internal private registries

--ip=0.0.0.0 # default IP when binding container ports (this should be useful in a multi-host network)

--ip-forward=true # enables net.ipv4.ip_forward, i.e. kernel-level forwarding

--ip-masq=true # enable IP masquerading (containers reach the outside without exposing their own IP)

--iptables=true # allow docker to add iptables rules for containers

-l, --log-level=info # set the logging level

--label=[] # set key=value labels on the daemon

--live-restore # keep containers running while the daemon is down (new in 1.12; see the sketch after this list)

--log-driver=json-file # default log driver for containers

--log-opt=map[] # default log driver options for containers

--max-concurrent-downloads=3 # maximum concurrent downloads per pull

--max-concurrent-uploads=5 # maximum concurrent uploads per push

--mtu # set the container network MTU

--oom-score-adjust=-500 # OOM score adjustment for the daemon (range -1000 to 1000)

-p, --pidfile=/var/run/docker.pid # location of the pid file

-s, --storage-driver # set the docker storage driver

--selinux-enabled # enable SELinux support

--storage-opt=[] # storage driver options

--swarm-default-advertise-addr # default advertise address for the swarm node

--tls # use TLS

--tlscacert=~/.docker/ca.pem # CA certificate for TLS

--tlscert=~/.docker/cert.pem # TLS certificate file

--tlskey=~/.docker/key.pem # TLS key file

--userland-proxy=true # use a userland proxy for loopback traffic

--userns-remap # user/group to use for user namespace remapping

-v, --version # print the docker version
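As promised in the --live-restore entry above, a quick sketch of its behavior, assuming --live-restore has already been added to ExecStart (or "live-restore": true to /etc/docker/daemon.json); the container name and image are illustrative:

[root@centos7.2~]# docker run -d --name keepalive nginx
[root@centos7.2~]# systemctl restart docker
[root@centos7.2~]# docker ps # keepalive is still running; without --live-restore the daemon restart would have stopped it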

/*********** Evil dividing line ************/

Unless you have special requirements, most of these startup parameters do not need to be set. Here is the configuration I use myself, for reference:

ExecStart=/usr/bin/dockerd --label com.exmple.storage=server1 --graph=/server/docker -H tcp://0.0.0.0:5257 -H unix:///var/run/docker.sock --pidfile=/var/run/docker.pid

--label com.exmple.storage=server1 # attach a label to my docker daemon (used later to start containers on the specific docker host carrying this tag; useful in multi-server setups where a container must be placed on a designated server)

-H tcp://0.0.0.0:5257 # have my docker daemon listen on a TCP port

-H unix:///var/run/docker.sock # have my docker daemon listen on a unix socket

--graph=/server/docker # set my docker storage directory

--pidfile=/var/run/docker.pid # set the location of the docker daemon pid file
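Since 1.12 the same settings can also be expressed in /etc/docker/daemon.json instead of ExecStart. A sketch of the equivalent file (beware that an option set both here and on the command line, such as hosts, makes the daemon refuse to start):

{
  "labels": ["com.exmple.storage=server1"],
  "graph": "/server/docker",
  "hosts": ["tcp://0.0.0.0:5257", "unix:///var/run/docker.sock"],
  "pidfile": "/var/run/docker.pid"
}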

When the above parameters are configured, we can start Docker.

[root@centos7.2~]# systemctl stop docker

[root@centos7.2~]# systemctl daemon-reload # this step is required: it reloads the docker.service configuration; otherwise the daemon would come back up with the old configuration

[root@centos7.2~]# systemctl start docker
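Once started, you can verify that the daemon picked up the new options (the values below reflect the configuration above; output abbreviated):

[root@centos7.2~]# docker info | grep -i 'root dir'
 Docker Root Dir: /server/docker
[root@centos7.2~]# docker -H tcp://127.0.0.1:5257 version # the TCP socket answers as well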

At this point, the basic configuration and the storage driver introduction are complete.
