For the past two days I have been running our alarm platform inside Docker, but the host itself performed poorly, which caused MongoDB to hang several times. This time I got a good server; although it is a VM inside OpenStack, the IOPS is very good.
Migrating a program that runs inside Docker is convenient. Before, migration meant redeploying the environment and the static files. With Docker, you just export a backup archive and scp or rsync it to the other server.
On my side, Redis and MongoDB run in separate containers. Without further ado, let's start the migration...
First, find the ID of the running container:
root@dev-ops:~# docker ps -a
CONTAINER ID   IMAGE                           COMMAND             CREATED             STATUS             PORTS                   NAMES
459e57c9a5d9   rastasheep/ubuntu-sshd:latest   /bin/bash           About an hour ago   Up minutes         22/tcp                  compassionate_ptolemy
70c74ebbfac4   rastasheep/ubuntu-sshd:14.04    /usr/sbin/sshd -D   About an hour ago   Up About an hour   0.0.0.0:49157->22/tcp   s
3ebbc244c486   rastasheep/ubuntu-sshd:latest   /usr/sbin/sshd -D   hours ago           Up About an hour   22/tcp                  redis_t2
ed7887b93aa4   rastasheep/ubuntu-sshd:latest   /usr/sbin/sshd -D   hours ago           Up About an hour   0.0.0.0:49153->22/tcp   redis_test
root@dev-ops:~#
root@dev-ops:~# docker export 70c74ebbfac4 > ubuntu_sshd.tar
root@dev-ops:~#
root@dev-ops:~#
root@dev-ops:~# du -sh ubuntu_sshd.tar
353M    ubuntu_sshd.tar
Then upload ubuntu_sshd.tar to the other server.
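A minimal sketch of the transfer step, assuming SSH access to the target host (the user and destination path here are placeholders):

```shell
# Copy the exported archive to the new server
scp ubuntu_sshd.tar root@31-53:/root/

# Or use rsync, which shows progress and can resume an interrupted transfer
rsync -avP ubuntu_sshd.tar root@31-53:/root/
```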
root@31-53:~# cat ubuntu_sshd.tar | sudo docker import - niubi:latest
8f2baf1b1cf479e366524007faad6d2e2671fc693716043a4812556bc8ac9204
root@31-53:~#
Originally I only wanted to migrate the program, MongoDB, and Redis. But since I was migrating anyway, I simply moved all the images to the new node.
The import command that matches docker export is "cat xxx.tar | docker import - name". I used the name niubi:latest:

cat ubuntu_sshd.tar | sudo docker import - niubi:latest
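Note that docker export only captures the container's filesystem; metadata such as CMD, ENTRYPOINT, and exposed ports is not preserved, so you must give the command explicitly when running the imported image. A sketch, assuming the sshd image layout from above (the host port is arbitrary):

```shell
# Start a container from the imported image; the command must be supplied
# because the export/import round trip drops CMD/ENTRYPOINT metadata
docker run -d -p 2222:22 niubi:latest /usr/sbin/sshd -D
```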
The approach above uses docker export, which captures the container's current state; docker save operates on images instead. The main difference is that save preserves the layer history, so you can roll back to a previous configuration; export keeps only the current state.
You can see that history with docker images --tree. Docker's filesystem is AUFS, an "incremental filesystem" in which user modifications are saved as incremental layers, which is why those historical increments are visible.
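The --tree flag existed only in older Docker releases and was later removed; on current versions, docker history shows the same layer chain for a given image:

```shell
# List an image's layers and the commands that created each one
docker history rastasheep/ubuntu-sshd:latest
```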
Let's look at the backup size with save. It is 1.1G, because it contains those history records. In the export test just now, the file was only about 350M.
root@dev-ops:~# docker save rastasheep/ubuntu-sshd > ubuntu_sshd.tar
root@dev-ops:~#
root@dev-ops:~#
root@dev-ops:~# du -sh ubuntu_sshd.tar
1.1G    ubuntu_sshd.tar
root@dev-ops:~#
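The counterpart of docker save is docker load, which restores the image on the target machine together with all of its layers and tags:

```shell
# On the new server: restore the saved image, history included
docker load < ubuntu_sshd.tar

# The rastasheep/ubuntu-sshd tags should now appear in the image list
docker images
```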
I think that if you have a distributed filesystem such as MFS or NFS, it is even better to use a Docker data volume to associate a local folder with the container. That way backups are more flexible: after all, the environment rarely changes, only the data does. If the data directory lives on the distributed filesystem, migration becomes much easier. As long as the environment can start anywhere, binding the directory is enough.
sudo docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
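The reverse direction works the same way: attach the volume to a fresh container and untar the archive into it. A sketch, with the container name dbdata2 being illustrative:

```shell
# Create a new data container exposing the same /dbdata volume
docker run -v /dbdata --name dbdata2 ubuntu /bin/bash

# Mount the volume plus the backup directory, then restore the archive
docker run --volumes-from dbdata2 -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar
```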
Choose whichever backup and migration method suits you; I recommend export. After all, save archives are too big, and for this use case the history is useless!
If you care more about data safety, use Docker volumes to map the data out of the container.
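For example, MongoDB's data directory can be kept on the host (or on an NFS/MFS mount) via a bind mount; the host path and container name below are illustrative:

```shell
# Map a host directory (which could itself be an NFS/MFS mount) into the
# container; migrating then means syncing /data/mongodb and starting the
# same image on the new node with the same -v option
docker run -d -v /data/mongodb:/data/db --name mongo_srv mongo
```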