Cross-server migration of Docker containers: export and save

Source: Internet
Author: User
Tags: docker, ps


Preface:

I've been running our alerting platform in Docker for the past couple of days, but the host machine's own performance is poor, so MongoDB got killed several times. This time I set up a more powerful server; although it is a VM on OpenStack, the IOPS is quite good.


Thanks to Xiang Jun for his help; otherwise I wouldn't have been able to upgrade to the UEK kernel.


Migrating a program that runs in Docker is very convenient. Previously you had to redeploy the environment and static files on the new machine; with Docker you only need to export a backup archive and then copy it to the other server with scp or rsync.


On my setup, Redis and MongoDB live in separate containers. Enough talk; let's start the migration...



First, find the ID of the running container:


Original article: http://rfyiamcool.blog.51cto.com/1030776/1540414


root@host:~# docker ps

CONTAINER ID        IMAGE                           COMMAND             CREATED             STATUS              PORTS                   NAMES
459e57c9a5d9        rastasheep/ubuntu-sshd:latest   /bin/bash           About an hour ago   Up 45 minutes       22/tcp                  compassionate_ptolemy
70c74ebbfac4        rastasheep/ubuntu-sshd:14.04    /usr/sbin/sshd -D   About an hour ago   Up About an hour    0.0.0.0:49157->22/tcp   s
3ebbc244c486        rastasheep/ubuntu-sshd:latest   /usr/sbin/sshd -D   18 hours ago        Up About an hour    22/tcp                  redis_t2
ed7887b93aa4        rastasheep/ubuntu-sshd:latest   /usr/sbin/sshd -D   19 hours ago        Up About an hour    0.0.0.0:49153->22/tcp   redis_test


root@host:~# docker export 70c74ebbfac4 > ubuntu_sshd.tar
root@host:~# du -sh ubuntu_sshd.tar
353M    ubuntu_sshd.tar







Then upload the ubuntu_sshd.tar to another server.
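Before importing on the new host, it's worth verifying the copy arrived intact. A minimal sketch with checksums (the hostname newhost and remote path are assumptions; the echo merely creates a stand-in for the archive produced by docker export above):

```shell
# Stand-in for the exported archive (assumption: on the real host,
# ubuntu_sshd.tar already exists from `docker export`).
echo "container rootfs" > ubuntu_sshd.tar
sha256sum ubuntu_sshd.tar > ubuntu_sshd.tar.sha256

# Copy archive and checksum to the new server, e.g.:
#   scp ubuntu_sshd.tar ubuntu_sshd.tar.sha256 root@newhost:/root/
#   rsync -avP ubuntu_sshd.tar root@newhost:/root/   # resumable
# On the destination, verify the transfer before importing:
sha256sum -c ubuntu_sshd.tar.sha256
```

If the check prints `ubuntu_sshd.tar: OK`, the copy is intact and safe to import.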

root@host:~# cat ubuntu_sshd.tar | sudo docker import - niubi:latest
8f2baf1b1cf479e366524007faad6d2e2671fc693716043a4812556bc8ac9204
root@host:~#


Originally I only wanted to migrate the application, MongoDB, and Redis, but since we're migrating anyway, we might as well move all the images to the new node.


The import counterpart of docker export is cat xxx.tar | docker import - <name>. Here I named the image niubi:latest:


cat ubuntu_sshd.tar | sudo docker import - niubi:latest



The method above uses docker export, which captures the container's current state; docker save, by contrast, operates on images.

The main difference is that save preserves the image's layer history, so you can roll back to an earlier configuration, while export captures only the current snapshot.


We can see an image's history with docker images --tree. Docker's storage backend, AUFS, is an incremental (layered) file system: modifications made by users are saved as increments on top of earlier layers, which is why this history is visible at all.
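The layering idea can be illustrated without Docker at all. In this toy sketch, plain directories stand in for AUFS layers, and a copy where the upper layer wins stands in for the union mount (all names here are made up for illustration):

```shell
# Toy model of incremental layers; directories stand in for AUFS layers.
mkdir -p layer0 layer1 merged
echo "base" > layer0/a.txt            # file as shipped in the image layer
echo "edit" > layer1/a.txt            # the container's incremental change
cp -r layer0/. merged/                # lower layer copied in first
cp -r layer1/. merged/                # upper (newer) layer wins
cat merged/a.txt                      # prints "edit"
```

In these terms, docker save ships every layer directory separately (so you can roll back), while docker export ships only the merged view.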



Now let's check the backup produced by save. It comes to 1.1 GB, since it contains all of that layer history; the export test above produced a file of only about 353 MB.

root@host:~# docker save rastasheep/ubuntu-sshd > ubuntu_sshd.tar
root@host:~# du -sh ubuntu_sshd.tar
1.1G    ubuntu_sshd.tar
root@host:~#





I suspect that with a distributed file system such as MooseFS (MFS) or NFS, we could do even better by using Docker data volumes to map local directories into containers. That makes backups more targeted: after all, the environment itself doesn't change; only the data differs. With the data directory living on the distributed file system, migration becomes simpler still: just start the same environment on the other side and attach the directory.



sudo docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
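Restoring is the same trick in reverse: start a container that shares the data volume and untar the backup into it. The round-trip can be sketched locally without Docker (here ./dbdata stands in for the dbdata container's volume, and dbdata2 in the comment is an assumed name for the new volume container):

```shell
# Local simulation of the backup/restore round-trip; ./dbdata stands in
# for the data volume, no Docker required.
mkdir -p dbdata restore
echo "dump" > dbdata/dump.rdb
tar cf backup.tar dbdata                 # what the ubuntu container runs
(cd restore && tar xf ../backup.tar)     # restore step on the new host

# With Docker, restoring into a fresh volume container looks like this
# (dbdata2 is an assumed container name):
#   sudo docker run --volumes-from dbdata2 -v $(pwd):/backup ubuntu \
#       tar xvf /backup/backup.tar
cat restore/dbdata/dump.rdb              # prints "dump"
```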



For backup and migration, I recommend export: the archive save produces is simply too large, and the layer history is rarely useful for this purpose!


If you are more concerned about data safety, you can use data mapping such as Docker volumes.


This article is from the "Fengyun, it's her" blog. Please do not reprint without permission.
