One of the most frequently answered posts in the Docker forum: "Upgrade data within data containers"

Source: Internet
Author: User
Tags: postgres, database, glusterfs



Matlehmann
I have a data container with persistent data in a volume (for example, in /var/data). The data container provides persistent data to the software running in another container.
For new versions of the software, the persistent data needs to be upgraded (changes to structure or layout, etc.). The goal is to end up with another data container holding the upgraded data at the same location (/var/data), while keeping the old data container and its data intact.
That way I can still use the old data container with the old version of the software, in case something goes wrong.
But how can I do that? The steps required to achieve the desired result are not obvious to me.
I can run a command to upgrade the data, such as: docker run -i -t --name temp --volumes-from data -v /upgraded image /upgrade_script.sh
But then, how do I get the upgraded data back to the original location without overwriting the old data? If I run docker run -i -t --volumes-from temp image cp /upgraded /var/data, it will overwrite my old data. Do I have to use a host-mounted volume for the upgraded data, or is there a better solution?

Sam
Just guessing, because in general I prefer direct host-mounted volumes and I'm struggling to see the usefulness of data containers.
But... could you docker commit your data container and then save the resulting image, etc.?
Sven
Oh, also consider using docker commit to snapshot the container - Sam's suggestion is dead on.
Keeb
I actually use data containers like UNIX pipes; I think they fit the paradigm more naturally.
docker run --name some_pipe_storage some_container_which_generates_data
docker run --volumes-from some_pipe_storage something_that_operates_on_data
The syntax is quite tedious. Very powerful, but relatively primitive.

Sven
There is some interesting work going on around volume management tooling in Docker - I think it is aimed at 1.4, and I should do some research. (There will be a way to list Docker volumes and things to manipulate them.)
I might make a backup_data volume in a container, and then run a data-migration image against both data and backup_data - the first thing it could do is copy from data to backup_data, and then run the migration.
Then you can run the old and the new side by side, each connected to its respective data backend (with the read-only backup attached, perhaps?).
If you do this with host mounts it should be almost the same, either directly or represented through the data-container style.

Matlehmann
Your suggestion was my first line of thought too, but it doesn't meet my requirements, because after the process the migrated data and the original data end up under different paths. I don't see any way around this, because non-host volumes cannot be remounted at a different path: the path of a volume from a data container is static, even for a volume inherited via "--volumes-from".
This is different from host volumes, where I can change the mount location on each docker run invocation.
So I think the volume management tools you mention are badly needed. To me, the data-container idiom feels more like a workaround.
May I ask you to elaborate on the docker commit suggestion? I can't see it yet, at least for the use case at hand. As far as I understand, docker commit gives me a new image containing the current state of a container. That would include all the OS data I'm not interested in, in addition to the persistent data.

Sven
Oh, crud. You're right, the volume paths are currently static. So you need an extra step:
1. You have the existing data container with /data
2. Migrate into a temporary data container with /migration (with the original data mounted)
3. Migrate /migration into a newly created data container with /data (the second migration image does not need the original data volume mounted)
@cpuguy83 may be able to tell you more about the new tools :smile:
WRT docker commit - when you commit, you are not putting everything into a single all-containing image layer; you are making a new image layer that contains only the changes made in the container (relative to the image the container was started from).
So if you use a data container (rather than a host volume) for persistent data, logs and that sort of thing, you can use docker commit so that your snapshots/backups are just your persistent data - and docker export may let you store those layers elsewhere.
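For illustration, the snapshot idea might look like this; the container name data and the tag are placeholders, not commands from the thread:

```shell
# Snapshot a (hypothetical) data container as a new image layer; the
# layer stores only the changes made since the container's base image.
docker commit data data-backup:v10

# Export the container's whole filesystem as a tar archive, which can
# be stored outside the Docker host.
docker export data > data-backup.tar
```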

Cpuguy83
Yes, but I would not trust "commit", because you are limited to 127 layers unless you flatten the image out.
@matlehmann See github.com/cpuguy83/docker-volumes
It's a long way from a perfect solution, but it has worked pretty well in the meantime.
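One way to flatten an image and sidestep the layer limit (an assumption on my part; the thread does not spell it out) is to round-trip a container through docker export and docker import, which collapses everything into a single layer:

```shell
# Export discards the layer history; import creates a fresh
# single-layer image from the tar stream. Names are placeholders.
docker export data > data-flat.tar
docker import - data-flat:latest < data-flat.tar
```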

Matlehmann
@Sven Thank you for your reply and the additional information. I still don't understand step 3, "migrate /migration into a newly created data container with /data (the second migration image does not need the original data volume mounted)", because as things currently stand (with Docker 1.2 and no special volume commands), I don't see how I can have a container with only part of another container's volumes mounted. As I see it, it is all or nothing: either "--volumes-from other_container" or not. So if the migration container from step 2 has the original data mounted, then so does the container in step 3, and the original data gets overwritten by the copy operation from /migration to /data. Or am I missing something?
As for the "commit" command, thank you for the pointer; I need to think about it a bit more.

Matlehmann
@keeb This is a nice pattern, but as far as I can see it doesn't solve the problem I'm describing. All these "pipe" containers would still operate on the one "some_pipe_storage" volume; I cannot create a different container, with different data, at the given path without overwriting the original data. Or am I missing your point?

Sven
Well, let's see if I can explain with an example:
Suppose someone has created the Docker images webappv10, webappv11 and webapp_migrator_v10_to_v11.
Initially, you would already be running the 1.0-based system:
docker run -v /data --name datav10 busybox true
docker run -p 80:80 --volumes-from datav10 --name webv10 webappv10
Then comes the upgrade, which requires your data to be migrated. That is step 2 (since, as you point out, we cannot have two volumes at the same directory):
docker run -v /migration --name datav10-to-v11 busybox true
docker run --volumes-from datav10-to-v11 --volumes-from datav10 --name migration webapp_migrator_v10_to_v11
Then step 3, copy the migrated data into a new data container, with the data in the /data directory, ready to use:
docker run -v /data --volumes-from datav10-to-v11 --name datav11 busybox cp -r /migration /data
Then run the version 1.1 web application:
docker run -p 80:80 --volumes-from datav11 --name webv11 webappv11
And for extra credit, you'd best script it all.
(Updated to use the datav10-to-v11 volume container, following the discussion below.)
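The steps above can be collected into one script; the image names webappv10, webappv11 and webapp_migrator_v10_to_v11 are the hypothetical ones from the example, not real images:

```shell
#!/bin/sh
set -e

# Step 1: the running v1.0 system
docker run -v /data --name datav10 busybox true
docker run -d -p 80:80 --volumes-from datav10 --name webv10 webappv10

# Step 2: migrate into a buffer volume container at /migration; the
# migrator sees the old data plus the empty buffer volume
docker run -v /migration --name datav10-to-v11 busybox true
docker run --volumes-from datav10-to-v11 --volumes-from datav10 \
    --name migration webapp_migrator_v10_to_v11

# Step 3: copy the migrated data into a fresh /data container;
# datav10 keeps the untouched v1.0 data as a fallback
docker run -v /data --volumes-from datav10-to-v11 \
    --name datav11 busybox cp -r /migration/. /data

# Step 4: run the v1.1 web application against the new data container
docker run -d -p 80:80 --volumes-from datav11 --name webv11 webappv11
```

Note that cp -r /migration/. /data copies the directory contents rather than the directory itself; the example in the thread writes cp -r /migration /data.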

Matlehmann
@sven Thank you again for your detailed answer. I appreciate your thoughts and your time.
However, the procedure you outline does not work. It relies on the assumption that a "-v" volume overrides a volume at the same path inherited via "--volumes-from". I just tested it again to make sure, and that is not the case. This is why docker run -v /data --volumes-from migration --name datav11 busybox cp -r /migration /data would overwrite the data container datav10 holding my original data.

Sam
Is there a particular reason you prefer a data container over a simple volume (i.e., do you find it easier to reason about or handle)?

Sven
I was confusing myself. I have elaborated the steps so that there are never two "--volumes-from" containers providing /data dirs: we migrate into a buffer and then copy that into the new datav11 container.
@sam - there is a conceptual difference between a bind mount and a volume container; you're still doing the same 3 steps. The biggest differences for me are that bind mounts are local only, and assume you have the disk space there for them (I don't), whereas the volume-container method assumes your Docker data partition is large enough, and works the same locally and remotely.
If you change the docker run -v /data ... lines to docker run -v /local/data:/data ..., then you are using bind mounts instead.
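For comparison, a bind-mount version of the migration along the lines Sven's last sentence suggests; the host paths are placeholders and the migrator image is the hypothetical one from the earlier example:

```shell
# With bind mounts the container-side mount point can change per run,
# so no buffer container is needed: mount the old data at /data and a
# fresh host directory at /migration for the migrator to fill.
docker run -v /local/data-v10:/data -v /local/data-v11:/migration \
    --name migration webapp_migrator_v10_to_v11

# The v1.1 app then mounts the migrated directory directly at /data,
# while /local/data-v10 keeps the untouched v1.0 data.
docker run -d -p 80:80 -v /local/data-v11:/data --name webv11 webappv11
```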

Matlehmann
@sam I have now switched to bind-mounted volumes (or whatever the official term for "-v /host:/container" is) instead of data containers, because of the drawbacks listed in this thread. I had started out with data containers because the idiom is used and recommended all over the internet and seemed to be the "official way".

Matlehmann
@sven
- "datav10" has "/data" (via -v /data)
- "migration" has "/data" (via --volumes-from datav10) and "/migration" (via -v /migration)
- "datav11" has "/data" (via -v /data), "/migration" (via --volumes-from migration) and "/data" (via --volumes-from migration)

So the container "datav11" has two volumes defined for "/data", and to me it looks like the one from --volumes-from wins.

Sam
@sven I think I'm just trying to figure out why you would store this data in aufs, which seems like the wrong filesystem for the job. Btrfs would be OK, but aufs seems a strange choice for log files, Postgres databases and the like. Have I misunderstood the mechanics of data containers?

@matlehmann We use volumes extensively:
- We store all logs on host-mounted volumes outside the containers, which makes log rotation and durability easy. (The other option here is to stream them out of the container, but the mechanics of that are non-trivial; think of an nginx container - how do you get at its logs?)
- We store some configuration on a glusterfs-mounted volume, so we can sync things across the farm.

Matlehmann
@sven This would be a topic for another thread, but I would really like to hear about your glusterfs setup. Is there a ready-made image that can serve a glusterfs volume, or how do you do it?

Sam
@supermathie would be best placed to give the details of our glusterfs setup, but it is all set up in a very traditional way; we do not run gluster on Docker containers.

Sven
Oh (*&^, you're right.
I thought I was being clever and removed a step.
You need to put the /migration folder in its own data container, so that the last step avoids the problem.
I have updated the example steps to reflect this.

Sven
Yes, WRT the filesystem there is something to that - my Docker hosts mostly run on btrfs.
I think using bind mounts would work too, but I would still make data containers that link to them, and then use the same process as with the data containers above.

Matlehmann
@sven Thank you. I haven't tried this yet, but it makes sense. It's an awkward crazy-chicken dance, but it might work... Eagerly awaiting the volume commands.

Sven
Yes, consider me focused on the clumsy chicken dance - looking forward to turning those chickens into roast chicken legs with the coming releases.

This article is translated from the official Docker forum: https://forums.docker.com/t/upgrade-data-within-data-container/205/20

