Docker Getting Started Tutorial

Source: Internet
Author: User
Tags: docker hub, docker run, docker registry

Getting Started with Docker


This article is an original work by fireaxe, released under the GPL. It may be freely copied and reproduced, provided the document is kept intact and the original author and link are credited. The content may be used freely, but no warranties are made as to the consequences arising from its use.

Author: haoqiang1531@outlook.com Blog: fireaxe.blog.chinaunix.net


1. What is Docker?

In principle, Docker was originally built on two technologies: LXC and AUFS.

Q: What is LXC?
A: LXC (Linux Containers) is a container technology in the Linux kernel, roughly equivalent to a lightweight Linux virtual machine. Compared with instruction-set virtualization such as VirtualBox or VMware, its advantage is that it uses the host system's kernel directly, so an LXC container can be seen as a virtual machine that shares the host's kernel. Its disadvantage follows from the same fact: because the container uses the host's kernel, whatever runs inside it must also be Linux. If you need a non-Linux system, you can only fall back on a full virtual machine such as VMware.

Q: What scenarios call for Docker?
(Strictly speaking this question belongs after the background material below, but given its importance it has been moved to the front; most readers only look at the first few paragraphs anyway.)
1) Cloud deployment. (The author has not done this personally, so this part is second-hand.) Clouds used to be built on virtual machines; once Docker appeared, applications that did not need a full operating system quickly moved over. LXC could do this too in principle, but any cloud platform is a large-scale deployment, and LXC lacks the ease of deployment and ease of migration that large-scale use requires.
2) A company CI platform. Docker can separate the CI platform from the tools it uses, making upgrades more flexible; the tools themselves are backed up as images (a form of data backup). Docker also helps package the test environment: a Dockerfile can automatically assemble a testable environment from the latest versions of everything, eliminating environmental interference. Once testing is complete, the same image can be released externally, avoiding the configuration problems customers hit when reinstalling the software themselves. (Previously you had to test against many environments; now the environment ships together with the release, so multi-environment testing is no longer needed.)
3) Rapid construction of development environments. With the development environment packaged in Docker, whenever someone gets a new computer, they just pull an image and are ready in minutes.
Q: Why is LXC able to achieve isolation?
A: Linux boots by starting the kernel, and the kernel then starts user space. Nothing prevents the kernel from starting more than one user space; all that is needed is isolation inside the kernel. This is why LXC has to be implemented in the kernel: each user space then has no need to know that any other user space exists besides itself.
Q: How can containers running different distributions coexist on one system?
A: Linux distributions differ only in user space; the kernel is the same across them, and this is exactly what makes running different distributions side by side possible. LXC provides only the kernel to each container; different user spaces are then constructed on top of it according to the needs of each distribution.
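The shared-kernel point can be checked directly. A minimal sketch, assuming Docker is installed and the stock `ubuntu` and `alpine` images are available; the containerized commands are shown as comments because they need a running Docker daemon:

```shell
# "uname -r" prints the release string of the running kernel:
uname -r
# Inside containers built from different distributions, the output is
# identical to the host's, because only user space differs:
#   docker run --rm ubuntu uname -r
#   docker run --rm alpine uname -r
```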
Q: Since LXC already provides containers, why not use LXC directly?
A: It is not that Docker is better in every respect; it depends on the application scenario. LXC is essentially a virtual-machine technology: if all I want is to run a different distribution, or a different version of the same distribution, LXC is entirely sufficient. Docker is more like a way of dividing a system into services. Systems are becoming more and more complex, and services running on the same machine interfere with each other in all kinds of ways. Co-location also hurts migration: moving a service to another machine runs into endless environment-configuration and dependency problems. LXC can address this too, but since each LXC container gets only kernel support, the user-space environment has to be set up from scratch in every container. If three containers all need an Apache server, I have to install it in each of them, which is obviously wasteful; likewise, if several development environments need the GCC compiler, multiple copies get installed. So someone turned their attention to reusing parts of user space.
Q: How is reuse at the user-space level implemented?
A: Originally, Docker was a technology built on LXC plus AUFS, and AUFS is what lets users reuse parts of user space. User space is essentially a file system, so reusing user space amounts to reusing the file system. AUFS can stack multiple directories and set the read/write properties of each directory individually. Simply put, LXC generates a file system for each container that is completely isolated from the outside world, so that from the container's point of view it is the only operating system; AUFS then stacks file systems on top of this, enabling multiple containers to share a subset of the file system.

Q: What is the significance of the file-system sharing that AUFS implements?
A: Suppose I want two containers, one running a MySQL server and one running a Redmine server, both on Ubuntu. With plain LXC, I need to construct two containers that each contain Ubuntu, and then install the two pieces of software separately. With Docker, I can construct one Ubuntu container and then derive two containers from it, one for the MySQL server and one for the Redmine server. The Ubuntu part is read-only for the containers derived from it. Later, if users find that a few applications need to be added on top of MySQL, it is easy to derive again from the MySQL-server container. Reuse is thus achieved through derivation. For more details, see "10 pictures to take you to an in-depth understanding of Docker containers and images" (http://dockone.io/article/783).

Q: What is the difference between an image and a container?
A: An image is a read-only copy of a container. If a child container reused its parent container directly, then whenever one child modified the parent's contents, the parent's other children would be affected. The parent container is therefore frozen into a read-only image, which its child containers cannot modify.
On the other hand, a container is dynamic, similar to a working copy of code managed in git, and an image is equivalent to a commit (indeed, the Docker command that generates an image from a container is called commit). A container can only be used by its own developer; only once it has been committed into an image can others branch off from it and develop in parallel. Of course, after the commit the image exists only locally; to collaborate with more people, you also need to push it to a server with "docker push". Docker's server is called a Docker registry.
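The commit-then-push workflow can be sketched as follows. All names here (container "webapp", Docker Hub user "alice") are hypothetical, and the docker commands need a running daemon, so they are shown as comments:

```shell
# Freeze a container into an image, then publish it to the registry:
#   docker commit webapp alice/webapp:v1
#   docker login
#   docker push alice/webapp:v1
# The image reference pushed to Docker Hub has the form <user>/<repo>:<tag>:
user=alice; repo=webapp; tag=v1
echo "${user}/${repo}:${tag}"    # prints alice/webapp:v1
```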
Q: What is a Dockerfile?
A: A Dockerfile is a script that generates an image, commonly used to deploy production environments. Example 1: I have developed a piece of software and need to publish a Docker image every week. The manual process is to pull a base image, then download and install my software, and finally commit a new image for publishing. With a Dockerfile I can automate this: each release is a single "docker build" that uses the written Dockerfile to generate the new image. Example 2: a production environment depends on multiple components, and these components are constantly being updated. With plain images, you would have to repackage and redistribute the image after every update. With a Dockerfile it is much better: whenever the environment needs updating, you only need to rerun the build, and it automatically downloads and installs the latest components according to its instructions. In short, the role of a Dockerfile is that of a scripting language that automates the packaging of an environment; its role during development itself is not that large.
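A minimal sketch of Example 1. The package name "mysoft" and the tag "alice/mysoft" are placeholders, not real software; the build step needs a Docker daemon, so it is shown as a comment:

```shell
# Write a minimal Dockerfile; "mysoft" stands in for the actual package.
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y mysoft
CMD ["mysoft"]
EOF

# Each weekly release is then a single command, which re-downloads the
# latest components every time it runs:
#   docker build -t alice/mysoft:latest .
grep -c '^FROM' Dockerfile    # prints 1
```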
2. Common commands

Command                          Explanation
create [--name <name>] <image>   Create a container from the specified image
start <container>                Start the specified container
run [--name <name>] <image>      "docker create" + "docker start"
  -ti                            Allocate a pseudo-terminal and keep stdin open (interactive)
  -d                             Run in the background; do not block until the command exits
  -v <host>:<container>          Map a host directory into the container
ps [-a]                          List running containers
  -a                             List all containers, including stopped ones
images [-a]                      List images
  -a                             List all images, including the intermediate layers that make them up
history <image>                  Show an image and the layers that make it up
stop <container>                 Shut down a container
pause <container>                Pause a container
rm <container>                   Delete a container
commit <container>               Create a new image from a container
rmi <image>                      Delete an image
pull <image>                     Download the specified image to the local machine
push <image>                     Upload an image to Docker Hub (or another registry)
login                            Log in to Docker Hub
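Putting several of the commands above together, a typical container lifecycle might look like this. Names are hypothetical, and the docker lines need a running daemon, so they are shown as comments:

```shell
#   docker pull ubuntu               # download the image locally
#   docker create --name web ubuntu  # image -> container
#   docker start web                 # start the container
#   docker ps -a                     # list all containers, running or not
#   docker stop web                  # shut it down
#   docker rm web                    # delete the container
#   docker rmi ubuntu                # delete the image
echo "pull -> create -> start -> stop -> rm -> rmi"
```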

3. Docker walkthrough
For a hands-on walkthrough, refer directly to the official guide: http://docs.docker.com/mac/started/
4. Data volumes and data volume containers (Volume & Volume Container)
1) What data volumes and data volume containers mean. Data volumes separate data from the application: when the application is backed up, the data is not included; the data is backed up separately, because data and applications usually call for different backup policies.
A data volume container isolates the actual application from the host via an intermediate container: when the data's location on the host changes, only the data volume container needs to be modified, and the application containers are left untouched.
2) Using data volumes
Create a container with one data volume:
  docker run -v /data/path:/mount/path:ro --name dbdata ubuntu /bin/bash
Create a container with two data volumes:
  docker run -d -v /data/path1:/mount/path1:ro -v /data/path2:/mount/path2:ro --name dbdata ubuntu /bin/bash
Create an application container that uses those volumes:
  $ docker run -ti --volumes-from dbdata --name app ubuntu
Here "-v" mounts the host directory /data/path at /mount/path inside the container dbdata, making dbdata a data volume container. The actual application container, app, then obtains access to dbdata's volumes via "--volumes-from dbdata".
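The host side of this pattern can be sketched as follows. The path under /tmp and the file name are hypothetical, and the docker lines need a running daemon, so they are shown as comments:

```shell
# Prepare a host directory that will back the data volume:
mkdir -p /tmp/data/path
echo "hello" > /tmp/data/path/greeting.txt
# Mount it read-only into the data volume container, then share it:
#   docker run -v /tmp/data/path:/mount/path:ro --name dbdata ubuntu /bin/bash
#   docker run -ti --volumes-from dbdata --name app ubuntu
# Inside app, /mount/path/greeting.txt would show the same content;
# ":ro" makes the mount read-only from inside the container.
cat /tmp/data/path/greeting.txt    # prints hello
```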


