"The editor's word" Now Docker technology is getting hotter and more companies are starting to deploy applications using Docker technology. This share is about how to take full advantage of Docker technology to deploy code to the online environment, as well as the problems and concerns that are encountered in using Docker.
Background
We started researching and using Docker in 2014, initially to solve the problem of bringing components and services online. It has gradually evolved into a complete solution that takes code from the development environment all the way to production deployment. Internally, we call this system Harbor.
The journey in practice
Version 1.0
Our first version used docker-registry to build a private image repository, plus a Python command-line tool for managing Docker hosts and containers, organized around the concepts of app -> service -> container and host clusters. An app consists of multiple services, each service consists of multiple containers, and each service has its own configuration.
The configuration includes memory, name, business description, image, number of containers, exposed ports, environment variables, and so on. The tool provides basic operations such as deploying containers, deleting containers, viewing containers, and adding host clusters. An app can be deployed across different host clusters.
The 1.0 architecture was fairly simple: we installed a Python-written agent on every Docker host, whose only job was to register container and host information in etcd to provide service discovery.
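Below is a minimal sketch of what such a registration agent might look like, not the actual implementation; the etcd key layout under /harbor/hosts/... and the endpoint addresses are illustrative assumptions. It polls the local Docker remote API and writes each container's information to etcd with a TTL, so stale records expire on their own if the host or agent goes down.

```python
# agent.py - illustrative sketch of a per-host registration agent.
import json
import socket
import time

import requests

ETCD = "http://127.0.0.1:2379"      # assumed etcd endpoint
DOCKER = "http://127.0.0.1:2375"    # assumed Docker remote API endpoint
HOST = socket.gethostname()
TTL = 30                             # seconds before a stale record expires


def report_once():
    # List running containers through the Docker remote API.
    containers = requests.get(DOCKER + "/containers/json").json()
    for c in containers:
        key = "/harbor/hosts/%s/containers/%s" % (HOST, c["Id"][:12])
        value = json.dumps({"name": c["Names"][0].lstrip("/"),
                            "image": c["Image"],
                            "status": c["Status"]})
        # Write the record with a TTL so dead hosts disappear automatically.
        requests.put(ETCD + "/v2/keys" + key, data={"value": value, "ttl": TTL})


if __name__ == "__main__":
    while True:
        report_once()
        time.sleep(TTL / 2)
```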
When the command-line tool performs a deployment, it calls the Docker API to place containers on different physical or virtual machines, based on the information in etcd and on each app's configuration, applying different placement policies.
The deployment strategy makes sure that containers belonging to the same app are spread across different hosts, so that a single host going down does not take the service offline. At the same time, each host's remaining assignable memory is tracked to guarantee that no host ends up running too many containers.
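A minimal sketch of these two placement rules is shown below, assuming a simple in-memory view of the hosts; the field names and the pick_host helper are illustrative, not Harbor's actual code.

```python
# scheduler.py - sketch of the placement rules described above: spread an
# app's containers across hosts, and respect remaining assignable memory.


def pick_host(hosts, app_name, container_mem):
    """hosts: list of dicts like
    {"name": "host-1", "free_mem": 8192, "containers": [{"app": "web", "mem": 512}]}
    container_mem: memory (MB) requested for the new container."""
    candidates = [h for h in hosts if h["free_mem"] >= container_mem]
    if not candidates:
        raise RuntimeError("no host has enough assignable memory")

    # Prefer hosts running the fewest containers of this app (anti-affinity),
    # then the host with the most free memory.
    def sort_key(h):
        same_app = sum(1 for c in h["containers"] if c["app"] == app_name)
        return (same_app, -h["free_mem"])

    chosen = sorted(candidates, key=sort_key)[0]
    # Reserve the memory so later placement decisions see updated capacity.
    chosen["free_mem"] -= container_mem
    chosen["containers"].append({"app": app_name, "mem": container_mem})
    return chosen["name"]
```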
App-service-container Logic:
The 1.0 structure is as follows:
Version 1.0 basically solved rapid deployment of application containers, but it had quite a few drawbacks:
- Users and developers had to write Dockerfiles themselves and build images with docker build, which meant learning Dockerfile syntax; the learning cost was high.
- Docker-related logic such as Dockerfiles and volume definitions ended up in users' code, polluting the codebase.
- There was no unified convention for image names and versions, so name collisions were a risk.
- There was no visual interface; once the number of apps and containers reached a certain scale, problems became hard to troubleshoot.
- There was no support for grayscale (canary) upgrades of an app's services.
- Upgrading a service meant modifying its configuration, such as the image version, environment variables, and exposed ports; if the change failed, rolling back to the previous version required having manually backed up the configuration file in advance.
Version 1.5
After a period of internal promotion, we released version 1.5 on top of 1.0.
Version 1.5 kept the app-service-container concept from 1.0, and mainly made the following improvements:
- We recommended that users and developers manage their code with Git.
- We introduced Jenkins and linked users' code to it: whenever code is pushed to the watched branch or a tag is created, the corresponding Jenkins job is triggered and the image is built automatically, giving us automatic code-to-image builds (a minimal sketch of such a hook-triggered build appears after this list).
- We used Tornado to build a visual web console and expose a REST API. The web interface integrates image listing from the private registry and lets a single image's versions be inspected; through it, users and developers can easily create an app and associate it with an image.
- We provided a unified base image; users only need to supply a Dockerfile defining the runtime environment and a start.sh script describing what to run after the container starts, ensuring that when the container starts, the service starts with it.
- Macvlan was used for the network.
- User resources are isolated: each user sees only their own apps, services, and containers.
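The sketch below shows, with Tornado, how a git web hook endpoint could receive a push or tag event and kick off an image build. The route, payload fields, and trigger_build() are assumptions for illustration, not the actual Harbor or Jenkins integration.

```python
# webhook.py - minimal Tornado sketch of a git web hook that triggers builds.
import json

import tornado.ioloop
import tornado.web


def trigger_build(repo, ref):
    # Placeholder: a real system would enqueue a Jenkins job or an internal
    # build task for the image bound to this repository/branch.
    print("building image for %s at %s" % (repo, ref))


class GitHookHandler(tornado.web.RequestHandler):
    def post(self):
        payload = json.loads(self.request.body)
        repo = payload.get("repository", {}).get("name", "")
        ref = payload.get("ref", "")   # e.g. refs/heads/master or refs/tags/v1.2
        # Only pushes to the watched branch or new tags trigger a build.
        if ref.startswith("refs/tags/") or ref.endswith("/master"):
            trigger_build(repo, ref)
        self.write({"status": "accepted"})


def make_app():
    return tornado.web.Application([(r"/hooks/git", GitHookHandler)])


if __name__ == "__main__":
    make_app().listen(8888)
    tornado.ioloop.IOLoop.current().start()
```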
Version 1.5 solved code-to-production deployment to some extent; users could see their apps and container resources, and the health of each container, at a glance. But as we promoted it internally, we found plenty that was still unsatisfactory:
- Users and developers had to be trained on Docker, Jenkins, and Harbor, and promoting each system or technology took a lot of explaining.
- Image name collisions were still a risk.
- Users still had to write Dockerfiles, keeping Docker logic in the code repository and polluting the code to some extent.
- Jenkins felt heavyweight, and users had to log in to the Jenkins system to watch the image build process.
- Apps and services had no concept of versions, so there was no version-based rapid upgrade or rollback.
- There was no good way to get inside a container; users could only connect over SSH after learning the IP address and password.
- Macvlan requires a dedicated network environment, with VLANs carved out specifically for the physical machines.
- There was no good mechanism for isolating yet sharing application resources: an app or container created by user A was visible only to user A, and if user A left, the resources had to be transferred manually.
Version 2.0: Building a private cloud environment
Version 2.0 added some new ideas on top of the previous versions. The overall approach is to build a private cloud with public-cloud thinking, and to cover an application's entire life cycle from development to production:
- User resource isolation with member management: user A can add members to an app or host cluster so they can help manage those resources, and ownership can be transferred to another user. This supports a basic form of teamwork.
- We removed the service concept and switched to app -> version -> container, plus app groups: a group contains multiple apps, each app has multiple versions, and each version can contain multiple containers. Each version records the image version, description, environment variables, exposed ports, volume mappings, and other advanced configuration (SSH and web SSH passwords). Every container is created from its version's information, which makes container upgrades and rollbacks easy to implement (see the sketch after this list).
- We modified the original docker-registry to add permissions, preventing duplicate image names, and introduced namespaces: a namespace can hold multiple images, which logically indicates that the images are related. For example, a department can create a namespace and place all of its images under it.
- Public and private clusters. A cluster is a set of Docker hosts, physical or virtual. Public clusters are open to all users, who can deploy their apps onto them; users and developers can also easily create their own private clusters, add Docker hosts, and deploy applications there, managing the private cluster themselves. Public cluster networks can use either Macvlan or NAT; for private clusters we currently provide only NAT, mainly to make access easier for developers and business units.
- One-click deployment of the Docker environment: on the hosts allocated to a business unit, Docker and the related components can be deployed with a single script.
- We dropped Jenkins and built image building into Harbor itself, integrated into a single interface. Instead of tedious Jenkins configuration, users and developers just select a base image on the image-creation page, enter the Git address, branch name, and image name, and write the image build script and image startup script in the interface. They then set the web hook URL provided by the system in the Git repository; the next code push automatically triggers a build, and the build process is shown on the page in real time. Image versions are forced to be named git tag + commit id + configuration change number, which makes it easy for users and developers to trace an image version back to the Git commit it came from. The screenshots below show the image-related operations:
Build log:
Image version list:
Creating an image:
Image build script:
Service startup script:
Images are divided into base images and business images. Base images include the handful of very basic images we provide, as well as business base images that business units derive from ours; a business unit can create its own base image, and its downstream business images inherit from it.
- Support for grayscale (canary) releases: after creating a new version of an app, developers or business units can upgrade selected containers first, and once testing passes, upgrade the rest.
- Containers can be accessed both through web SSH and through plain SSH. Users who do not want to reach container resources over SSH can access the inside of the container directly through the web to debug and investigate problems.
- Automatic integration with load balancing: our intelligent load-balancing system is built on Nginx, with Go and etcd used to generate and push out the Nginx configuration automatically.
- Event logs for operations on clusters, images, and applications, which makes issue tracking easy. Below is an application event log:
- A quick view of an application's deployment topology, for example:
The first layer is the application name, the second is the cluster the application is deployed to, the third is the application version, and the fourth is the container name and its running state. From here users can click into a specific cluster and create containers of a specific version directly, with immediate feedback; this is one of the features business units like most.
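As referenced in the app -> version -> container item above, here is a minimal sketch of how keeping the full configuration in each version makes upgrade and rollback a matter of recreating containers from a chosen version record; the data structures and helper names are assumptions, not Harbor's actual code.

```python
# versions.py - sketch of the app -> version -> container model: every version
# keeps the full runtime configuration, so upgrading or rolling back is just
# recreating containers from a chosen version record.

APP = {
    "name": "web",
    "versions": {
        "v1": {"image": "ns/web:v1", "env": {"ENV": "prod"}, "ports": [8080], "volumes": []},
        "v2": {"image": "ns/web:v2", "env": {"ENV": "prod"}, "ports": [8080], "volumes": []},
    },
    "containers": [],   # [{"id": ..., "version": ...}, ...]
}


def destroy_container(container):
    print("removing container %s" % container["id"])


def create_container(app_name, version_name, spec):
    print("creating %s container from %s" % (app_name, spec["image"]))
    return {"id": "%s-%s" % (app_name, version_name), "version": version_name}


def roll_to(app, version_name):
    """Upgrade or roll back: kill old containers, recreate from the version."""
    spec = app["versions"][version_name]
    for c in app["containers"]:
        destroy_container(c)
    app["containers"] = [create_container(app["name"], version_name, spec)]


roll_to(APP, "v2")   # upgrade
roll_to(APP, "v1")   # rollback uses exactly the same path
```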
Version 2.0 changed the architecture considerably compared with 1.5. First, we adopted a layered design:
- Infrastructure layer: Docker hosts or virtual machines, each running a service-discovery agent written in Go, go-registerd.
- Data layer: etcd, the database, message queues, and the docker-registry image repository. The service-discovery agent registers container and host information in etcd.
- Execution layer: asynchronous task processing, harbor-compute. Workers fetch tasks from the message queue and perform container creation, deletion, power operations, image builds, uploads, and so on, and can handle a large volume of container and image operation requests. Image builds run on dedicated servers; once an image is built, it is automatically pushed to the private registry.
- Scheduling layer, Marine: provides the API and is mainly responsible for scheduling each application's container deployments and image builds, as well as cluster and host management. When an app needs containers created, Marine makes resource decisions based on the selected cluster's usage and other configuration such as the number of containers, decides which host each container should be created on, and puts the corresponding tasks into the message queue (a minimal sketch of this task flow follows the architecture overview).
- User layer: Polaris, the user-facing open system, and Ursa, the administrator-facing resource management system. The overall structure is as follows:
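As a rough illustration of the scheduling/execution split described above, the sketch below shows the scheduler enqueueing container tasks and a compute worker consuming them. Python's in-process Queue stands in for the real message queue, and the placement decision is a placeholder; none of this is Harbor's actual code.

```python
# tasks.py - minimal sketch of the Marine -> message queue -> harbor-compute flow.
import json
import queue

mq = queue.Queue()   # stand-in for the real message queue


def schedule_create(app, version, count, hosts):
    # Scheduler side: decide a host for each container and enqueue one task each.
    for i in range(count):
        host = hosts[i % len(hosts)]            # placeholder placement decision
        mq.put(json.dumps({"action": "create", "app": app,
                           "version": version, "host": host}))


def compute_worker():
    # Worker side: pull tasks and perform the container operation on the host.
    while not mq.empty():
        task = json.loads(mq.get())
        print("on %(host)s: %(action)s container for %(app)s %(version)s" % task)


schedule_create("web", "v2", 3, ["host-1", "host-2"])
compute_worker()
```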
Since its launch in November 2015, version 2.0 has accumulated hundreds of applications, more than 200 images, and hundreds of hosts; it performs around 400 image builds per day, and on average more than 100 containers are created and destroyed daily. The registry holds roughly 100 GB of images.
Harbor has essentially become the open container platform inside Le Vision Cloud Computing Co., Ltd.; adding hosts, building images, and scaling, upgrading, and rolling back application containers are now fully self-managed by developers and business units.
For developers and business units, the learning cost has dropped from having to learn Docker, Jenkins, and Harbor to simply learning how to use Harbor, and a pile of old deployment and configuration manuals can be retired; teams only need to learn how to upgrade containers.
Next steps
As the community's container orchestration systems mature, we are also studying Kubernetes and Mesos and borrowing design ideas from the open source community.
Q&A
Q: Are these Docker containers deployed on physical machines or virtual machines? When scaling up, can you increase CPU in addition to memory?
A: Both physical and virtual machines. We provide a one-click script that installs Docker and the other components; a business unit simply runs the script. For scaling, we currently only expand memory; CPU is shared.
Q: How does the image build script produce an image? Do you run the script on top of the base image? How is information such as ports, storage volumes, and environment variables handled in these images?
A: We encapsulate the Dockerfile. Business units and developers do not need to know Dockerfile syntax; they just write an image build script. Harbor then generates a Dockerfile according to certain rules and calls docker build to produce the image. During this process, the image name and version are also generated according to the rules.
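Below is a rough sketch of what that encapsulation could look like: a generated Dockerfile that runs the user's build script on top of the chosen base image, plus a version tag derived from the git tag, commit, and configuration revision. The template, helper names, and tag format are assumptions for illustration, not Harbor's actual rules.

```python
# build.py - sketch of generating a Dockerfile from a user's build script and
# base image, then calling `docker build` and `docker push`.
import subprocess

DOCKERFILE_TEMPLATE = """FROM {base_image}
ADD . /app
WORKDIR /app
RUN sh build.sh
CMD ["sh", "start.sh"]
"""


def build_image(workdir, base_image, name, git_tag, commit, config_rev):
    # `name` is assumed to already include the registry host and namespace.
    # The version ties the image back to an exact commit and config revision.
    tag = "%s:%s-%s-%s" % (name, git_tag, commit[:7], config_rev)
    with open(workdir + "/Dockerfile", "w") as f:
        f.write(DOCKERFILE_TEMPLATE.format(base_image=base_image))
    subprocess.check_call(["docker", "build", "-t", tag, workdir])
    subprocess.check_call(["docker", "push", tag])
    return tag
```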
Q: For resource sharing, couldn't a department or group add a public account and push the resources that need to be shared to it?
A: We don't use public accounts. We use image repositories instead; a repository can have team members, which achieves the same effect.
Q: Having abandoned Jenkins, how do you handle multi-language builds, such as compiling Java and Go?
A: We don't need to care about the language. We currently provide a common Java base image, and anything needed for compilation can be done at build time. If other languages such as Go or Python are needed, the business unit can create its own base image.
Q: In Jenkins you would configure the JDK and Maven; do you have to install them yourself in the container?
A: These environments can be installed in advance in the business unit's base image, so they are available at build time.
Q: How are application upgrades and rollbacks implemented?
A: An application has its own concept of versions. Each application version records the image version, environment variables, exposed ports, volumes, and so on, so a rollback or upgrade ultimately means killing the old containers and creating new ones from the chosen version's parameters.
Q: Regarding Harbor and Jenkins: why not develop a lightweight version on top of Jenkins? Has your approach to the problem changed?
A: There are several reasons. We want Harbor to go from code to build to deployment in one pass; with Jenkins, the experience feels fragmented, and once a business unit has many services, it can get confusing which Git repository corresponds to which Jenkins configuration. The learning cost is also relatively high.
Q: Do you think developers' local environments don't need containers?
A: The local environment generally doesn't need containers, but developers who want to experiment with one can create a container from the base images we provide.
Q: I have recently been studying Magnum and Kubernetes and have some questions. For example, with pods A/B of service A running on minions A/B respectively, will a request reaching minion A's kube-proxy be forwarded to pod B on minion B? The important part of a service is that it acts as a load balancer and reverse proxy; does it directly replace software such as HAProxy/Nginx in that role?
A: We are also still studying Kubernetes, so I don't dare to make too strong a judgment; it is said that the proxy component has performance problems. We didn't use Kubernetes at first because it wasn't very stable and was changing quickly. The load-balancing layer is something we wrote ourselves, and more than 300 domain names across the network now run on it.
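For context, the automatic load-balancer configuration mentioned earlier works roughly like this sketch: read an application's container endpoints from etcd, render an Nginx upstream block, and reload Nginx. The real implementation is in Go; the key layout, file paths, and template here are assumptions written in Python for illustration.

```python
# lb_sync.py - sketch of generating Nginx config from container endpoints in etcd.
import subprocess

import requests

ETCD = "http://127.0.0.1:2379"   # assumed etcd endpoint

UPSTREAM_TEMPLATE = """upstream %(app)s {
%(servers)s
}
server {
    listen 80;
    server_name %(domain)s;
    location / { proxy_pass http://%(app)s; }
}
"""


def render_conf(app, domain):
    # Each value under /harbor/apps/<app>/endpoints is assumed to be "ip:port".
    resp = requests.get(ETCD + "/v2/keys/harbor/apps/%s/endpoints" % app).json()
    nodes = resp.get("node", {}).get("nodes", [])
    servers = "\n".join("    server %s;" % n["value"] for n in nodes)
    return UPSTREAM_TEMPLATE % {"app": app, "servers": servers, "domain": domain}


def sync(app, domain):
    # Write the rendered config and ask Nginx to reload it.
    with open("/etc/nginx/conf.d/%s.conf" % app, "w") as f:
        f.write(render_conf(app, domain))
    subprocess.check_call(["nginx", "-s", "reload"])
```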
Q: How long does a build take on average?
A: Most builds are Java, Dubbo, Python, or Go and generally take about two minutes. Some users have enabled automatic builds, so the image is already built before they even notice; when upgrading, they just select the corresponding image version.
Q: After adopting Harbor, do Dockerfiles no longer need to be written?
A: Correct. Users and business units only need to understand basic container concepts (exposed ports and volumes); they do not need to write Dockerfiles.
Q: Is every commit of an app a version? Can you publish after each round of testing is completed?
A: Apps do not have a notion of commits; you probably mean images. In our design, an image corresponds to a Git repository and branch. When a push or tag operation happens, a build is triggered automatically, and what the build does is determined by the image build shell script the user wrote. In general, we recommend that business units produce images that are independent of the test and build environments: an image is just an image, and only the application distinguishes between test and production environments.
Q: You mentioned "building a private cloud with public-cloud thinking." What are the technical differences between building public and private clouds? Can you share your thoughts?
A: Mainly, we treat our internal users as if they were external users, and we keep that in mind in product design. This is the direction we keep exploring.
Q: You have both physical and virtual machines; in what scenarios do you choose to run on a physical versus a virtual machine? Is the choice driven by performance?
A: That depends entirely on the needs of the business side; Harbor itself doesn't care. It lets the business create its own private cluster and add its own virtual machines or physical hosts.
===========================
The above content is organized from the group share on the evening of March 3, 2016. About the speaker:
Zhang Jie, e-mail: zhangjie0619@yeah.net, QQ: 695160664, senior R&D engineer at Le Vision Cloud Computing Co., Ltd. and lead of the container cloud team, with 4 years of technical team management experience and a strong curiosity about new technology. DockOne organizes weekly technology shares; interested readers are welcome to add WeChat liyingjiesz to join the group, and can leave us a message with topics they would like to hear or share.