Docker's Applications at Cloud Housekeeping

Source: Internet
Author: User
Tags: docker run


Our company currently uses Docker on a large scale: all applications except databases run in Docker containers. Below I will share some of the ways we use Docker.

First, some background on the company. We are a small startup with a limited number of servers, and we introduced Docker to solve a number of problems.

 

Let's take a look at the problems we encountered before using Docker:

1. The online (production) environment and the test environment were inconsistent, so features that passed testing could still produce bugs after launch.

2. Deploying a new project was cumbersome: after the runtime environment was deployed in batches, configuration parameters still had to be modified by hand for each project's particular situation.

3. Deploying a new project environment took a long time, tens of minutes or more for some projects.

4. Differences in operating-system versions caused problems during batch deployment.

5. Environments could not be deployed across platforms.

These were the problems we set out to solve.

Here I will give a brief introduction to Docker.

Docker is an open-source containerization project born in early 2013. It started as an internal side project at dotCloud, later joined the Linux Foundation, is released under the Apache 2.0 license, and is implemented in Go, the language released by Google.

Docker provides containers in which applications run: an application and its dependencies are packaged into a portable container that can then be shipped to any Linux machine.

Docker extends Linux Containers (LXC) with a high-level API that gives each process its own lightweight, isolated environment, a concept similar to virtual machines.

With that introduction out of the way, let's look at how we use Docker. To recap the background: we are a small startup with only a few servers, and we do not use Docker cluster-management tools such as Kubernetes or Swarm.

To make Docker deployment convenient, a private registry is generally needed to store images. We run our own private registry; let's look at what it contains.

Our registry stores application-service images (such as Tomcat and Nginx), API-service images, and NoSQL images (such as Redis, MongoDB, and Elasticsearch).

These are environment images packaged according to our actual needs: whatever services a new project requires, we simply pull the corresponding images from the private registry for rapid deployment.

With the registry in place, let's see how we build the images.

We build images with Dockerfiles; each environment has its own Dockerfile, which can be adjusted as needed.

Take one of our application-service environment images (Nginx + PHP) as an example. The image is produced as follows:

1. Pull the official PHP 5.6 image from Docker Hub as the base image.

2. Install Nginx and the required PHP extensions on top of the base image.

3. Adjust the Nginx and PHP configuration.

4. Build a dedicated image for the service.

5. Push the resulting image to the private registry.

Let's take a look at the company's Dockerfile and the image building command:

Dockerfile content:

 

FROM php:5.6.31-fpm

RUN apt-get update && apt-get install -y \
        nginx \
        libfreetype6-dev \
        libjpeg62-turbo-dev \
        libmcrypt-dev \
        libpng12-dev \
        libxml2-dev \
        libssl-dev \
        git \
        vim \
    && pecl install redis mongodb mongo \
    && docker-php-ext-enable redis mongodb mongo

COPY ./nginx_vhost_conf/* /etc/nginx/sites-enabled/

docker build -t hub.yunjiazheng.com/front_web:v1.0 .    # build the image

docker push hub.yunjiazheng.com/front_web:v1.0    # push the image to the private registry

Next, let's take a look at how we can quickly deploy the environment using images.

First, when a server's operating system is installed, Docker is installed as part of system initialization.

On the server, you only need to run docker pull to fetch an image and docker run to start a container from it, which quickly deploys the desired environment.

 

# docker pull hub.yunjiazheng.com/front_web:v1.0

# docker run -d -p 80:80 hub.yunjiazheng.com/front_web:v1.0

Docker deployment commands

Let me explain the two commands:

docker pull hub.yunjiazheng.com/front_web:v1.0

This pulls the front_web image, version v1.0, from the private registry hub.yunjiazheng.com.

docker run -d -p 80:80 hub.yunjiazheng.com/front_web:v1.0

This starts the container: -d runs it in the background (detached), and -p 80:80 maps port 80 on the host to port 80 inside the container.

In this way, a required environment is deployed.

Given the deployment process above, one question arises: how do containers started from the same image distinguish between the test environment and the online environment?

To differentiate the environments containers run in, we use the Cloud Housekeeping O&M platform.

The Cloud Housekeeping O&M platform is a platform we developed in-house; it integrates environment management, configuration management, release management, task management, and other functions.

In environment management, multiple environments, such as beta and online, are created first.

After the environments are created, different configuration parameters are added for each one. During a release, selecting the host, the image, and the target environment is enough for the platform to deploy a complete environment automatically.

For example, suppose server A is chosen to host the A1 project's test environment:

The O&M platform automatically logs on to server A, pulls the environment image the A1 project needs, pulls the A1 project's code, fetches the test-environment parameters configured for A1 on the platform, and then starts the container, deploying a complete runtime environment automatically.
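To make this concrete, the sketch below shows roughly what such a release amounts to in commands on server A. The image name is the one from our registry above; the code repository address, directory paths, configuration API address, and the volume mount used to attach the code are illustrative placeholders, not our real setup:

# pull the environment image the A1 project needs from the private registry
docker pull hub.yunjiazheng.com/front_web:v1.0

# pull the A1 project code (placeholder repository and path)
git clone git@git.example.com:projects/a1.git /data/www/a1

# fetch the test-environment parameters configured for A1 on the platform (placeholder API)
curl -s "http://oam-platform.example.com/api/config?project=a1&env=beta" -o /data/www/a1/config/env.conf

# start the container, attaching the code via a volume mount
docker run -d -p 80:80 -v /data/www/a1:/var/www/html hub.yunjiazheng.com/front_web:v1.0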

Let's take a look at our environment management interface:

 

The following is the interface for managing environment parameters:

 

 

Configure different parameters for different environments.

The configuration-management module of the O&M platform manages the configuration for the online and test environments: you can add, delete, and modify items such as database and Redis connection information used by the application code.

The O&M platform exposes the test-environment and online-environment configuration through APIs. When a container starts on a server, it calls the configuration API that matches the server's type, obtains the corresponding parameters, and is thereby deployed into the right application environment.

 

The implementation logic is roughly shown in the figure above.
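As an illustration of this start-up behaviour, the sketch below shows an entrypoint-style script that fetches its parameters from such a configuration API before starting the packaged services. The API address, the SERVER_TYPE variable, and the config file path are assumptions for the example, not the platform's actual interface:

#!/bin/bash
# SERVER_TYPE is assumed to be injected per host, e.g. "beta" or "online"
SERVER_TYPE=${SERVER_TYPE:-beta}

# fetch the configuration that matches this server's type (placeholder API and path)
curl -s "http://oam-platform.example.com/api/config?type=${SERVER_TYPE}" -o /var/www/html/config/env.php

# start the services packaged in the image
php-fpm -D
nginx -g 'daemon off;'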

Next, let's look at how a deployed application appears in the O&M platform interface:

Here, the host is the server the release went to, the version is the version of the image the container is running, and the status is the container's running status. Containers can be managed remotely from this page.

Currently, at Cloud Housekeeping only the databases run directly on the operating system. All other application services are containerized, each project service has a corresponding image, and a service can be deployed in a few seconds.

The O&M platform manages containers automatically by calling the Docker API on each server.
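For reference, the sketch below shows the kind of standard Docker Engine API calls this involves, issued with curl against the local Docker socket; the container name front_web is a placeholder:

# list the running containers on a server
curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# restart a specific container by name or ID
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/front_web/restart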

Now let's take a look at what benefits we get after introducing Docker:

1. Consistent runtime environments. The online environment and the test environment use the same image, so once a feature passes testing, no bugs appear after launch due to environment differences.

2. Deploying new projects is convenient, and automatic deployment no longer fails because of operating-system differences.

3. Fast deployment of new projects: a project environment can be brought up in seconds.

4. Once a service image is built, it can be deployed repeatedly, which makes horizontal scaling of the service quick (see the sketch after this list).

5. Cross-platform deployment is supported.
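As mentioned in point 4 above, scaling a service out is mostly a matter of starting more containers from the same image. A minimal sketch, with illustrative container names and host ports:

# two instances of the same service image on different host ports
docker run -d --name front_web_1 -p 8081:80 hub.yunjiazheng.com/front_web:v1.0
docker run -d --name front_web_2 -p 8082:80 hub.yunjiazheng.com/front_web:v1.0
# a load balancer (for example an Nginx upstream) would then spread traffic across ports 8081 and 8082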

At present some functions of our O&M platform are still incomplete; once they are improved, we plan to open-source the platform.

That is an overview of how our company uses Docker. If there is another opportunity to share later, we will present our O&M platform in more detail.

 

Thank you for watching.

Q1: Have you used any orchestration tools?

 

Currently the company does not use Docker orchestration tools; the O&M platform automates Docker management through the Docker API.

 

Q2: How does your company handle the case where the container is still running but the service inside it has died? How do you ensure the service remains available externally?

 

Currently we have some Python scripts that monitor the resources inside containers. The O&M platform will integrate monitoring and alerting to watch both the resources and the services inside containers.
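Our actual scripts are written in Python; the shell sketch below only illustrates the kind of check they perform, namely verifying both that the container is running and that the service inside it still answers. The container name and health-check URL are placeholders:

#!/bin/bash
if [ "$(docker inspect -f '{{.State.Running}}' front_web 2>/dev/null)" != "true" ]; then
    echo "container front_web is not running"                 # would trigger an alert
elif ! curl -sf --max-time 5 -o /dev/null http://127.0.0.1:80/; then
    echo "container is up but the service is not answering"   # the case described in the question
fi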

 

Q3: What network do you use? How are calls made across hosts? Does the IP address change after an instance is destroyed and recreated?

 

Currently we use Docker's default network. Cross-host calls go through container ports mapped to the host, so changes to a container's internal IP address do not affect callers.
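In other words, callers always use the host's address and the mapped port rather than the container's internal IP. A minimal sketch, with an illustrative host port:

# on host A: publish the containerized service on the host's port 8080
docker run -d -p 8080:80 hub.yunjiazheng.com/front_web:v1.0

# on host B: call the service through host A's address and the mapped port
curl http://<host_A_IP>:8080/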

 

Q4: Are logs kept inside the container? What if R&D needs the logs?

 

For projects that produce logs, the code and logs are mapped to the host machine, and R&D works with them there. Our O&M platform will later also support viewing specified files and logs directly.
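A minimal sketch of mapping a directory out to the host so logs can be read there; the paths are illustrative, not our actual layout:

docker run -d -p 80:80 \
    -v /data/logs/front_web:/var/log/nginx \
    hub.yunjiazheng.com/front_web:v1.0
# Nginx logs written to /var/log/nginx inside the container now appear under /data/logs/front_web on the host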

 

 

Q5: How do you handle persistent storage with Docker?

Important applications such as databases do not run in containers, and their data is also backed up automatically on a regular schedule.

Q6: How long does it take to build and release an Nginx + PHP container?

It is very fast. Images and code are pulled over the internal network. The first deployment may take 2 to 3 minutes; if the image has not changed, subsequent releases take only seconds.
