"Editor's words" describes the needs of the Meizu cloud scene, how to introduce Docker, the choice of network, storage, mirroring technology, how to land and so on. Main content:
- Meizu cloud business scenarios
- Design concepts behind Meizu cloud's Docker
- Choice of network, storage, and image technology
- How Docker landed in production
1. Meizu Cloud Business Scenario
The demand scenarios for the Meizu cloud can be divided into the following areas:
- Image requirements: applications running on Docker need images. We write a Dockerfile for each image and push the result to the image registry, which stores all images used by internal business operations. We then warm these images up on the Docker hosts to speed up container creation;
- Machine requests: we carry historical baggage from our use of KVM, so machine requests go through two different channels, KVM and Docker;
- Business release: this is the most common scenario. One problem with moving to Docker is how to hide the differences between KVM and Docker machines; our self-developed release system hides them automatically, and each type goes through its own release process;
- Monitoring: monitoring the life cycle of containers.
From these scenarios and requirements we can see that before adopting Docker we already carried a historical burden, not only from the use of KVM but, even more, from the operations tool chain and platforms that had been built.
Meizu operations and maintenance system
Our operations system is already complete: standardization, automation, platform tooling, and security have all been built out. Introducing Docker into such a mature system means we cannot simply copy conventional Docker practice. Below we abstract a few concepts that determined how we use Docker and how we carried out our Docker practice.
2. Docker Design Concepts
Containers created by Docker run for a long time
This concept is especially important: once created, our Docker containers run for quite a long time, taking on the character of a virtual machine.
The main historical reason for this is that our monitoring system records a VM's entire history. When a problem occurs we need to examine the whole behavior of the machine identified by its IP (memory, CPU, disk, and so on) over the past week. If a container were killed soon after creation and replaced by a brand-new one on every release, all historical monitoring data would be lost, which is unacceptable.
Containers have independent, unique IPs
Each container has a lifelong, unique identifier: its intranet IP. This IP never changes from the container's creation to its final destruction.
Containers on different hosts communicate over a Layer 2 network and are isolated with VLANs
SSH stays open
This is mainly because operations and development staff are used to KVM-based workflows; if we simply shut SSH down they would complain, so in the end we kept SSH open, with permission control and auditing done through a bastion host.
That is how we started our Docker practice: by making containers behave like KVM virtual machines, we met the needs of the business and fit into the existing operations platform.
3. Technology selection
We mainly made technology choices in networking, storage, and images.
Network: OVS & VLAN
Our network is built at Layer 2, using OVS plus VLANs.
- OVS is installed on the host, and a bridge is configured on OVS
- The container's NIC is attached to the OVS bridge
- VLAN tags are set on the OVS bridge, and OVS handles VLAN encapsulation and decapsulation
- OVS connects to the physical switch through the physical NICs in active-backup bond mode, ensuring high availability
- The host's physical NICs connect to the switch in trunk mode
- Containers connect to OVS through OVS internal ports
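As an illustration of the wiring described above, the commands a provisioning script might issue can be sketched as follows. The bridge, port, and namespace names are hypothetical, and the exact flags Meizu uses are not given in the article; this is a minimal sketch of the internal-port plus VLAN-tag approach.

```python
def ovs_attach_commands(bridge, port, vlan_tag, netns):
    """Build the shell commands that attach a container to an OVS
    bridge via an internal port tagged with a VLAN (all names are
    illustrative, not Meizu's actual configuration)."""
    return [
        # create an internal port on the bridge and tag it with the VLAN
        f"ovs-vsctl add-port {bridge} {port} tag={vlan_tag} "
        f"-- set interface {port} type=internal",
        # move the port into the container's network namespace
        f"ip link set {port} netns {netns}",
    ]

for cmd in ovs_attach_commands("br0", "c101", 101, "ns-c101"):
    print(cmd)
```

On egress to the trunked physical NIC, OVS strips and re-adds the tag as traffic crosses the bridge, which is what keeps containers in different VLANs isolated from each other.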
Storage: devicemapper + LVM
- For Docker's storage driver we chose devicemapper, which appeared relatively early and is comparatively mature and stable. Under devicemapper we use a raw block device directly as the pool to improve IO performance.
- Data-disk storage uses LVM: the disk is divided into multiple logical volumes, and each container is allocated one volume as its data disk. Besides limiting the space a container can use, this also lets us resize a container's volume online.
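The per-container data volume flow can be sketched as the LVM commands a management script might generate. The volume-group and volume names are invented for illustration, and the ext4/resize2fs choice is an assumption, since the article does not name the filesystem.

```python
def data_volume_commands(vg, lv, size_gb, grow_to_gb=None):
    """Sketch the LVM commands for carving out a container data
    volume and optionally growing it online (hypothetical names;
    ext4 is an assumed filesystem choice)."""
    cmds = [
        f"lvcreate -L {size_gb}G -n {lv} {vg}",   # allocate the logical volume
        f"mkfs.ext4 /dev/{vg}/{lv}",              # format it for the container
    ]
    if grow_to_gb is not None:
        # online resize: grow the LV first, then the filesystem on top
        cmds += [
            f"lvextend -L {grow_to_gb}G /dev/{vg}/{lv}",
            f"resize2fs /dev/{vg}/{lv}",
        ]
    return cmds

for cmd in data_volume_commands("vg_data", "lv_c101", 50, grow_to_gb=80):
    print(cmd)
```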
Image storage and synchronization
- Image management uses Distribution (the Docker registry), fronted by LVS for load balancing to keep the registry highly available
- Image storage uses Ceph distributed storage, providing high performance for image reads and writes as well as reliability for image storage
- Images are synchronized between data centers through Distribution's notification mechanism plus a back-end synchronization mechanism.
Monitoring
Monitoring and alerting is one of the core functions of an operations system. We already have a very mature monitoring and alerting platform, and operations and development staff are accustomed to it. Re-developing a monitoring and alerting platform would take a lot of effort and would not make much sense.
We run an agent inside each container that collects CPU, memory, and network IO statistics from /proc and reports them to the monitoring and alerting platform. By default, /proc inside a container shows the host's information; we override this part of /proc with statistics computed from the container's cgroups.
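The per-container reporting step can be sketched as follows, assuming cgroup v1 memory files. The file contents are passed in as text so the calculation is visible, and the field names are our own, not the platform's actual reporting schema.

```python
def memory_stats(usage_text, limit_text):
    """Compute container memory stats from the contents of the cgroup
    files memory.usage_in_bytes and memory.limit_in_bytes. A real
    agent would read these from the container's cgroup under
    /sys/fs/cgroup/memory/ and ship the result to the alerting
    platform."""
    usage = int(usage_text.strip())
    limit = int(limit_text.strip())
    return {
        "usage_mb": usage // (1024 * 1024),
        "limit_mb": limit // (1024 * 1024),
        "used_pct": round(100.0 * usage / limit, 1),
    }

# example with a 1 GiB limit and 512 MiB in use
print(memory_stats("536870912\n", "1073741824\n"))
```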
4. Docker Practice in Production
Image creation
We create images through our platform. Building directly from a Dockerfile also works, but is less controllable. Going through the platform lowers the learning cost and improves image quality and security; for example, we can restrict which packages may be installed and which ports must not be opened.
Container creation
On our platform a container looks the same as a virtual machine; the only difference is the type, KVM or Docker. Everything else works exactly as before, so the learning cost for operations and development is very low.
Elastic Scaling
- Containers can be scaled up and down to improve resource utilization; note that scaling down may make a service unavailable
- Businesses scale out horizontally, with multiple containers delivered to a business party at once
Container Publishing
We release to containers the same way as to KVM virtual machines: a WAR package is uploaded to Maven, and the virtual machine or container pulls the package down from Maven and runs it.
Summary
- Provides a user experience similar to a KVM virtual machine
- Use Docker for high-density deployments
- Fast Elastic Scaling
In introducing Docker containers, our idea was to provide a user experience close to a KVM virtual machine, so that users can adopt containers without friction. We introduced containers for their light weight and low overhead: they allow high-density deployment, squeeze more out of physical machines, and improve resource utilization, thereby saving costs. As for elasticity, the lightweight and convenient container lets us create and destroy instances quickly and meet the elastic needs of our business.
Q&A
Q: How does the LVS load balancer work with containers?
A: At this stage we use containers as virtual machines, each with a fixed IP, so LVS is used in the traditional way, with a VIP configured on the container; in full-NAT mode that is not required.
Q: Which orchestration tool does Meizu cloud prefer, and is there a hybrid cloud plan?
A: At this stage we integrate with our original platform. What we discussed today is our internal private cloud; the public cloud is in the development and testing phase.
Q: How do you handle scaling down?
A: For CPU and memory we modify the cgroup configuration; disks are shrunk through LVM.
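The cgroup adjustment mentioned here can be sketched as the file writes involved. The paths follow the conventional Docker cgroup v1 layout and are an assumption, not necessarily Meizu's actual layout.

```python
def cgroup_writes(container_id, cpu_quota_us=None, mem_limit_bytes=None,
                  root="/sys/fs/cgroup"):
    """Return the (path, value) pairs that would resize a running
    container's CPU and memory limits under cgroup v1 (assumed
    Docker-style layout; a real tool would write these files)."""
    writes = []
    if cpu_quota_us is not None:
        writes.append((f"{root}/cpu/docker/{container_id}/cpu.cfs_quota_us",
                       str(cpu_quota_us)))
    if mem_limit_bytes is not None:
        writes.append((f"{root}/memory/docker/{container_id}/memory.limit_in_bytes",
                       str(mem_limit_bytes)))
    return writes

# cap CPU at half a core (50ms quota per 100ms period) and memory at 1 GiB
for path, value in cgroup_writes("abc123", cpu_quota_us=50000,
                                 mem_limit_bytes=1 << 30):
    print(path, value)
```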
Q: How are centralized log storage and log viewing implemented?
A: We have a unified log management center based on ELK. Logs are sent to the log center for storage and can be viewed on the management platform.
Q: "By default, the proc inside the container shows the host's information, and we can override this part of the proc information by getting the statistics in the Cgroup" is this agent your self-developed? Is it convenient to say the details? Thank you!
A: The simple point is that through the interception system and programs to read the proc directory cpuinfo, Meminfo and other resource files, let it read us through the Cgroup calculated files, so as to get the correct use of container resources.
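Conceptually this is similar to what LXCFS does: render a container-scoped view of files like /proc/meminfo from cgroup values. A minimal sketch of the rendering step, with only two fields shown:

```python
def render_meminfo(limit_bytes, usage_bytes):
    """Render the first fields of a container-scoped /proc/meminfo
    from cgroup memory values (values in kB, as in the real file).
    A real interceptor would serve this text when a program inside
    the container reads /proc/meminfo."""
    total_kb = limit_bytes // 1024
    free_kb = (limit_bytes - usage_bytes) // 1024
    return (f"MemTotal:       {total_kb} kB\n"
            f"MemFree:        {free_kb} kB\n")

# container with a 1 GiB cgroup limit and 512 MiB in use
print(render_meminfo(limit_bytes=1 << 30, usage_bytes=512 << 20))
```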
Q: Between VM-like containers and native containers, what are the business scenarios and which do you prefer?
A: Because we carry the historical baggage of virtual machines, our rollout plan chose VM-like containers; without that burden, you can go straight to native containers.
Q: Are there stateful and stateless containers? If a container is stateful, how is distributed storage handled?
A: Stateful container storage works in two ways: one is to mount a local directory into the container, the other is to mount distributed storage inside the container.
Q: What does the capacity system mainly monitor? Can it anticipate shortfalls in advance?
A: It monitors the usage of business resources. It can give early warning when business capacity is insufficient or wasted.
The above content was organized from the group sharing on the evening of October 25, 2016. Speaker:
Lin Chonghong joined the Meizu R&D Center in 2014 as an architect, responsible for developing Meizu cloud platform virtualization and the Meizu log platform, covering KVM virtualization, Docker containerization, distributed storage, and the centralized log system.