Container cloud technology selection: comparing Kubernetes and Swarm

Source: Internet
Author: User
Tags: k8s

At heart, both Swarm and k8s are container orchestration services: they abstract away the underlying hosts, start applications from pre-built images, and ultimately deploy them onto Docker hosts. So which one should we choose as our container cloud service? I think comparing Swarm with k8s (short for Kubernetes) is like comparing MySQL with SQL Server: the former is lightweight, quick to get running, and implements the core features, making it better suited to small-scale deployments; the latter is enterprise-class, full-featured, and supports more scenarios, making it better suited to enterprise-class Docker cloud deployments. Here are some of the comparisons I made between the two:
    1. Differences in design philosophy
Swarm focuses on deploying containers; k8s works one tier higher and focuses on deploying applications. In k8s, every operation on containers is shaped by the idea of serving the application. For example, a pod exists so that tightly coupled processes that should not be packed into a single container can be deployed as separate containers that share volumes and a network namespace, allowing them to "communicate" closely; a service exists to hide the network details of a pod (a collection of containers, described below) and give it a fixed access entry point, so that other applications can reach it easily. In addition, k8s is particularly good at managing Docker at large scale. To handle application deployment in complex scenarios, k8s has many more components than Swarm, and even where components look functionally similar, k8s usually supports more scenarios than Swarm. Take scheduling: Swarm has only three scheduling strategies (host load, number of containers already running on the host, and random assignment), while k8s offers more than twice as many. For example, it also has a port-conflict policy (port conflicts are a scenario that must be considered when deploying Docker at scale), a policy for conflicts between container-mounted volumes, a specific-host policy, and so on.
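As a rough illustration of the specific-host policy just mentioned, here is a minimal Kubernetes manifest sketch; the node label (disktype: ssd), the image, and the ports are assumptions for illustration, not values from the article.

```yaml
# Minimal sketch: pin a Pod to nodes labeled disktype=ssd (a "specific host" policy).
# Label, image, and port values are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: web-on-ssd
spec:
  nodeSelector:
    disktype: ssd            # only nodes carrying this label are considered by the scheduler
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
          hostPort: 8080     # the scheduler skips nodes where this host port is already in use
```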
    2. K8s is more complex to install, but adapts to more scenarios
Swarm is naturally integrated with Docker and is easy to install and use. In particular, from Docker 1.12 onward Swarm is built into the Docker engine, so once Docker is installed the Swarm deployment is already half done; Swarm operations are carried out through the Docker API, so anyone who has mastered the Docker command line can pick up Swarm quickly, basically reaching fluency within a week. K8s is also based on Docker, but many of its components are built around application deployment rather than the Docker API; they have to be planned and installed separately, and because many of the policies in those components exist to adapt to different deployment scenarios, you need to understand both your scenario requirements and the design logic of the components before deploying. The installation and learning curve is therefore much more tortuous than with Swarm.
    3. Docker container vs. Pod
In Swarm, the smallest unit that is created, scheduled, and managed is the Docker container. In k8s, the smallest unit is the pod (as in a pea pod), which consists of one or more containers placed together to implement a particular function. Containers inside a pod share volumes and a network namespace, and can talk to each other over localhost or through standard inter-process communication. What are the benefits of pods? Imagine a scenario: we have a container running a web application, and we now need to install a log plugin to collect the web logs. If we install the plugin inside the web application container, we face the following problems:
    • If the plugin is updated, the whole image has to be rebuilt even though the web application itself has not changed, because the two share one image;
    • If the plugin has a memory leak, the entire container risks being dragged down with it.
Installing the plugin in a separate, unrelated container is also awkward, because you then have to work out how the plugin's container can read the web container's logs. With a pod these problems are solved: put the log plugin and the web application in two containers inside the same pod and have them share a volume. The web application container only writes its logs to the volume, which makes them easy for the log plugin to read. At the same time, the two containers have their own images, so each can be updated without affecting the other.
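A minimal sketch of the pod just described, assuming an Nginx web container and a generic log-collector sidecar; the images, paths, and command are placeholders rather than anything prescribed by the article.

```yaml
# Minimal sketch: web container and log collector sharing one volume inside a Pod.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-collector
spec:
  volumes:
    - name: web-logs
      emptyDir: {}                     # scratch volume shared by both containers, lives with the Pod
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: web-logs
          mountPath: /var/log/nginx    # the web app writes its logs here
    - name: log-collector
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]   # stand-in for a real log agent
      volumeMounts:
        - name: web-logs
          mountPath: /logs             # same volume, mounted on the reading side
```

Because each container has its own image, either one can be rebuilt and rolled out without touching the other.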
    4. Load balancing inside the cluster
Swarm's built-in load-balancing mechanism is not widely used; most setups rely on Nginx plus Consul instead. Nginx runs as its own container, and Consul stores the network information (IP and port) of each Docker application. The Nginx image is declared in the Compose file with Consul's address specified in the Dockerfile; it pulls the applications' network information out of Consul and writes it into the Nginx configuration file as load-balancing parameters. The drawback of this model is that the configuration file inside the Nginx container does not follow changes to the applications' network information: if new containers are added, their IPs and application ports have to be added to the Nginx configuration by hand, or the Nginx container has to be rebuilt. Kubernetes improves on this considerably and has load balancing built in, along with a good mechanism for handling changes to container IPs. K8s load-balances through a service, which sits in front of pods (a pod contains containers, and the containers contain the application) and points to a set of pods that carry the same label. Each time a service is created, a record is written into the built-in k8s DNS server mapping the service's name to the service's IP. When something needs to access the application in the pods, it simply accesses the service by name; the pods' IPs are transparent to the caller, so however they change, load balancing is unaffected.
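A minimal sketch of such a service, assuming the pods carry the label app: web and the application listens on port 8080 (both are assumptions, not details from the article):

```yaml
# Minimal sketch: a Service that load-balances across all Pods labeled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # matches any Pod carrying this label, whatever its IP
  ports:
    - port: 80         # stable port exposed by the Service
      targetPort: 8080 # port the application listens on inside the Pod
```

Other pods in the same namespace can then reach the application at http://web (or web.<namespace>.svc.cluster.local through the cluster DNS), and traffic is spread across whichever pods currently match the label, regardless of their IPs.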
    5. Which is better suited to grayscale releases
Both support grayscale releases, but Swarm's grayscale release is all-or-nothing. When a Swarm update runs, every old container is replaced by the new version; if a problem with the new version appears during the replacement, the only option is to forcibly stop the update and then roll back, and the live application is affected during that window. K8s, on the other hand, has the replication controller mechanism, which lets a person control the pace of a grayscale release. During a release, I can have k8s use the replication controller to bring up a small number of new-version pods while reducing the number of old pods accordingly; the new pods start responding to user requests, and if they run smoothly I slowly increase the new-version count and decrease the old-version count until the new version has completely replaced the old. If a new pod has problems, it is taken offline immediately, so the online business is not affected. Because a k8s release can be intervened in by a human at every step, this approach is clearly better for a major release.
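As a rough sketch of such a controlled rollout, here is what it can look like with a Deployment, the successor to the replication controller the article refers to; the name, replica count, and image tag are example values.

```yaml
# Minimal sketch: a Deployment whose rolling update replaces old Pods a few at a time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 10
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # bring up at most one new-version Pod at a time
      maxUnavailable: 1      # take down at most one old-version Pod at a time
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:v2   # bumping this tag triggers the gradual rollout
```

During the rollout, kubectl rollout pause deployment/web, kubectl rollout resume deployment/web, and kubectl rollout undo deployment/web provide the human control points described above.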
    6. Elastic scaling
Elastic scaling means that the container deployment changes dynamically according to how heavily the hosts' hardware resources are loaded. For example, when CPU usage on a host is high, k8s can automatically adjust the number of pods in a deployment based on pod utilization to keep the service available; Swarm has no such capability.
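A minimal sketch of this behaviour using a HorizontalPodAutoscaler; the target Deployment name, replica bounds, and CPU threshold are assumptions for illustration.

```yaml
# Minimal sketch: scale the "web" Deployment between 2 and 10 replicas based on CPU load.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods once average CPU utilization exceeds 70%
```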
    7. Ecosystem
Swarm is Docker's official clustering solution; k8s is a powerful yet easy-to-use platform for deploying and managing container-based applications, built by Google. Compared with Swarm, k8s has a better understanding of how to manage containers. On GitHub, the k8s project's stars and forks are both very high, and the material you can find online is abundant. It was also because of k8s's ecosystem influence that Docker had to integrate k8s into the newly released Docker EE (Enterprise Edition).

Conclusion: To sum up, k8s, as an enterprise-class container cloud solution, is the one more worth our research. To borrow the industry's popular phrasing: Swarm knows containers, but k8s manages them better.
