Docker's pace: DevOps and OS-oriented


The development of cloud computing over the past decade has opened new opportunities for the sharing economy in IT, and the rise of the mobile Internet over the past five years has posed new challenges to IT architecture on many fronts. New challenges bring new opportunities and new energy: in a short time, terms such as Docker, microservices, DevOps, and lean R&D have spread across the entire IT industry. In the infrastructure field, the dominance of a few giants and the technical barriers they maintain often hold back newcomers and latecomers. Faced with ever-evolving business needs, how can software providers treat these challenges as opportunities?

It is often the leaders of an era who sense the winds of change before a revolution arrives. Looking back at history, we can roughly see that new ideas always seem a little whimsical at first, yet they keep forging ahead. Behind these ideas we can always find companies trying to do amazing things: they are radical, they are pioneering, they go from 0 to 1. Docker is just such an example.

So far, history has given Docker a little more than three years. Throughout these three years, Docker has taken "Build, Ship, Run" as its mission: to help users build, ship, and run any application.

Summing up Docker's three years, it is easy to see its pace:

  • In the first year, it focused on software builds, connected the downstream build pipeline, and created the image ecosystem.
  • In the second year, it served container management and release-scheduling platforms, building out the delivery process.
  • In the third year, it integrated enterprise resources, rounded out platform features, and started on application orchestration.

Now, looking at the past half year, if we continue to read Docker's moves, we find that its development no longer focuses only on the "Build, Ship, Run" of applications; its efforts in two other areas are hard to ignore:

  • Promoting the DevOps process
  • Extending management capability toward the OS

Docker promotes DevOps

In IT, DevOps is a culture that emphasizes closer collaboration and communication between development, operations (O&M), and other teams, so that software products mature quickly while staying secure and under control. Seen against Docker's mission, the DevOps concept is a very good match: Docker is well suited to accelerating and safeguarding the software lifecycle. Looking at the industry over the past few years, Docker as a tool is indeed helping enterprises practice DevOps, and as the tool is polished, its visible value is promoting DevOps to an ever larger audience.

Introducing the DevOps value Docker brings in terms of software builds, CI/CD, and so on would hardly be news. If you follow Docker's latest developments, you will not have missed the explosive announcement that orchestration is now natively integrated into Docker. After the news broke at DockerCon 2016, rumors and speculation about orchestration wars and a split in the container ecosystem ran rampant. In my view, orchestration is only the form; what Docker expects from DevOps goes far beyond that, and its current actions already show it.

Natively integrated orchestration

The news that Docker launched SwarmKit and natively integrated orchestration capabilities is, I believe, an unfriendly message to other distributed platforms targeting container orchestration (such as Kubernetes or Mesos + Marathon): a tool and a vendor with a huge user base in the container ecosystem has finally reached up into the northbound ecosystem. At first glance that reading is true, but if you review the issue from the DevOps perspective, you may come away with something different.

DevOps is a relatively new cultural concept. The value, large or small, that implementing DevOps brings is generally hard to measure; it is usually judged by a simple comparison with an existing, entrenched process. In the PaaS field, people are used to associating PaaS with DevOps, and PaaS does greatly simplify how traditional O&M staff manage applications after release. That is why platforms such as Kubernetes are indeed sought after by traditional O&M staff, who seem to glimpse the dawn of being released from routine operations.

However, coming back to DevOps itself, the beneficiaries can be more than just "O&M staff": there is value for developers as well. Some may object: doesn't that simply mean developers take on more work, including the dirty, hard, and tiring jobs? On a traditional IT architecture, without sufficient tooling to assist, I am afraid that would be the case, and DevOps would not be feasible.

Nowadays, many things look simple enough in the Docker world. With networking, storage, and security issues addressed, SwarmKit helps Docker greatly lower the bar for users to adopt containers and practice DevOps. To date, software delivery inside most enterprises involves three departments: development, testing, and O&M. Docker's thinking is much simpler than one might imagine: it strives to be the simplest possible tool, so that a single Docker tool can cover development, testing, O&M, and more. If Docker can provide a complete end-to-end tool chain at the cost of only the developers' own effort, then engineers can readily take on the DevOps role: developers fold the O&M mindset into the development process and use Docker's tooling to carry the software through its entire lifecycle. The consistency between development and deployment environments and the completeness of the orchestration features Docker brings are bound to greatly reduce communication cost and resource overhead within a team. I find it hard to believe enterprises could turn a blind eye to such an obvious value proposition when making IT decisions.
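
To make this concrete, here is a minimal sketch of the swarm-mode workflow that SwarmKit brings to the Docker CLI (the service name, image, port, and replica counts below are placeholders chosen for illustration, not taken from the article):

    # Turn this engine into a single-node swarm manager
    docker swarm init
    # Schedule 3 replicas of a web service and publish port 8080
    docker service create --name web --replicas 3 --publish 8080:80 nginx
    # Inspect and scale the service with the same tool used for development
    docker service ls
    docker service scale web=5

One tool and one workflow, from a developer's laptop to a cluster: that is the simplicity the argument above relies on.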

DevOps does not have to mean running a PaaS from end to end. Compared with operating a heavyweight PaaS platform, where it is debatable whether the effort spent operating the platform itself puts the cart before the horse, Docker's lightweight, built-in approach at least offers DevOps a new way of thinking.

Development-driven monitoring

Docker implements users' orchestration requirements in a lightweight way, and on the surface that looks glamorous. But it is worth thinking harder about orchestration in general: does orchestration in the style of Kubernetes or SwarmKit focus only on managing the application at runtime? If it is limited to runtime, it is limited to application O&M and lacks input from the development side, and the gap between development and O&M remains as wide as ever.

Traditional PaaS platforms such as Cloud Foundry and OpenShift can largely manage a running application, but an application's lifecycle is usually more complex than that: the monitoring, coordination, scheduling, and fault recovery that follow are all difficulties to overcome. In traditional enterprises, when problems arise these operations tasks inevitably trace back to the developers. In that traditional PaaS context, letting the DevOps culture influence more of the software lifecycle could save a great deal of cost. A simple example: on traditional PaaS and container orchestration platforms, application monitoring is hard to make both universal and accurate. For some applications the platform's generic monitoring is not too coarse-grained, but it misses the point: the fine-grained metrics it provides do not target the user's real pain points. When designing monitoring, O&M staff cannot satisfy every application's "personalized" requirements in a general-purpose way, so trade-offs are inevitable.

If you have been following the latest Docker 1.12, you may have noticed that:

The Dockerfile now supports a new instruction, HEALTHCHECK, which defines a health check for the application.
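
As a rough illustration (the base image and the /health endpoint below are hypothetical, not taken from the article), a developer can declare the check directly in the Dockerfile:

    # Hypothetical application image; it is assumed to contain curl and to
    # expose a /health endpoint on port 8080.
    FROM my-web-app:latest
    # Docker runs this command periodically and marks the container
    # healthy or unhealthy based on its exit code.
    HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
        CMD curl -f http://localhost:8080/health || exit 1

Once an image is built this way, docker ps reports each container's health status alongside its state.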

This looks like a casual move by Docker, but it is genuinely striking: it bridges the gap between development and O&M, at least in the field of application monitoring. Docker has long relieved pressure on O&M staff, yet the first step of an enterprise's move to Docker is Dockerization itself, that is, the Dockerfile, and that step naturally falls within the developers' scope. Moreover, nobody understands an application's personalized monitoring better than its own developers, so it is a welcome arrangement to let them take on this part of the definition. From then on, custom application monitoring is defined during development and carried out through the unified mechanism Docker provides, and monitoring during the O&M stage is no longer so stretched.

It is fair to say that, starting with Docker 1.12, Docker opens a new opportunity for application monitoring, bridging the gap between development and O&M and clearing a blockage that traditional PaaS platforms and container orchestration platforms have seldom been able to clear.

Docker's move toward the OS

After the birth of platforms such as Kubernetes and Mesos, and looking back over the past year or two, the whole ecosystem seems to have tacitly assumed a two-layer split: the container engine, Docker, sits at the bottom as a management tool serving only containers, while orchestration platforms such as Kubernetes or Mesos sit on top to satisfy the various application orchestration needs. I once thought Docker was bound to move up the stack, unwilling to let others sleep soundly at its side. Docker's actual moves have been surprising, though: the strategy it has adopted is to turn toward the OS.

Since the birth of libnetwork, Docker has seemed to be sending a message: skip the third-party tools and draw power directly from the kernel.

However popular Docker is, there remain unsolved problems when it faces traditional resource management. Even if Docker uses the kernel's VXLAN capability to ease or solve networking in the container world, problems remain inside enterprise architectures, such as storage and server load balancing. These problems must be solved, and looking at how enterprise applications have evolved in recent years, enterprises tend to place more trust in the operating system (OS) when choosing their underlying hardware and software infrastructure, while the running-in with an upper-layer cloud platform always brings some friction. Seen this way, it is not hard to understand why Docker's management is becoming OS-oriented; the future direction of containers may well be to break the boundary between traditional IaaS and PaaS and return to a revolution at the level of a broad cloud OS.
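
A small illustration of "drawing power from the kernel" on the networking side (the network name, subnet, and image are placeholders): the overlay driver builds cross-host container networks on top of the kernel's VXLAN support, with no external SDN controller to install.

    # On a swarm manager: create a multi-host overlay network; the data
    # plane is kernel VXLAN tunneling between the nodes.
    docker network create --driver overlay --subnet 10.0.9.0/24 app-net
    # Attach a service to the overlay network
    docker service create --name api --network app-net my-api-image:latest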

Global Storage

For applications, the importance of data is self-evident. Separating compute from storage has long been the data management approach Docker most wants, yet for unified storage management Docker itself has not offered a convincing solution, while ecosystem companies such as ClusterHQ and Hedvig are deep in this field. Docker can hardly be blamed for that: storage is neither its strength nor its main business.

Yet Docker cannot shut itself out of the container ecosystem's storage market, as its evolving storage abstraction shows: Docker was born with only two first-class concepts, the container and the image, but over time it has also promoted the storage volume and the network to first-class concepts managed in parallel with them.

After more than three years of single-host storage volumes, Docker 1.12 brings volumes to cluster environments, supporting data volume sharing across a cluster. At DockerCon 2016, Docker also demonstrated how to use NFS to manage distributed data in a cluster. The container ecosystem has reason to suspect that Docker is not turning a blind eye to storage, but is very likely to lean on operating system capabilities to cut into the storage ecosystem.
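
A rough sketch of the NFS-based approach shown at DockerCon (the server address, export path, and image are placeholders): the built-in local volume driver can mount an NFS export, so the kernel's NFS client, rather than a third-party plugin, carries the data.

    # Create a volume backed by an NFS export; any node that creates the
    # same volume sees the same data.
    docker volume create --driver local \
        --opt type=nfs \
        --opt o=addr=192.168.1.100,rw \
        --opt device=:/exports/appdata \
        appdata
    # Mount it into a container
    docker run -d -v appdata:/data my-app:latest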

Service load balancing

Today, most enterprise applications no longer run as a single instance. Running multiple instances avoids many problems, such as single points of failure (SPOF), and enables load balancing. In the Docker world, scaling containers out has never been a new topic, but how scaled-out application containers and services register themselves and are discovered has never been settled. Kubernetes and similar platforms introduce dedicated routing components for this, yet because of how Docker's networking model and those routing components fit together, there is always some friction and some loss in performance and elsewhere, making a "1 + 1 > 2" effect hard to achieve.

In the new Docker 1.12, Linux IPVS is used directly to handle service registration and load balancing during application orchestration. The direct benefits of this move are likely to be:

  • Kernel capabilities are used, so no extra components need to be configured, deployed, or managed
  • Load-balancing performance improves significantly
  • Load balancing over multiple transport protocols (TCP, SCTP, UDP, etc.) is supported natively

The ultimate simplicity: if an underlying technology stack such as the Linux kernel can provide load-balancing capability, O&M staff have no reason to install an extra load-balancing module whose considerable configuration, management, and operating costs the team's decision makers would otherwise have to weigh. Moreover, compared with Nginx/HAProxy, IPVS has advantages in several respects, such as UDP support, a wide range of load-balancing policies, and health checking.
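
As a sketch of what this looks like in practice (the service name, image, and port are placeholders): in swarm mode a published service receives a virtual IP, and IPVS in the kernel spreads incoming connections across the replicas, with no separate load balancer to deploy or configure.

    # Default endpoint mode "vip": the service gets a virtual IP and the
    # kernel's IPVS distributes connections across the 3 replicas.
    docker service create --name api --replicas 3 \
        --publish 9000:9000 --endpoint-mode vip my-api-image:latest
    # Every node in the swarm now accepts traffic on port 9000 (the routing
    # mesh) and forwards it to a healthy replica.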

Summary

I think "exploration" best describes where Docker stands today. With a global focus on software delivery, Docker's contribution to the DevOps concept should not be underestimated, and facing cloud computing infrastructure and platform architecture, Docker's thinking may grow ever more OS-oriented, gradually moving toward a cloud OS. As one of the world's most watched startups, Docker attracts wide attention and varied speculation, all gathered around this interesting whale. We will wait and see.

From: http://blog.daocloud.io/dockercevopsos/

Address: http://www.linuxprobe.com/docker-devops-os.html

