6 Big Considerations for Docker Hosting

The author of this article, Phil Dougherty, is a co-founder of ContainerShip, a cloud service provider offering cross-cloud services. Before founding ContainerShip, Dougherty lived through the process of containerizing and distributing a system at his previous company, an experience he describes as painful. This article distills that experience into six issues to watch out for when hosting containers.
The following is the translation:
Unless you've been living in the Stone Age for the past few years, you know about containers and Docker. If you're an Internet geek, you're probably ready to run Docker in a production environment.
A year ago I was also trying to keep up with the pace of new technology, and I ended up busy shipping a system assembled from the raw ecosystem. The scars I picked up taught me a great deal, and more importantly they made me see the shortcomings of many popular tools. By the time we had built a decent hosting stack, my team had written a large amount of custom code to integrate these technologies, and I was so dissatisfied with the design and operation of the system that I left and became a co-founder of ContainerShip.
Obviously I may be biased, but I think ContainerShip is very good: it can save you a great deal of time and pain. Of course, I won't force you to use it, and I will try to keep that bias out of this article.
1 Too many moving parts
Microservices and service-oriented architecture advocate breaking a system into many small, loosely coupled software modules rather than one monolith. On a large software project this makes it easier to divide the modules among different teams, and each team can build with the technology it knows best.
This thinking has even penetrated to the infrastructure level. Take the essence and discard the dross: in theory it can work, but in practice it may be a beautiful trap.
When your hosting platform is made up of a series of moving parts, each maintained by a different open source project or company, you have to write a lot of custom code to glue them together. That glue code is yours to maintain, and when a problem occurs it is hard to determine which component is failing, and perhaps no one can help, because it is time-consuming and difficult for team members to learn such a large number of components. If an API changes or a new version is released, you are on your own to make everything work again.
I originally walked down this road with an infrastructure team serving hundreds of very busy services, and it ended in agony.
2 No high availability
Several increasingly popular open source projects completely ignore the high availability of their master servers, the servers that orchestrate and manage the rest of the cluster. The scary thing is that these companies claim their products are the best way to run Docker in a production environment. If a cluster management system is not highly available, has no concept of leader election, and runs on a single server, I cannot imagine calling it a production-grade product. Whichever solution you choose, make sure it supports multiple highly available masters, along with an election mechanism to decide which master is the leader and a clean way to hand off when that leader dies.
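To make that requirement concrete, here is a minimal sketch of lease-based leader election among several masters. It is purely illustrative: the LeaseStore class below stands in for a real coordination service such as etcd, ZooKeeper, or Consul, and every name in it is invented for this example.

```python
import time
import threading

# Toy lease-based leader election. In a real cluster the lease would live in
# a coordination service (etcd, ZooKeeper, Consul); this Lock-guarded object
# merely stands in for that shared store so the sketch is self-contained.

LEASE_TTL = 5.0  # seconds a leadership lease stays valid without renewal

class LeaseStore:
    """Stand-in for a shared store offering an atomic acquire-or-renew."""
    def __init__(self):
        self._lock = threading.Lock()
        self._leader = None   # node id of the current leader, if any
        self._expires = 0.0   # wall-clock time when the lease lapses

    def try_acquire(self, node_id):
        """Take the lease if it is free or expired, or renew it if we
        already hold it. Returns True when node_id is the leader."""
        with self._lock:
            now = time.time()
            if self._leader in (None, node_id) or now > self._expires:
                self._leader = node_id
                self._expires = now + LEASE_TTL
                return True
            return False

def master(store, node_id, rounds=3):
    # Each master repeatedly tries for the lease; exactly one holds it at a
    # time, and the others stand by, ready to take over if it stops renewing.
    for _ in range(rounds):
        if store.try_acquire(node_id):
            print(f"{node_id}: leading, scheduling cluster work")
        else:
            print(f"{node_id}: standby, watching the leader")
        time.sleep(1)

store = LeaseStore()
threads = [threading.Thread(target=master, args=(store, f"master-{i}"))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If the leading master crashes and stops renewing, its lease lapses after the TTL and a standby takes over automatically; that automatic handoff is exactly what the single-master products above cannot offer.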
3 Not open source
I was very miserable because I had staked everything on a non-open-source project. If you haven't had a similar experience, you might find it hard to believe. Take Heroku's scandal with Rap Genius as an example: no one imagined that the documentation of their underlying hosting platform could misrepresent how it worked, while users were plagued by long response times. Plenty of strange things happen inside these closed-source Docker management systems.
4 Non-essential network requirements
There is now a tendency to assign a unique IP address to each container through an overlay network on the host system. The biggest benefit of this approach is ease of use, but it comes at the cost of latency and reduced bandwidth. Even some of the most professional container orchestration systems are selling this approach to users. Newer implementations have improved things, but sacrificing performance for usability is not a good idea.
The other approach is port mapping: every container runs on a random host port, and traffic is directed to the right place through bookkeeping. The good news is that the port-mapping problem is not actually hard to solve, and you don't need to give up performance. The purpose of a distributed architecture is to improve performance, availability, and functionality, not to compromise performance because of a wrong choice.
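As a rough illustration of why that bookkeeping is manageable, here is a self-contained sketch of the registry such a setup needs: each container lands on a random high host port (Docker's -P flag does the real binding), registers where it ended up, and clients resolve service names through the registry instead of through overlay IPs. All names here are invented for the example; real deployments pair port mapping with a service-discovery tool and a load balancer.

```python
import random

# Toy service registry for the port-mapping approach. Illustrative only:
# in reality Docker's -P flag binds each exposed container port to an
# ephemeral host port, and a discovery tool plus load balancer do the lookup.

registry = {}  # service name -> list of (host_ip, host_port) endpoints

def register_container(service, host):
    """Pretend to launch a container and record where its port landed."""
    port = random.randint(32768, 61000)  # roughly Docker's ephemeral range
    registry.setdefault(service, []).append((host, port))
    return host, port

def resolve(service):
    """Return one endpoint for a service; a real router would round-robin
    and health-check instead of picking at random."""
    endpoints = registry.get(service)
    if not endpoints:
        raise LookupError(f"no endpoints registered for {service}")
    return random.choice(endpoints)

# Two instances of the same service on different hosts, then a lookup.
for host in ("10.0.0.5", "10.0.0.6"):
    register_container("api", host)

print("api ->", resolve("api"))
```

The point is that this indirection costs one lookup, not a per-packet encapsulation penalty the way an overlay does.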
5 Hosting your core business
Almost all of the large cloud providers have released their own container hosting solutions, such as AWS's EC2 Container Service, Google's Container Engine, and Joyent's Triton. Unfortunately (in my view), running your container workloads on such a hosted service forfeits one of the biggest benefits of containers: portability. A hosting provider will do everything possible to keep you. In the past you didn't have much choice beyond pointing your configuration management at the APIs of multiple providers; now the situation has changed, and open, vendor-independent options exist. I strongly recommend that you avoid any system that is inflexible and does not make vendor neutrality a primary goal.
Another option is the "Docker as a Service" providers, which run all of the important systems on their own networks and give you a simple agent to run on each managed host; the agent connects back to their management system. But when you no longer want to pay such a vendor, or their service no longer fits the growth of your business, you can't fall back to a free and open source mode. ContainerShip can: when you start a cluster with ContainerShip, the core of the system runs on your own servers, and you can drop the cloud service at any time and keep running on the open source core.
6 Mandatory operating systems
I used to be responsible for security and PCI DSS compliance, and for the hundreds of monitoring and auditing requirements that come with it. Using a micro Linux operating system was out of the question, because IDS, logging, and security software is a poor fit for hosts that can only install and run software through Docker. That may not be a problem for you, but I want the freedom to choose the operating system. Why should using Docker dictate a particular operating system? And why should you need a particular init system to run containers? That is exactly the kind of tight coupling you are trying to avoid. Flexibility and support for a variety of Linux distributions matter, especially to businesses that want to keep using certain operating systems because they have already invested in training and have services deployed on them.
Summary
These are just my suggestions, but they come from countless hours of research, development, and hands-on experience running Docker at scale. Keep them in mind when you plan a migration to containers and distributed systems. Sometimes the hype will lead you down a road that stops working six months later. Technology in the computing world changes quickly, but the best practices distilled over years do not. Trust your intuition: containers do not make a bad solution better.
This article was originally published on the WeChat official account "Container Technology Daily".