Walking in the Clouds: CoreOS Practice Guide (i)

Keywords: Docker, CoreOS, etcd, fleet

"Editor's note" Docker and CoreOS are excellent "graduates" of the Silicon Valley incubator, and it is said that two bosses have a good personal relationship, Docker do container engines, CoreOS container management, cooperation is very happy, but with the release of Rocket gradually "parted". Although Docker and CoreOS are seeking "Jane", but Docker "Jane" is to strive to achieve the simplest use of users, CoreOS "Jane" is the pursuit of the ultimate lightweight, which will be the future of container technology, in fact, it is difficult to say. Starting today, Linfan, a software engineer from ThoughtWorks, will bring a "walk in the Clouds: CoreOS Practice Guide" series to take you through the CoreOS essence and recommended practice. This article is based on the first article: CoreOS overlooking.


Author Introduction:

Linfan is an IT engineer and a member of the CloudOps team at the ThoughtWorks Chengdu office. In his spare time he enjoys exploring DevOps-related applications, and he is currently preparing for AWS certification and promoting Docker-related technologies.


Introduction

Many people first heard of CoreOS through a piece of news at the end of July 2014: CoreOS, the operating system built for Docker containers, released its first official stable version. In the months that followed, CoreOS went from strength to strength. In mid-August CoreOS acquired Quay.io, a private Docker registry service; in early September DigitalOcean announced a strategic partnership with CoreOS; at the end of September Microsoft Azure began offering CoreOS system images; and in mid-October Brightbox, a well-known British cloud service provider, also joined the camp of providers supporting CoreOS images. Add to these the world-class cloud providers that already offered CoreOS images, including Amazon's AWS and the cloud computing giants Rackspace and Google Compute Engine, and CoreOS is now available almost everywhere.

As an operating system that has been public for little more than a year (its first release appeared in March 2013), CoreOS has already grown out of the cloud-oriented open source community and large-scale server clusters to compete directly with the mainstream Linux server operating systems. Red Hat has since announced Atomic, a system with built-in container management services, and Canonical has just launched Ubuntu Core; together they are gradually raising a ContainerOps tide and opening a new era of cluster operations. Riding at the crest of this wave, CoreOS is the pioneer of the trend: it is far more than "yet another Linux distribution", but rather a subversion of the thinking of its time.

This series of tutorials starts from the most basic concepts and follows two main threads, large-scale cluster management and application containerization, to explore this unique operating system, so that users who have never touched CoreOS can quickly grasp its essential features and recommended practices.


What Is CoreOS

Simply put, CoreOS is a lightweight Linux distribution built by customizing Chrome OS.

As an operating system, CoreOS uses a highly streamlined kernel and customized peripheral components to provide, at the operating-system level, many capabilities that would otherwise require complex manual work or third-party software, while leaving out everything that is not essential to a server system, such as a GUI and a package manager.

In particular, CoreOS's attitude toward package managers and its native support for Docker deserve a mention. For many users accustomed to managing traditional Linux systems, this is the most disorienting part of their first contact with CoreOS, because CoreOS provides no off-the-shelf package management tool. A typical puzzle is: how do I install software on CoreOS? In fact, CoreOS does not encourage users to install applications directly on the operating system at all. Instead, it advocates running every service in a separate application container, with the container providing the basic runtime environment the application needs. This approach separates the responsibilities of the operating system and the applications much more thoroughly, reduces the coupling between them, and lets the companies running these servers update their online services faster and at lower cost.

CoreOS Walking in the Clouds

It is no exaggeration to say that CoreOS is a cloud-born operating system.

This "Born for the Cloud" contains two meanings:

First, CoreOS's design fully takes the cloud ecosystem's needs for distributed deployment and large-scale scaling into account, something we will come to appreciate in later installments. On the other hand, CoreOS relies considerably on the particular cloud environment it runs in: its startup configuration service, cloud-init, is customized for each platform, and CoreOS officially provides tailored images for virtualization platforms and cloud providers such as Vagrant, VMware, Azure, AWS and Rackspace. As a result, a CoreOS system installed locally straight from the ISO does not get the cloud-init-related features out of the box, such as cluster self-discovery and cross-host fleet management.

CoreOS User Experience

CoreOS's core ideas come from the user experience of the Chrome browser: fast startup, background updates, seamless upgrades across versions, and a separate sandbox for each tab, so that a crashed tab can be recovered quickly and a single crashed sandbox process never brings down the whole browser. Extend this to the server: imagine moving a service hosted in an application container from one server to another as easily as dragging a tab from one Chrome window to another. These are exactly the experiences CoreOS wants to bring to every user.

Faster Boot Speed

Being light makes it fast. As an operating system for modern, networked servers, CoreOS has been pared down by its team to the bare minimum; the result is not only a clean separation between system and applications but also a big improvement in startup speed. According to official figures, the running system occupies only about 114 MB of memory (author's note: that is the official number; measured in a Vagrant environment it was only about 80 MB, even less than advertised), a little more than half (roughly 60%) of what a typical Linux server system uses.

In addition, CoreOS uses systemd, which drew inspiration from the Mac's launchd, as its default init system and service manager (CentOS 7 has likewise replaced the old SysV init scripts with systemd). Compared with SysV, systemd not only tracks system processes better but also offers excellent parallel startup and on-demand activation; combined with Docker's fast container startup, this makes the performance advantage of deploying Docker containers at scale on a CoreOS cluster over other operating systems even more pronounced.
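To make the combination of systemd and Docker concrete, here is a minimal, illustrative unit file; the unit name web.service, the container name and the nginx image are examples chosen for this sketch, not anything prescribed by CoreOS.

    # /etc/systemd/system/web.service -- illustrative example only
    [Unit]
    Description=Example web server running in a Docker container
    After=docker.service
    Requires=docker.service

    [Service]
    # Remove any stale container of the same name, then run in the foreground
    # so that systemd can track the process directly.
    ExecStartPre=-/usr/bin/docker rm -f web
    ExecStart=/usr/bin/docker run --name web -p 80:80 nginx
    ExecStop=/usr/bin/docker stop web

    [Install]
    WantedBy=multi-user.target

Once placed under /etc/systemd/system/, the unit can be enabled and started with the usual systemctl enable and systemctl start commands; fleet, introduced later in this article, schedules units of exactly this shape across a whole cluster.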

Smooth Version Upgrade

Traditional server operating systems, including most Linux distributions, are replaced by a new major release every few years. In the meantime developers keep the system alive with security patches and minor updates but avoid particularly large changes, and eventually the operating system and its software slowly ossify. CoreOS's idea, in contrast, is to be an operating system that can be updated at any time: there is no concept of upgrading across major releases. Instead, like Arch Linux, it uses update channels and rolling updates, so the system can be upgraded straight to the latest release at any moment, and applications are not interrupted even while an update is in progress. With CoreOS, the infrastructure upgrades itself automatically, just as the Chrome browser upgrades itself without bothering the user.
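Which channel a machine follows is selected through a small configuration file. A minimal sketch, assuming the /etc/coreos/update.conf override location conventionally used by CoreOS (after editing it, restarting the update-engine service makes the new channel take effect):

    # /etc/coreos/update.conf -- pick the release channel to follow
    # (typically alpha, beta or stable)
    GROUP=beta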

CoreOS has two system partitions (the "dual root partitions"; they are sometimes described as dual boot partitions, but they are really system partitions holding read-only directories such as /bin, /sbin and /lib). The two partitions are designated active and passive and each has its own duty while the system runs: the active partition runs the current system, and the passive partition receives upgrades. When a new version of the operating system is released, a complete set of system files is downloaded to the passive partition; at the next reboot the machine switches over, the former passive partition becomes the active one, and the previous active partition becomes passive. Throughout this process the machine being updated does not need to be removed from the load-balanced cluster, and to make sure other applications are not disturbed, CoreOS uses Linux cgroups to limit disk and network I/O during the update.

It is worth mentioning that, unlike on a traditional Linux server, the CoreOS system partition stays read-only while the system is running. This improves security and further reflects CoreOS's stance that users should not install applications directly on the operating system. At the same time, keeping the kernel and system components at highly consistent versions across the cluster removes much of the operational complexity caused by version drift and makes the operating system itself easier to maintain.

Containerized Applications

In CoreOS, every application is packaged into a Docker container which, like a shipping container for code, runs on the operating system through a minimal, uniform interface. This means applications can be moved easily between operating systems and machines, just as standard shipping containers are carried by ships and trains, and it also means the operating system can be updated without disrupting the applications.

Docker is becoming increasingly popular as developers deploy applications onto cloud infrastructure. In a containerized computing environment, applications share the system kernel and hardware resources yet do not interfere with one another; a failed container can be restarted quickly, and a failure of the application inside one container will not bring down the whole system. The idea is exactly the same as the browser's sandbox.

CoreOS Distributed System Services

The hardest part of moving to the cloud is the shift from centralized to distributed thinking: distributed services, distributed deployment, distributed management, distributed data storage... and these are all part of the server revolution that CoreOS brings.

To address these distributed concerns at the system level, the CoreOS team provides several important tools that help users manage CoreOS clusters and deploy Docker containers.

Cloud-init

At system startup, CoreOS reads a user-supplied, platform-specific configuration file (called cloud-config) to complete its initial configuration. Using the information in this file, a freshly booted CoreOS server starts the necessary service processes, automatically discovers the other servers in its cluster, exchanges the information they need to interact, and then joins the cluster. This "self-discovering" way of organizing a cluster keeps cluster management simple and efficient.

In general, a cloud-config file should contain at least the discovery address of the cluster the server belongs to and the parameters needed to start the etcd and fleet services. Users can add further customized services to the configuration as needed, so that the node comes up fully functional as soon as it boots. A minimal sketch is shown below.
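The following illustrative cloud-config reflects the CoreOS conventions of this period; the discovery token URL is a placeholder you would generate yourself, the $private_ipv4 substitution variable is only available on platforms that provide it, and the #cloud-config marker must be the very first line of the file.

    #cloud-config

    coreos:
      etcd:
        # Generate a fresh token at https://discovery.etcd.io/new and paste it here
        discovery: https://discovery.etcd.io/<token>
        addr: $private_ipv4:4001
        peer-addr: $private_ipv4:7001
      units:
        - name: etcd.service
          command: start
        - name: fleet.service
          command: start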

Etcd

The skeleton of a CoreOS cluster is etcd, a distributed key/value store through which the programs and services in the cluster can share information and perform service discovery. etcd is built on the well-known Raft consensus algorithm: a leader is elected among the servers to coordinate data replication, ensuring that the information in the cluster stays consistent and available. etcd is installed by default on every CoreOS system.

In its default configuration, etcd uses two ports: 4001, through which external applications read and write data over HTTP+JSON, and 7001, which the etcd instances use to synchronize data with one another. Users can further secure their data by configuring CA certificates so that etcd reads, writes and synchronizes data over HTTPS.
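As an illustration of the HTTP+JSON interface, the short Python sketch below writes and then reads a key through etcd's v2 keys API on the default client port; it assumes the third-party requests library and an etcd endpoint reachable at 127.0.0.1:4001.

    # Minimal sketch of etcd's HTTP+JSON (v2 keys) API, using Python.
    # Assumes the "requests" package and a local etcd listening on port 4001.
    import requests

    BASE = "http://127.0.0.1:4001/v2/keys"

    # Write (or overwrite) a key.
    resp = requests.put(BASE + "/message", data={"value": "Hello CoreOS"})
    resp.raise_for_status()

    # Read it back; the value is nested under node.value in the JSON reply.
    doc = requests.get(BASE + "/message").json()
    print(doc["node"]["value"])  # -> Hello CoreOS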

Fleet

Fleet is a tool for controlling and managing a CoreOS cluster through systemd. Fleet talks to systemd over its D-Bus API, and the fleet agents on each machine register with and synchronize state through the etcd service. Fleet offers a rich set of functions, including viewing the state of the servers in a cluster, starting or stopping Docker containers, and reading log output. More importantly, fleet keeps the services in the cluster available: when a server running fleet-managed services becomes unavailable, for example because it drops out of the cluster after a hardware or network failure, the services that were running on it are rescheduled by fleet onto other available servers. Although fleet is still at a very early stage, it is already very effective at managing CoreOS clusters, leaves plenty of room for extension, and provides a simple API for integration. A sketch of a fleet unit is shown below.
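A fleet unit is simply a systemd unit with an optional [X-Fleet] section that gives the scheduler placement hints. The sketch below is illustrative only: the template name, image and the X-Conflicts hint follow the fleet conventions of this period rather than anything taken from the article.

    # web@.service -- an illustrative fleet unit template
    [Unit]
    Description=Example web container scheduled by fleet
    After=docker.service
    Requires=docker.service

    [Service]
    ExecStartPre=-/usr/bin/docker rm -f web-%i
    ExecStart=/usr/bin/docker run --name web-%i -p 80:80 nginx
    ExecStop=/usr/bin/docker stop web-%i

    [X-Fleet]
    # Ask fleet never to place two instances of this template on the same machine
    X-Conflicts=web@*.service

Submitting and starting instances across the cluster would then look something like fleetctl submit web@.service followed by fleetctl start web@1.service web@2.service, with fleetctl list-units showing where each instance landed.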

Epilogue

Starting from the next chapter, we will build a CoreOS cluster and, step by step, become familiar with every aspect of the system. (Author: Linfan; Reviewer: Zhou Xiaolu)

