Objective
In the era of the mobile Internet, new technologies must support new environments, new software delivery processes, and new IT architectures, enabling a platform-oriented architecture, continuous delivery, and business services. Containers are becoming the standard deliverable for next-generation applications, and the container cloud helps enterprise users build R&D processes and cloud platform infrastructure. It lowers the operational threshold by shortening the cycle from application delivery to the cloud, and it accelerates the shift of technology and business toward the Internet model. The container cloud connects to a variety of code hosting repositories for automated continuous integration and Docker image building, laying the foundation for next-generation application delivery and DevOps. It supports one-click deployment of applications and provides application lifecycle management services such as load balancing, custom domain binding, and performance monitoring, making it an ideal platform for microservice architectures and for lightweight application deployment and operations. The future IT community will treat containerized applications as the delivery standard. The container cloud gives developers and businesses a platform to quickly build, integrate, deploy, and run containerized applications, improving the iteration efficiency of application development while simplifying operations and reducing operational costs.
For enterprises, the value of the container cloud is reflected in:
- As an internal technology hub and a unified deployment platform for multiple applications, it improves development efficiency and reduces operation and maintenance costs.
- As a platform product, it requires low investment and yields high output value. Building on existing IaaS platform resources, the container cloud accumulates valuable customer resources and data resources in the way traditional projects do, and these intangible assets ultimately bring great strategic value to the company.
- It strengthens the enterprise's identity as a cloud provider. A cloud platform that offers only IaaS-layer services is incomplete and does not serve users sufficiently; the enterprise should offer users a full range of cloud services and an advanced, high value-added cloud platform.
Application Feature Architecture
The container cloud platform, built on Kubernetes, can be divided into four functional parts (a minimal scaling sketch follows the list):
- automated deployment, upgrade, and replication of containers;
- scaling the number of containers up or down at any time, giving the containers elasticity;
- running and managing containers across machines as a cluster, with load balancing between containers;
- a self-healing mechanism that keeps the container cluster running in the user's desired state.
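As a minimal illustration of the elasticity and desired-state behaviour listed above, the sketch below uses the official Kubernetes Python client to scale a hypothetical `web` Deployment in a `demo` namespace; the names and replica count are assumptions for illustration, not details of this platform.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (use config.load_incluster_config()
# when running inside the cluster).
config.load_kube_config()
apps = client.AppsV1Api()

# Declare a new desired replica count for the hypothetical "web" Deployment;
# Kubernetes then converges the cluster toward this desired state.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="demo",
    body={"spec": {"replicas": 5}},
)
```

The same declarative mechanism underlies the self-healing behaviour: if a container dies, the controller recreates it to restore the declared state.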
Future container clouds may also become the dominant layer of cloud management platforms, with more applications running natively on them. A container cloud can abstract many scattered physical computing resources into one large resource pool and use those resources to run users' computing tasks; for the user, operating a distributed cluster of resources feels like using a single computer. With the concentrated rise of artificial intelligence and the demand of machine-learning training for fast iteration, a container cloud that offers lightweight access and second-level response shows a huge advantage. Google's TensorFlow project was integrated with Kubernetes early on, and the AI platform has implemented Spark on Kubernetes and Hadoop on Kubernetes. Container technology, DevOps, and microservices are all part of a flexible, resilient, lightweight model of IT applications. This model has positive significance for the rapid development of complex products; the contributions of Google and other IT giants to the industry have narrowed the gap between start-ups and large IT enterprises and ushered in the best era of IT. The container cloud architecture consists of six functional modules, each with its corresponding container technology stack, described in the sections that follow.
Storage Scenarios
The back-end storage is primarily Ceph-driven. Ceph is unique in providing object, block, and file storage in one unified system that is highly reliable, simple to manage, and free software. Ceph is powerful enough to transform a company's IT infrastructure and manage massive amounts of data, and it scales well: thousands of users can access petabytes and even exabytes of data. Ceph nodes run on commodity hardware with intelligent daemons; a Ceph storage cluster organizes a large number of nodes, which communicate with each other to replicate data and redistribute it dynamically. The main application scenario for Ceph in the container cloud is stateful services: workloads such as relational and NoSQL databases that need persistent storage.
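As a rough sketch of how a stateful service might request Ceph-backed persistent storage, the snippet below creates a PersistentVolumeClaim against a hypothetical `ceph-rbd` StorageClass using the Kubernetes Python client; the class name, namespace, and size are assumptions, not details taken from this platform.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# PersistentVolumeClaim bound to a hypothetical Ceph RBD StorageClass "ceph-rbd".
# A database pod can then mount this claim to keep its data across restarts.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="mysql-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ceph-rbd",
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)
```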
Network Solutions
For the underlying container network we started with Calico. Calico is a pure layer-3 network: it does not introduce an overlay and does not encapsulate packets. From a container on one host to a container on another, the path is reachable end to end at layer 3, so when something goes wrong it is easy to see where the problem lies; debugging is straightforward and the network is easy to manage. Application traffic leaving a container is completely isolated from layer 2; most of our applications only need layer 3, and few applications deal with layer 2. Calico also supports rich network policies that enable multi-tenant management, which is critical for delivering container cloud services in the future.
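The multi-tenant isolation mentioned above is usually expressed with Kubernetes NetworkPolicy objects, which Calico enforces. Below is a hedged sketch that limits ingress to `app=web` pods to traffic from within the same hypothetical `tenant-a` namespace; the namespace, policy name, and label are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

# Once this policy selects the app=web pods, only ingress from pods in the same
# namespace is allowed; all other ingress traffic to them is denied.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="web-allow-same-tenant", namespace="tenant-a"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                # empty pod selector = any pod in this namespace
                _from=[client.V1NetworkPolicyPeer(pod_selector=client.V1LabelSelector())],
            )
        ],
    ),
)
net.create_namespaced_network_policy(namespace="tenant-a", body=policy)
```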
Container Orchestration Scheme
Kubernetes is Google's open-source container cluster management system, the open-source version of Borg, Google's large-scale container management technology refined over many years. Its main features include:
- Container-based application deployment, maintenance, and rolling upgrades
- Load balancing and service discovery (see the sketch after this list)
- Cross-machine and cross-region cluster scheduling
- Auto scaling
- Stateless and stateful services
- Extensive volume support
- A plug-in mechanism that guarantees extensibility
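As a small illustration of the load balancing and service discovery feature, the sketch below creates a ClusterIP Service that routes port 80 traffic to pods labeled `app=web`; the service name, namespace, labels, and ports are hypothetical, not taken from this platform.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# A ClusterIP Service gives the "web" pods a stable virtual IP and DNS name,
# and load-balances incoming traffic across the pods matching the selector.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web", namespace="demo"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="demo", body=service)
```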
Kubernetes has developed very rapidly and has become the leader in the container orchestration field. It provides many features that simplify the application workflow and speed up development. A successful application orchestration system needs strong automation, which is why Kubernetes is designed as an ecosystem platform for building components and tools that make it easier to deploy, scale, and manage applications. Users can use labels to organize resources in their own way and use annotations to attach custom descriptive information to resources, for example to provide status checks for management tools. In addition, the Kubernetes controllers are built on the same API that developers and users use. Users can write their own controllers and schedulers, or extend the functionality of the system through various plug-in mechanisms. This design makes it easy to build all kinds of application systems on top of Kubernetes. The entire Kubernetes cluster is currently deployed in a highly available configuration.
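To make the label and annotation usage above concrete, here is a minimal sketch that lists pods by a hypothetical label selector and reads their annotations; the label values and namespace are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Select resources the way platform tooling would: by label.
pods = core.list_namespaced_pod(
    namespace="demo",
    label_selector="app=payment,tier=backend",  # hypothetical labels
)
for pod in pods.items:
    # Annotations can carry tool-specific metadata, e.g. a status-check note.
    print(pod.metadata.name, pod.metadata.annotations)
```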
Performance Monitoring Solutions
Container monitoring covers the Kubernetes cluster (each of its components), application services, pods, containers, and the network. It focuses on three aspects:
- health monitoring of the Kubernetes cluster itself (the five basic components, Docker, etcd, Calico, etc.);
- system performance monitoring, such as CPU, memory, disk, network, file system, and processes;
- business resource status monitoring, mainly including RC/RS/Deployment, Pod, Service, and so on.
For the overall health and performance monitoring of containers, a self-developed monitoring system provides unified monitoring of all IT resources.
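As one simple way to pull the raw pod CPU and memory usage that feeds such monitoring, the sketch below queries the `metrics.k8s.io` API with the Python client. It assumes metrics-server (or an equivalent adapter) is installed in the cluster and is only an illustration, not the self-developed monitoring system described above.

```python
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

# Current CPU/memory usage of every pod in the hypothetical "demo" namespace,
# as reported by the metrics.k8s.io aggregated API.
metrics = custom.list_namespaced_custom_object(
    group="metrics.k8s.io", version="v1beta1",
    namespace="demo", plural="pods",
)
for item in metrics["items"]:
    for container in item["containers"]:
        print(item["metadata"]["name"],
              container["name"],
              container["usage"]["cpu"],
              container["usage"]["memory"])
```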
Log Collection Scenarios
The logging system of the container platform generally includes the logs of the Kubernetes components, the event logs of resources, and the logs of the applications running in containers. This container cloud platform uses Fluentd, launched as a DaemonSet, to collect logs and ship them to a unified logging platform.
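A minimal sketch of a Fluentd DaemonSet created through the Python client is shown below; the image tag, namespace, and host log path are assumptions for illustration, and a real deployment would also mount a Fluentd configuration that points at the unified logging platform.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

labels = {"app": "fluentd"}

# One Fluentd collector pod per node, with the host's log directory mounted
# read-only so container logs can be tailed and forwarded.
daemonset = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="fluentd", namespace="kube-system"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[client.V1Container(
                    name="fluentd",
                    image="fluent/fluentd:v1.16",  # illustrative image tag
                    volume_mounts=[client.V1VolumeMount(
                        name="varlog", mount_path="/var/log", read_only=True)],
                )],
                volumes=[client.V1Volume(
                    name="varlog",
                    host_path=client.V1HostPathVolumeSource(path="/var/log"))],
            ),
        ),
    ),
)
apps.create_namespaced_daemon_set(namespace="kube-system", body=daemonset)
```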
CI/CD Solutions
The CI/CD (Continuous Integration and Continuous Deployment) module shoulders the responsibility of DevOps. It is the bridge between development and operations personnel, automating the path from code to a running service and satisfying the requirement for one-click continuous integration and deployment in the development process. This container cloud platform integrates with the continuous integration and release system. In addition, it implements service scaling, elastic scaling (HPA), load balancing, and grayscale (canary) releases, and includes plug-ins for code quality checks (Sonar), automated testing, and performance testing; these are important components of the CI/CD PaaS platform.
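As a hedged example of the elastic scaling (HPA) mentioned above, the snippet below creates an autoscaling/v1 HorizontalPodAutoscaler for a hypothetical `web` Deployment; the names, replica bounds, and CPU target are illustrative, not this platform's settings.

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

# Keep the "web" Deployment between 2 and 10 replicas, targeting 70% average
# CPU utilization across its pods.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="demo"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="demo", body=hpa)
```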
Cloud Platform Features
Through the visual interface, the deployment, management, and monitoring of resources, including resource orchestration, can be carried out simply and conveniently. The following functions are provided:
- Comprehensive monitoring: hosts, routers, disks, public IPs, and load balancers all have complete historical and real-time monitoring information.
- Graphical operation: hosts and their related resources are displayed visually and can be manipulated directly on the graph; all changes are updated automatically.
- Operation logging: all important user operations are recorded, making them easy to locate and trace.
- Network topology display: complex network topologies are shown graphically, making it more convenient and intuitive to connect private networks and hosts.
- Backup chain visualization: backup nodes are visible at a glance, and you can directly create a new backup or roll back to a previous state.
- Auto-scaling policies: automatic scaling policies are defined from resource monitoring information, adjusting resource configuration or cluster size without human intervention.
- Notification lists: used to receive monitoring alarm notifications and the execution results of timed tasks or auto-scaling policies.
- Rapid system building: with resource orchestration, an existing system with a complex topology can be replicated in just a few minutes, system architecture can be quickly planned and its cost evaluated, and resource topologies can be reused across regions.
- Rich template creation methods: in the console, users can build templates from scratch, start from a common template recommended by the system, continue to refine templates they have already created, or extract a set of topological relationships from existing resources into a template.
Summary
The entire DevOps system derived from the CaaS platform is critical. The final step is to apply algorithms to log, monitoring, and APM data to achieve root cause analysis, that is, AIOps: the ability to quickly locate faults and feed the results back to operations and development, forming a closed loop.
The AIOps platform defined by Gartner has 11 capabilities: historical data management, streaming data management, log data ingestion, wire data ingestion, metric data ingestion, document and text ingestion (NLP), automated pattern discovery and prediction, anomaly detection, root cause determination, on-premises delivery, and software-as-a-service delivery. This is, of course, follow-up and ongoing work.
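As a toy stand-in for the anomaly detection capability listed above (not this platform's actual algorithm), the sketch below flags points in a metric series that deviate strongly from a trailing-window baseline.

```python
import statistics

def detect_anomalies(samples, window=30, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid division by zero
        if abs(samples[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Example: a flat CPU-usage series with a single spike at the end.
series = [0.30] * 40 + [0.95]
print(detect_anomalies(series))  # -> [40]
```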