DockOne WeChat Share (79): Building an enterprise-class PaaS cloud platform on container technology

"Editor's note" The Enterprise Containerized PAAs platform is designed to provide the underlying support capabilities for enterprise applications, covering application development, application delivery, and on-line operations, including code management, continuous integration, automated testing, deliverable management, application hosting, middleware services, automated operations, monitoring alarms, log processing, etc. This sharing mainly introduces the related technology, the core function module and the related scheme, which is based on the container technology to build the PAAs platform.

To meet these requirements, Mopaas Enterprise Edition is built on open source frameworks such as Cloud Foundry and Kubernetes together with patented intelligent cloud platform technology. The platform provides a variety of standardized and non-standardized runtime environments as well as a range of operations and management functions, so users can obtain the resources and environments they need on demand within seconds. The platform's greatest value is that it frees up development, testing, and operations staff and reduces the cost and time of delivering application software.

In Mopaas Enterprise Edition, Cloud Foundry provides the standard runtime environment, the standard way of binding middleware services, authentication and authorization, soft routing, organization management, resource quotas, and other functions. Cloud Foundry's newer runtime, Diego, can run Docker images, but this capability is relatively limited and applications running in Diego must conform to certain architectural standards. Mopaas Enterprise Edition therefore uses Kubernetes to support non-standard runtime environments and middleware services, so applications that do not conform to Cloud Foundry's architectural standards can easily run in the Kubernetes environment. This makes the Mopaas enterprise cloud platform quite flexible, with few restrictions on application runtimes and architectures, and makes it easy to migrate older applications onto the cloud platform. Mopaas manages and schedules Cloud Foundry and Kubernetes resources in a unified way.

The Mopaas platform enables users to provision computing resources dynamically and deliver applications quickly according to business needs. In particular, it helps users significantly reduce IT and application spending, shorten the time it takes to bring applications online, and simplify IT and application management. As a platform for continuous digital innovation and business continuity, Mopaas also helps enterprise users respond effectively to market changes and stay competitive through continuous innovation.

The overall architecture of the Mopaas Platform (Enterprise Edition) is as follows:

Application runtime environment and middleware services


The application runtime environment and middleware services are the core function modules of a PaaS platform. When users publish an application to the platform, they only need to supply the application itself; the operating system, software environment, middleware services, and so on are provided by the platform and are configured and deployed automatically.

Application Runtime Environments

A PaaS platform needs to support different types of applications, application architectures, and the software environments they depend on. The Mopaas platform currently supports three types of application runtime environments and three ways of publishing applications.

The three types of application runtime environments are:

    1. Standard runtime environment based on Cloud Foundry
      The standard runtime environment is pre-built by the platform with optimized configurations. It suits applications with no special environment requirements: users do not have to prepare the environment or its configuration themselves, and only need to push an executable package to the platform.
      • Multiple languages and frameworks: covers Go, Ruby, Python, Java, JavaScript (Node.js), PHP, and their frameworks
      • Language and framework extension mechanism via buildpacks
      • Service plug-ins and extension mechanism: provides a variety of basic services, such as code hosting (Git), databases (MySQL, PostgreSQL, MongoDB), caches (Redis and Memcached), message queues (RabbitMQ, etc.), Jenkins, and numerous third-party service extensions

    2. Docker-based Mopaas images
      The Mopaas image runtime suits applications whose environment needs some customization. Users build their own application images on top of the Mopaas base images and can adjust the base image configuration at build time. This saves users the time of building a base image from scratch, lets them customize the application image, and allows more types of applications to run on the Mopaas platform.

    3. Docker-based custom images
      Custom images require users to prepare the runtime environment themselves: the image is built locally and then pushed to the platform to run. With custom images the entire environment is defined by the user, which is the most flexible approach and can fully satisfy individual requirements.


Three ways to publish an application:

    1. Executable package release (WAR, ZIP)
      The user first compiles and packages the application into an executable package locally, then requests a runtime environment and middleware services on the platform and pushes the package through the web UI or the command line. The platform receives the package, builds it into an application image, and publishes it to run on the platform.

    2. Source code release
      The user commits code to the platform's Git repository, or to an external Git repository associated with the platform application. A continuous integration process is then triggered manually or automatically; once the application has been built into an image, it is published to the platform to run.

    3. Image release
      Users push their own images to the Mopaas platform for publishing.


Middleware Services

Middleware services are software resources that applications depend on at runtime, such as databases. The platform provides middleware services in two ways:

    1. Middleware services provided within the platform
      Middleware services provided within the platform run as containers, currently on Kubernetes, and use clustering or master-slave replication to ensure high availability. Running middleware in containers is very different from running applications: we can ask applications to be designed as statelessly as possible, but most middleware has to store data locally and cannot be stateless. The key to running middleware services in containers is therefore solving local data storage. Kubernetes currently supports many kinds of PersistentVolume, which can store the data on external network storage services and solve this problem (a minimal PersistentVolumeClaim sketch follows this list).

    2. Middleware services provided outside the platform and accessed by it
      Some middleware services are not yet suitable for running in containers, especially those provided by third parties (such as Oracle) or operated by dedicated teams. Such middleware can be deployed independently of the platform and accessed from the platform in a loosely coupled way. The specific integration options are illustrated below:
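Returning to the first option above, the following is a minimal sketch of requesting persistent storage for a containerized middleware service with the official Kubernetes Python client; the claim name, namespace, and storage size are illustrative assumptions, not values taken from the platform.

    # Minimal sketch: request persistent storage for a containerized middleware
    # service (e.g. a MySQL instance) via a PersistentVolumeClaim. Assumes a
    # reachable cluster and a StorageClass backed by network storage
    # (NFS, GlusterFS, Ceph, ...). Names and sizes are illustrative.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    core_v1 = client.CoreV1Api()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="mysql-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )

    # The claim is later referenced as a volume in the middleware pod spec, so
    # the data survives container restarts and rescheduling.
    core_v1.create_namespaced_persistent_volume_claim(namespace="middleware", body=pvc)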


External access to applications

Applications deployed on Diego or Kubernetes run as containers, and containers are scheduled and managed by the platform, so external clients cannot access an application through a container's IP and port directly.

Cloud Foundry dynamically updates container IPs and ports into the routing table managed by its router so that applications can be reached from outside; HTTP(S) is supported by default. Kubernetes Services provide a powerful mechanism: a ClusterIP acts as the access endpoint for a set of Pods and provides soft load balancing across them. However, a Service's cluster IP is only reachable from within the cluster; it can be exposed externally through NodePort mode, in which every node of the cluster forwards the specified port to the application.
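As an illustration of the NodePort exposure described above, here is a minimal sketch using the official Kubernetes Python client; the service name, selector label, and port numbers are assumptions for the example.

    # Minimal sketch: expose an application's pods outside the cluster through a
    # NodePort Service. Service name, selector, and ports are illustrative.
    from kubernetes import client, config

    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    svc = client.V1Service(
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1ServiceSpec(
            type="NodePort",
            selector={"app": "demo-app"},        # pods carrying this label
            ports=[client.V1ServicePort(
                port=80,                         # ClusterIP port inside the cluster
                target_port=8080,                # container port
                node_port=30080,                 # port opened on every node
            )],
        ),
    )

    core_v1.create_namespaced_service(namespace="default", body=svc)
    # Any <node-ip>:30080 now reaches the application, so the platform can
    # register node-ip:30080 pairs in the front-end router's routing table.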

If the application needs to be accessed via a domain name, the Kubernetes nodes' IP plus NodePort simply has to be registered in the router's routing table. If the application needs TCP access, containers managed by Diego can have their IP and port dynamically updated into the front-end load balancer; for Kubernetes, the cluster nodes' IP plus NodePort is dynamically updated into the front-end load balancer. The implementation principle is as follows:

The Mopaas platform provides an independent Router Proxy module that dynamically loads the routing table, supports both TCP and HTTP load balancing, and allows domain mappings to be changed flexibly and quickly.
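To make the idea concrete, the following is a minimal sketch of a dynamically updated routing table with round-robin load balancing; the data structures and names are illustrative and not the Router Proxy's actual implementation.

    # Minimal sketch of a dynamic routing table: domain -> list of backends
    # (container ip:port for Diego, node-ip:NodePort for Kubernetes), with
    # round-robin selection. Illustrative only.
    import itertools
    import threading

    class RoutingTable:
        def __init__(self):
            self._lock = threading.Lock()
            self._backends = {}   # domain -> list of "host:port" strings
            self._cursors = {}    # domain -> round-robin iterator

        def update(self, domain, backends):
            """Called whenever containers are rescheduled or nodes change."""
            with self._lock:
                self._backends[domain] = list(backends)
                self._cursors[domain] = itertools.cycle(self._backends[domain])

        def pick(self, domain):
            """Return the next backend for a request to this domain."""
            with self._lock:
                cursor = self._cursors.get(domain)
                return next(cursor) if cursor else None

    table = RoutingTable()
    table.update("app-a.example.com", ["10.0.0.11:30080", "10.0.0.12:30080"])
    print(table.pick("app-a.example.com"))   # 10.0.0.11:30080
    print(table.pick("app-a.example.com"))   # 10.0.0.12:30080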

CI/CD on the PaaS platform

The platform integrates Git as the source code repository and automatically creates a Git repository for each new application a user creates on the platform. An application can also be associated with a Git repository outside the platform; once associated, the whole process of building the source code and publishing it to run on the platform can be automated.

A continuous integration process runs before each release. The process can be predefined and may differ between delivery teams, and a release succeeds only when all stages pass. During development and testing, an auto-build feature can be enabled so that code changes automatically trigger the build process and the integration status and results are fed back promptly.

The functions are as follows:

The platform uses Jenkins to drive the whole continuous integration process, which consists of code quality checks, unit tests, build, and deployment stages. When a deployment is triggered, the platform first checks whether continuous integration tasks already exist on Jenkins for the current application; if they do, the first task is triggered, otherwise the tasks are created first. A successful run of each task triggers the next one, and together the tasks make up the continuous integration pipeline. The start, success, and failure status of each task is fed back to the platform via webhooks; the platform records these states and drives the rest of the process based on the results.
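A minimal sketch of this check-create-trigger flow with the python-jenkins client is shown below; the Jenkins URL, credentials, and the convention of naming jobs after the application ID are assumptions for illustration.

    # Minimal sketch: make sure a Jenkins job exists for an application, then
    # trigger it. URL, credentials, and the job-naming convention are assumptions.
    import jenkins

    server = jenkins.Jenkins("http://jenkins.example.com:8080",
                             username="mopaas", password="secret")

    def trigger_pipeline(app_id):
        job_name = "ci-%s-quality-check" % app_id   # first stage of the pipeline
        if not server.job_exists(job_name):
            # In practice the platform would generate the real stage configuration;
            # EMPTY_CONFIG_XML just creates a placeholder job for this sketch.
            server.create_job(job_name, jenkins.EMPTY_CONFIG_XML)
        server.build_job(job_name)
        # Each stage reports start/success/failure back to the platform through a
        # webhook; on success the next stage of the pipeline is triggered.

    trigger_pipeline("demo-app")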

The implementation principle is as follows:

Continuous integration lets us build application software frequently and automatically, and the result needs to be published to the platform after each build. Frequent releases into the production environment increase risk, so to release continuously we need grayscale (canary) release and rollback mechanisms that keep the publishing process safe.

The Mopaas platform supports grayscale release and application rollback strategies to achieve a smooth transition between versions, keep the whole system stable, and shield users from the impact of frequent releases.
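One simple way to realize a grayscale release at the routing layer is to send only a configurable fraction of traffic to the new version. The sketch below illustrates that idea with made-up backends and weights; it is not Mopaas's actual mechanism.

    # Minimal sketch: route a configurable fraction of requests to the new
    # version during a grayscale release; the rest still hit the old version.
    # Backends and the 10% weight are illustrative.
    import random

    OLD_BACKENDS = ["10.0.0.11:30080", "10.0.0.12:30080"]   # version v1
    NEW_BACKENDS = ["10.0.0.13:30081"]                      # version v2 (canary)
    CANARY_WEIGHT = 0.10                                    # 10% of traffic to v2

    def pick_backend():
        pool = NEW_BACKENDS if random.random() < CANARY_WEIGHT else OLD_BACKENDS
        return random.choice(pool)

    # Rolling forward: raise CANARY_WEIGHT towards 1.0 as the new version proves
    # healthy. Rolling back: set it to 0.0 so the old version serves all traffic.
    print(pick_backend())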

The implementation principle is as follows:

The go-live process works as follows:

Elastic scaling and automated operations

Elastic Scaling

The Mopaas platform provides two ways to achieve elastic scaling: manual and automatic.

Users can set up timed or periodic policies according to their own needs to add or remove resources at the right time, saving labor and money while keeping the application running healthily and smoothly.

Manual scaling must be triggered by a person and supports both horizontal and vertical scaling. Horizontal scaling adjusts the number of application instances, i.e. the number of containers making up the application cluster, with the platform providing load balancing across them.

Vertical scaling adjusts the resource quota of a single container of the application, such as memory and CPU.

Automatic scaling currently supports only horizontal scaling. The platform can scale the application out or in based on resource usage such as CPU and memory, or on access metrics such as QPS. Once automatic scaling is enabled, the platform combines the user-configured policy with monitoring data to create and release instances automatically; the whole process requires no human involvement and maximizes resource utilization. Setting a minimum number of instances guarantees that the application remains available at all times.

The implementation of automatic elastic scaling is as follows:

The router component writes access logs to a file; a log agent captures the access log data in real time and sends it to a message queue. Each log entry includes the domain name, request time, method, URI, protocol, response status, downstream traffic, request source, browser information, the container that served the request, response time, and so on.

From this log data we can compute metrics such as external access volume, downstream traffic, success rate, and response performance for each application.
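A minimal sketch of turning such log entries into these metrics is shown below; the field names and the 10-second window are illustrative assumptions.

    # Minimal sketch: aggregate access-log entries (already parsed from the
    # message queue into dicts) into per-application metrics over a time window.
    # Field names and the 10-second window are illustrative.
    from collections import defaultdict

    WINDOW_SECONDS = 10

    def aggregate(entries):
        """entries: dicts with keys 'domain', 'status', 'bytes_out', 'response_ms'."""
        stats = defaultdict(lambda: {"requests": 0, "ok": 0,
                                     "bytes_out": 0, "resp_ms_total": 0.0})
        for e in entries:
            s = stats[e["domain"]]
            s["requests"] += 1
            s["ok"] += 1 if 200 <= e["status"] < 400 else 0
            s["bytes_out"] += e["bytes_out"]
            s["resp_ms_total"] += e["response_ms"]
        return {
            domain: {
                "qps": s["requests"] / WINDOW_SECONDS,
                "success_rate": s["ok"] / s["requests"],
                "downstream_bytes": s["bytes_out"],
                "avg_response_ms": s["resp_ms_total"] / s["requests"],
            }
            for domain, s in stats.items()
        }

    sample = [{"domain": "app-a.example.com", "status": 200,
               "bytes_out": 512, "response_ms": 35.0}]
    print(aggregate(sample))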

After Mopaas collects the access log data, it is counted, analyzed, merged, and cleaned up by database stored procedures that run on a schedule.

Mopaas calculates the average concurrency per second over the last 10 seconds, uses the user's pre-set elastic scaling rules to work out how many instances the current concurrency requires, and then issues instructions to the underlying container platforms, Cloud Foundry and Kubernetes, to carry out the scaling. The corresponding functions are shown below:
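The decision step described above can be sketched as follows; the rule values (requests per second per instance, minimum and maximum instance counts) are assumptions, not the platform's actual rule engine.

    # Minimal sketch: derive the desired instance count from the average
    # concurrency of the last 10 seconds. Rule values are assumptions.
    import math

    RULE = {
        "rps_per_instance": 50,   # each instance is expected to handle 50 req/s
        "min_instances": 2,       # keeps the application available at all times
        "max_instances": 20,
    }

    def desired_instances(avg_rps, rule=RULE):
        needed = math.ceil(avg_rps / rule["rps_per_instance"])
        return max(rule["min_instances"], min(rule["max_instances"], needed))

    current = 4
    target = desired_instances(avg_rps=310)   # -> 7 instances
    if target != current:
        # Here the platform would issue a scale instruction to Cloud Foundry
        # (update the app's instance count) or Kubernetes (patch the replica
        # count) instead of this print.
        print("scale from %d to %d instances" % (current, target))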

Automated operations

Automated operations mainly cover health checks and failure recovery.

Health checks include automated monitoring and detection across the platform; as soon as monitoring detects an overloaded or unresponsive component, the related events are triggered. If an application instance fails, it is restarted; if a physical machine fails, its instances are migrated to other hosts, which is equivalent to a redeployment.

By adding a health check mechanism on top of traditional monitoring, users can see at a glance how every node is running. Instances are restarted and migrated automatically, the number of alarms drops sharply, and less manual work is needed: alerts are raised only when automatic recovery fails. This reduces both human effort and cost.
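The sketch below shows the general shape of such a check-and-recover loop; the health endpoint, the restart call, and the thresholds are illustrative assumptions rather than the platform's actual API.

    # Minimal sketch of a health-check / auto-recovery loop. The health URL,
    # failure threshold, and restart_instance() call are illustrative assumptions.
    import time
    import urllib.request

    FAILURE_THRESHOLD = 3          # consecutive failures before recovery starts
    CHECK_INTERVAL_SECONDS = 10

    def is_healthy(url):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.status == 200
        except Exception:
            return False

    def restart_instance(instance_id):
        # Placeholder: the real platform would call the Cloud Foundry or
        # Kubernetes API to restart or reschedule the instance, and raise an
        # alert only if that recovery step fails.
        print("restarting instance", instance_id)

    def watch(instance_id, health_url):
        failures = 0
        while True:
            failures = 0 if is_healthy(health_url) else failures + 1
            if failures >= FAILURE_THRESHOLD:
                restart_instance(instance_id)
                failures = 0
            time.sleep(CHECK_INTERVAL_SECONDS)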

Monitoring, alerting, and log management

Monitoring data for containers running in Cloud Foundry is collected centrally by the Doppler component, and through the Firehose we can collect performance data for all applications, such as health status, CPU usage, memory usage, I/O, and disk usage. Containers running on Kubernetes are monitored by cAdvisor, whose data is aggregated by Heapster; Heapster stores the data in the time-series database InfluxDB, and the platform queries the database to display metrics and to raise alerts when pre-set thresholds are exceeded.
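A minimal sketch of such a threshold check against InfluxDB with the influxdb Python client follows; the database name, measurement, field, and threshold are assumptions and may not match the Heapster schema of a given deployment.

    # Minimal sketch: query recent memory usage from InfluxDB and flag containers
    # above a threshold. Database, measurement, and field names are assumptions.
    from influxdb import InfluxDBClient

    THRESHOLD_BYTES = 400 * 1024 * 1024   # alert above roughly 400 MiB

    client = InfluxDBClient(host="influxdb.example.com", port=8086,
                            username="monitor", password="secret",
                            database="k8s")

    query = ('SELECT mean("value") AS mem FROM "memory/usage" '
             'WHERE time > now() - 1m GROUP BY "container_name"')
    result = client.query(query)

    for (measurement, tags), points in result.items():
        for point in points:
            if point["mem"] and point["mem"] > THRESHOLD_BYTES:
                # The real platform would push this to its alerting channel.
                print("ALERT: %s memory %.0f bytes"
                      % (tags["container_name"], point["mem"]))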

During development and testing, applications log to standard output (stdout), and the platform's web console can show that output in real time. To persist log data, the platform provides a Log4j-based SDK that writes log output to a message queue; an application that needs persistent logs simply logs through this SDK. The logs land in the message queue, are stored by an ELK stack, and can then be searched through the Elasticsearch interface provided by the platform.
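The following is a minimal sketch of searching those persisted logs through Elasticsearch with the official Python client; the index pattern and field names are assumptions about how the ELK pipeline is configured.

    # Minimal sketch: search persisted application logs in Elasticsearch.
    # The index pattern and field names are assumptions about the ELK setup.
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://elasticsearch.example.com:9200"])

    resp = es.search(
        index="app-logs-*",
        body={
            "query": {
                "bool": {
                    "must": [
                        {"match": {"app_id": "demo-app"}},
                        {"match": {"message": "Exception"}},
                    ],
                    "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
                }
            },
            "sort": [{"@timestamp": {"order": "desc"}}],
            "size": 20,
        },
    )

    for hit in resp["hits"]["hits"]:
        print(hit["_source"].get("@timestamp"), hit["_source"].get("message"))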

Design principles for cloud-ready applications

Across the many PaaS projects we have implemented, the biggest problem has been migrating applications to the cloud platform. Some applications must keep their architecture, software versions, and so on exactly as they were before migration, so not every application can be moved to the PaaS platform unchanged. The platform's support for non-standardized runtime environments already covers migrating applications with differing environment and architecture requirements. But where possible, for example when an application is allowed to change a dependent environment or its architecture to migrate more smoothly, or when an application is newly developed, it should ideally follow some cloud design principles, as follows:

    1. Container and application instance addressing
      • Decouple containers and instances from specific IaaS resources and physical machines; locate them via domain names or configuration management

    2. Data persistence
      • Store persistent data in a database, NFS, or other shared storage
      • Send logs to a third-party service
      • Data internal to an instance does not need to be persisted and is not migrated when the instance is migrated

    3. State management
      • The platform does not manage or save application state, and applications should not rely on session stickiness in the proxy
      • Push state out to third-party services so the application supports horizontal scaling, dynamic load balancing, and failover (see the sketch after this list)

    4. Optimize MTTR
      • Make instance restart and rebuild as fast as possible
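As referenced in the state management item above, here is a minimal sketch of externalizing session state to Redis with the redis-py client so that any instance can serve any request; the key scheme, TTL, and host are illustrative assumptions.

    # Minimal sketch: keep session state in Redis instead of instance memory so
    # any instance can serve any request and instances can be restarted or
    # rescheduled freely. Key scheme, TTL, and host are illustrative.
    import json
    import uuid

    import redis

    r = redis.Redis(host="redis.example.com", port=6379, db=0)
    SESSION_TTL_SECONDS = 1800

    def create_session(user_id):
        session_id = uuid.uuid4().hex
        r.setex("session:%s" % session_id, SESSION_TTL_SECONDS,
                json.dumps({"user_id": user_id}))
        return session_id

    def load_session(session_id):
        raw = r.get("session:%s" % session_id)
        return json.loads(raw) if raw else None

    sid = create_session(user_id=42)
    print(load_session(sid))   # {'user_id': 42}, whichever instance handles it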


Q&A

Q: When scaling elastically, how do you distinguish between vertical and horizontal scaling?

A: Automatic scaling is horizontal scaling; it is driven by real-time analysis of concurrency, CPU, and memory data.

Q: So is all scaling horizontal? It is not clear how automated vertical scaling would be decided; what is the idea there?

A: At present all automatic scaling is horizontal. Vertical scaling can only be done manually, because after changing memory or CPU the instance sometimes needs to be rescheduled.

Q: Horizontal scaling just means adding instances, right? How do you decide when to scale in?

A: Scaling in works the same way: it depends on how many instances the concurrency of the last 10 seconds requires.

Q: Is scaling currently based on both the underlying monitoring data and business metrics?

A: The platform scales mainly on basic monitoring data. If business data is also a factor behind application performance bottlenecks, it is fairly simple to incorporate it into the scaling policy.

Q: How is the shared data volume solution designed? If a shared volume is created on Cinder or Ceph, does the file system need to be formatted first and pre-mounted on the node? When a node fails and the container is recreated, how is the shared volume mounted on the new node?

A: The shared volume is attached directly to the container rather than mounted on the host; GlusterFS, Ceph, NFS, and others are supported.

Q: As I understand it, grayscale release covers rolling upgrades and A/B testing. Which kind is implemented here, and how should the A/B testing part be understood?

A: Do you mean blue-green deployment? A blue-green release should update from the old version to the new version with no downtime.

Q: In Kubernetes, if domain A points to one application and domain B points to another, and both use ports 80 and 443, how is this handled?

A: Put a router in front that forwards requests to the right IP and port based on the domain name.
The above content is organized from the group sharing on the evening of August 23, 2016. The speaker, Shen, is director of product development at Mopaas, with 11 years of IT experience; he was among the earlier engineers in China to work on Cloud Foundry and other PaaS-related R&D. He is skilled in Java EE, Spring, Hibernate, Struts, JPA, Cloud Foundry, Docker, Kubernetes, RabbitMQ, Jenkins, and other technologies, and joined Mopaas in 2011, where he is responsible for product development and management. DockOne organizes technology shares every week; anyone interested can add WeChat ID liyingjiesz to join the group, and you are welcome to suggest topics you would like to hear or to share.