DockOne WeChat Share (106): Building LeEco Cloud's Kubernetes-Based PaaS Platform

"Editor's note" This share mainly introduces the evolution of the two-generation PAAs platform of the music vision Cloud, focusing on the architecture design and problems encountered by the second generation PAAs platform Leengine.

Background

In 2014, LeEco Cloud began trying out and promoting Docker, and our team developed the first-generation container cloud platform, Harbor (shared earlier at http://dockone.io/article/1091). (A reminder: this merely shares a name with Harbor, the open source registry server that VMware's China team built for enterprise users.)

The first-generation container cloud platform can be considered an open hosting platform. Developers could add virtual machines or physical machines, applied for within the company, into Harbor for managed operation. The platform basically covered automatic image building (CI), rapid application scale-out and scale-in, grayscale upgrades, resource permission management, multi-cluster host management, and other functions.

Because containers had only just emerged at the time, promoting them inside the company met some resistance. When evangelizing to business lines we first had to introduce Docker and the difference between containers and virtual machines, and only then the platform itself and how to use it, so both the promotion cost and the business lines' learning cost were relatively high. Most businesses on Harbor provided their own virtual machines, with physical machines in the minority. Still, Harbor's ease of use attracted many new businesses. By now Harbor is fully self-service: business lines add their own hosts and manage their own applications and containers, including upgrades, rollbacks, elastic scaling, and image building. It has been running stably for more than two years.

Shortcomings of the first-generation container cloud platform

    1. The network used the most basic NAT/port-mapping mode, which had performance problems. The business had to know which host port a container was mapped to, so the network was not transparent to the business, and alerting, operations, and load balancing were all awkward.
    2. Container distribution and scheduling were entirely self-developed, which was a large workload, and we never got around to automatic container migration.
    3. Global (multi-region) deployment of applications was not supported.
    4. Compute resources managed by Harbor had to be applied for by the business lines themselves, whether physical or virtual machines, so business lines still had to care about the underlying compute resources; compute could not be made fully transparent to them.
    5. To reduce users' learning cost for Dockerfiles, we wrapped the Dockerfile and only let users write shell scripts. Because the wrapping was unreasonable, images came out too large; Maven projects in particular, which need compilation, re-downloaded their dependencies on every docker build, leading to long builds, large images, and inflexible service startup inside the container.
    6. The monitoring and alerting mechanism was incomplete, with no container- or application-level monitoring and alerting.
    7. The image registry lacked Docker Hub-style user permissions.


As more and more companies started using Kubernetes, and more teams and business lines inside the company began accepting or proactively learning about Docker, we set out to address the first-generation platform's problems and the deployment needs of LeEco's existing services. At the end of 2015 our team planned to replace the previously self-written scheduler and try Kubernetes as the container scheduling engine. After comparing several network solutions (Calico, Flannel, etc.) and weighing LeEco's existing environment, we chose a bridged, layer-2 network solution for containers. Load balancing uses Nginx, and compute resources are all physical machines, completely transparent to the business. After more than half a year of development, the second-generation PaaS platform LeEngine went live in the US in October 2016, and in Beijing half a month later. LeEngine now iterates at a pace of one version per month and has shipped three versions so far.

LeEngine uses a new architecture and mainly targets stateless or RPC applications. It now carries nearly 100 important businesses, including LeEco Cloud Computing, LeEco Sports, Octopus TV, live streaming, LeTV.com, search, Cloud Album, and more. Customers generally report that once they master the LeEngine workflow, their efficiency from development to deployment, elastic scaling, and upgrades multiplies, greatly reducing operations cost. LeEngine has strong user stickiness and attracts many business lines to apply for trials on their own, without extra internal promotion effort.

Brief introduction

Kubernetes: Google's open source container orchestration tool. Throughout 2016, more and more companies put Kubernetes into production, and it offers the automatic container migration and high availability we urgently needed. Its architecture is not covered here; although the architecture is on the heavy side, we ultimately decided to use it, and we try to use only its Pod, ReplicationController, and Service capabilities.

Some concepts need explaining first:

User: product, development, test, and operations staff under the various product lines.

Region: a geographic concept; for example, Beijing and Los Angeles are two regions. Within one region we require intranet connectivity, a reliable network, and low latency. A region shares one image registry, image build system, load balancing system, and monitoring/alerting system; across regions, the globally unique smart DNS (SDNS) and GitLab code repository are shared.

Cell: We currently run Kubernetes 1.2.0, which in theory can manage 1,000 compute nodes; to be prudent, we cap a single Kubernetes cluster at about 600 compute nodes. The cell concept was introduced to expand the compute-node capacity of a single region; it roughly corresponds to a machine room. One cell is one Kubernetes cluster, and each region can host multiple cells. All cells share the region's image registry, image build system, load balancing system, and monitoring system. Containers in the same cell are given one or more network segments, with each segment in a separate VLAN. Compute nodes in one cell are not deployed across machine rooms.

LeEngine Registry: lightly modified from Docker Registry 2.0, with LeEco Cloud's Ceph as backend storage. Like Docker Hub, it adds permission and authentication mechanisms, so only users with the appropriate permissions can push and pull specific images. An image can also be made public, in which case any user can pull it.

Compute node: a physical machine; corresponds to the Kubernetes node concept.

Application: a set of containers providing the same business logic is defined as an application, which can be thought of as a microservice. Applications must be stateless web services or RPC-class services. An application can be deployed in multiple cells. As mentioned above, a cell can be regarded as a machine room: LeEngine deploys at least two cells per region, and we require an application to be deployed in at least two of them, so that even if one machine room has a network failure, the application's containers in the other room keep serving. Multiple container versions can be deployed under one application, which is what supports grayscale upgrades. For web-class applications serving online traffic, we enforce load balancing in front, and our service discovery system tells the load balancer which container IPs are currently live. At the Kubernetes level, we stipulate that one application corresponds to one Kubernetes namespace, so the application's database record carries a namespace field that must be globally unique; the application's versions correspond to multiple ReplicationControllers created under that namespace.
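To make the application-to-Kubernetes mapping concrete, here is a minimal sketch of what creating an application amounts to: one namespace plus one Service with a unique label selector. It uses today's client-go (the platform itself targeted Kubernetes 1.2, whose client API differed), and the application name, label key, and ports are hypothetical.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/leengine/kubeconfig") // path is hypothetical
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	app := "demo-app" // hypothetical application name; must be globally unique

	// One application corresponds to one namespace.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: app}}
	if _, err := cs.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// One Service per application, selecting the pods of every version (RC)
	// of the app via a unique label; service discovery later reads its endpoints.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: app, Namespace: app},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"leengine-app": app},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
		},
	}
	if _, err := cs.CoreV1().Services(app).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```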

The relationship between region, Cell, and Kubernetes:

Platform Architecture Design

Containers run directly on physical machines. All compute nodes are provided by us, so business lines don't need to care about them, and LeEngine can be exported in full as an enterprise solution. The platform architecture is as follows:

Business layer: the various business lines that run containers on LeEngine; the platform's end users within the company.

PaaS layer: the services LeEngine provides, mainly application elastic scaling, grayscale upgrades, automatic load balancer attachment, monitoring, alerting, rapid deployment, code building, and so on.

Host resource layer: the Docker physical machine clusters, including management of the IP pools.

When users access an application deployed on LeEngine, the smart DNS resolves to the corresponding Nginx load balancing cluster, and Nginx forwards the request to the containers. Databases, caches, and other stateful services are not part of the LeEngine system; because we use a layer-2 network, containers can connect directly to database or cache services provided by other teams in the company.

The figure below illustrates the support for multi-region, multi-Kubernetes-cluster deployment.

Single-cell deployment diagram for a single region:

We separate the compute nodes' management network from the container network, placing the container network in its own VLANs.

Members and Permission Management

LeEngine defines four resources: applications, images, image groups, and code builds. To support team collaboration, all four resources have member and permission management. Members and permissions are modeled after GitLab: roles are divided into Owner, Master, Developer, Reporter, and Guest, and different roles carry different permissions on the different resources. For example, only Owner and Master have permission to deploy new versions of an application, scale it elastically, and so on. If user A creates application A1, A defaults to A1's Owner, with all operation permissions on it, including deploying new versions, scaling, modifying, and deleting the application. Application A1 is invisible to user B at this point; to make it visible, A must add B as a member of A1 and assign a role. If B is assigned the Master role, B gains permissions on A1 such as deploying new versions and elastic scaling; otherwise not.

Under this permission design, different users see only the resources related to themselves in the LeEngine console. For example, on the applications page, a user sees the applications they created and those they participate in:

On the images page, users see the images they created and those they participate in:

The help documentation describes each role's permissions on the different resources:


User side and management side

LeEngine has a user-facing console and a boss console for operations managers. In the user console, users see the four kinds of resources they created or participate in. The boss console manages the resources of the entire LeEngine platform, including per-user resource quotas, special load balancing configuration, cell cluster resource usage, operation-frequency statistics, and so on.

The figure below shows operation-frequency statistics from the boss system of the LeEngine test environment:

Operation frequency covers the daily number of application deployments, code builds, image pushes, and scaling operations, which to some extent reflects how actively business lines use the LeEngine platform.

LeEngine-core

LeEngine-core is the API layer (implemented with Beego) through which LeEngine ultimately serves the outside; all four resources, including permission control, are governed by this layer. LeEngine provides only the most atomic API interfaces, so a business line with special needs can do its own secondary development on top of the existing API.

Container Network

Containers use a layer-2 network, so external services can connect directly to containers, and containers can connect directly to external services such as databases and caches. Containers can interconnect with one another and be reached from outside, either directly by container IP or through load balancing. Containers can also directly access virtual machines, physical machines, MySQL, and other component services outside the LeEngine system.

We wrote our own CNI plugin and a cnictl management tool, which supports adding multiple IP segments so that IP resources don't run out. IP segment information is stored in the etcd of the current Kubernetes cluster. For each cell, that is, each Kubernetes cluster, we add at least one IP segment, typically a /22 subnet of 1,024 addresses in its own VLAN to prevent broadcast storms; this requires planning the IP segments with the network department in advance. If a segment is used up, we use cnictl to add a new one.
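As an illustration only (the real cnictl interface and etcd key layout are internal; everything below is an assumption, and it uses the etcd v3 client), an "add segment" operation reduces to validating a CIDR and recording it in the cell's etcd:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"os"
	"strings"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cidr := os.Args[1] // e.g. "10.120.8.0/22": 1,024 addresses, one VLAN
	if _, _, err := net.ParseCIDR(cidr); err != nil {
		panic(err)
	}
	etcd, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer etcd.Close()

	// Hypothetical key layout; segments live in the cell's own etcd, and the
	// node-side allocator later carves 16-address sub-ranges out of them.
	key := "/leengine/ipam/segments/" + strings.ReplaceAll(cidr, "/", "-")
	if _, err := etcd.Put(context.TODO(), key, "available"); err != nil {
		panic(err)
	}
	fmt.Println("segment registered:", cidr)
}
```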

To further ensure network stability for business containers, all our compute nodes have four NICs: two gigabit and two 10-gigabit, each pair bonded. The gigabit bond serves as the management NIC, and the 10-gigabit bond carries the business containers. Each compute node creates an OVS bridge at delivery time with the 10-gigabit bond attached; the top-of-rack switches are stacked, and compute nodes are spread across different racks as much as possible.

After the kubelet on a compute node creates a pod's pause container, it calls our own network allocator (the CNI plugin). The allocator creates a veth pair, puts one end into the container's namespace and names it eth0, and attaches the other end to the OVS bridge. It then takes, from the large IP segment recorded in etcd, a small sub-segment of 16 consecutive IP addresses for this compute node, picks an idle IP from that sub-segment for the container, configures the container's IP and routing, decides per configuration whether to send a gratuitous ARP, and finally returns the relevant information to the kubelet in accordance with the CNI specification. When the compute node creates another pod, an idle IP is chosen from the same sub-segment; if no idle IP remains, another sub-segment is allocated to the compute node.

For now we cannot guarantee that a pod keeps its IP when it is deleted and recreated, so container IPs change after every upgrade operation, which is why our service discovery must integrate with load balancing.

This scheme still has some problems. For example, if a physical host suddenly goes down, or the Docker daemon dies, all containers on that host die; after the kubelet restarts, the IPs occupied by the dead containers are not released. Our current solution is periodic inspection with the cnictl tool we developed: its check command retrieves all allocated IPs and pod information from etcd, then calls the apiserver for all pod information, and the difference is the set of unreleased IPs. After the resulting alarm, we manually call cnictl's IP-release function to free them.
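The idea behind the check command can be sketched as follows; this is not the real cnictl code, and the etcd key prefix is an assumption. It diffs the IPs recorded in etcd against the IPs of live pods and prints the remainder as leak candidates:

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	etcd, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer etcd.Close()

	// Collect every IP the allocator has handed out (key layout assumed).
	resp, err := etcd.Get(context.TODO(), "/leengine/ipam/allocated/", clientv3.WithPrefix())
	if err != nil {
		panic(err)
	}
	allocated := map[string]bool{}
	for _, kv := range resp.Kvs {
		allocated[string(kv.Value)] = true
	}

	// Ask the apiserver for all pods and remove their IPs from the set.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/leengine/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		delete(allocated, p.Status.PodIP)
	}

	// Whatever is left was allocated but belongs to no live pod.
	for ip := range allocated {
		fmt.Println("possibly leaked IP:", ip)
	}
}
```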

Service discovery

We take full advantage of the Kubernetes Service concept. As mentioned earlier, one application corresponds to one namespace and one version to one RC. When a user creates an application through the API, the LeEngine core API layer (leengine-core) creates the namespace in the corresponding Kubernetes cluster by default, creates a Service under that namespace, and gives the Service a unique label attribute. When the user deploys a new version (RC), LeEngine adds that unique label to the RC, so the Service can discover the backend endpoints. We wrote a service discovery service in Go that watches the apiserver's API, automatically sorts out which application's IPs have changed, and then calls our load balancing API to dynamically change the Nginx backend upstream server IPs.
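A minimal sketch of this loop, assuming a hypothetical slb-core endpoint and payload (the real interface is internal): watch Endpoints across all namespaces and push changed IP lists to the load balancer. Since one application is one namespace, the namespace identifies the application.

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"log"
	"net/http"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	w, err := cs.CoreV1().Endpoints("").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for ev := range w.ResultChan() {
		ep, ok := ev.Object.(*corev1.Endpoints)
		if !ok {
			continue
		}
		var ips []string
		for _, ss := range ep.Subsets {
			for _, addr := range ss.Addresses { // only ready (probed) addresses
				ips = append(ips, addr.IP)
			}
		}
		// Hypothetical slb-core API: replace the app's upstream server IPs.
		body, _ := json.Marshal(map[string]interface{}{"app": ep.Namespace, "upstreams": ips})
		resp, err := http.Post("http://slb-core/api/upstreams", "application/json", bytes.NewReader(body))
		if err != nil {
			log.Println("slb update failed:", err)
			continue
		}
		resp.Body.Close()
	}
}
```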

Before we used Kubernetes's health-check capability, there was some probability that a container's IP was added to the load balancer before the service inside had fully started; any request arriving in that window failed. In the latest version we added health checks: when deploying a new version of an application, users specify their service's HTTP health-probe interface, and a container is added to the load balancer only after its probe succeeds. Deletion works the other way around: when an RC is scaled down, the containers to be removed are first taken out of the load balancer, and only then deleted.
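This maps naturally onto a Kubernetes HTTP readiness probe: a pod only appears among a Service's ready endpoint addresses once its probe succeeds, so the discovery loop above never pushes a cold container to Nginx. A sketch with current client-go types (the probe path, port, and timings are user-supplied or assumed):

```go
package leengine

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// readinessProbe turns the user-supplied health-check URL into an HTTP
// readiness probe attached to the version's pod template.
func readinessProbe(path string, port int) *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{Path: path, Port: intstr.FromInt(port)},
		},
		InitialDelaySeconds: 5, // give the service time to start
		PeriodSeconds:       3, // probe every 3s thereafter
	}
}
```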

Load Balancing

Instead of using kube-proxy for load balancing, we use Nginx clusters. In principle, we deploy Nginx across several machine rooms in the same region, so that a machine-room network failure cannot take down all of Nginx, and Nginx is horizontally scalable: when the load balancing pressure grows too large, we can quickly add Nginx physical machines. To prevent a single Nginx cluster from proxying too many domains, and to separate different business logic, such as public versus intranet load balancing, we support creating multiple Nginx load balancing clusters.

The figure below traces the path of a user request.

To notify the Nginx clusters to automatically update upstream server IPs, we designed an API layer in front of the Nginx clusters using the Beego framework, slb-core, which provides the API interface. The structure is as follows:

ETCD contains configuration information for each domain. The specific key structure is as follows:
/slb/{groupname or groupid}/domains/{domain_name}/

Each Nginx host runs an agent. Each agent watches the key of the group id it belongs to, such as /slb/2/, which covers configuration changes for every domain under that Nginx cluster. The agent writes the changed domain configuration into the Nginx configuration directory and determines whether the change is to the upstream server IPs or to something else. If it is any other configuration change, such as a location change or an added header, it reloads Nginx; if only the upstream server IPs changed, it calls Nginx's interface for dynamically changing upstream IPs instead.
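A compressed sketch of that agent, with the group id, key layout, and dynamic-upstream endpoint all hypothetical (and using the etcd v3 client, which postdates the original stack):

```go
package main

import (
	"context"
	"log"
	"os/exec"
	"strings"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	etcd, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer etcd.Close()

	// Watch only this Nginx cluster's prefix, e.g. group id 2.
	for wresp := range etcd.Watch(context.Background(), "/slb/2/domains/", clientv3.WithPrefix()) {
		for _, ev := range wresp.Events {
			if strings.HasSuffix(string(ev.Kv.Key), "/upstream") {
				// Only upstream server IPs changed: call the dynamic-upstream
				// interface instead of reloading (endpoint is an assumption).
				// http.Post("http://127.0.0.1:8081/upstream_update", ...)
				continue
			}
			// location/header/other changes: write config, test, then reload.
			if err := exec.Command("nginx", "-t").Run(); err != nil {
				log.Println("bad config, keeping last good one:", err)
				continue
			}
			exec.Command("nginx", "-s", "reload").Run()
		}
	}
}
```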

On top of this, slb-core provides the interface for dynamically changing a domain's backend upstream IPs, which is what service discovery calls.

If a number of interrelated, mutually calling business domains are proxied by the same Nginx cluster, a configuration change on one of them that requires a reload can make the reload take too long.

With this architecture, backend Nginx hosts can be scaled out quickly and added to the corresponding cluster. Because we added an Nginx config-test step, a syntax error in a user's domain configuration does not affect load balancing; the last correct configuration is kept. This has become a common architecture inside LeEco, carrying load balancing for thousands of domains.

When creating an application, you fill in the load balancer domain name, the Nginx cluster, and similar information:

After creation succeeds, the application's load balancing CNAME and VIP can be seen in the application's load balancing tab; once the business has tested successfully, the domain name is configured in the DNS system to take effect.

For example:

Users are allowed to view the load balancer's Nginx configuration:

At this stage, Nginx load balancing clusters are not dedicated to single applications; mostly, one Nginx cluster proxies many groups of domains. Therefore, to prevent business users from maliciously or accidentally changing configuration and breaking a whole Nginx cluster, users currently have no permission to change the Nginx configuration; special configuration changes go through an administrator. Later, we may assign each application its own set of Nginx containers as its load balancing proxy, at which point users will get full permission over the Nginx configuration.

LeEngine Registry

LeEngine Registry is our in-house image registry, modified from Docker Registry 2.0 (Docker Distribution) to support LeEco Cloud's Ceph backend storage, and wired into an auth-server and the LeEngine permission mechanism, which define users and permission partitions. Images can be private or public: public images can be pulled by any user, while private images can have team members added, with members of different roles holding different push, pull, and delete permissions. With this design, LeEngine Registry can also serve as an image registry completely independent of LeEngine: business lines that don't want LeEngine's code building can build images their own way and push them to LeEngine Registry.

Operations on an image's tags:

Members:

Activity:

Each of LeEngine's four resources, applications, images, image groups, and code builds, has an activity column on its page that records the operation history, making later problem tracing easier.

Code Building

To quickly build code into an image and push it to the image registry, LeEngine maintains a dedicated cluster of Docker physical machines to run build tasks, with one agent installed per build machine.

LeEngine's code-building framework is as follows:

When the agent on a build machine starts, it automatically registers its information in etcd; if the agent stops unexpectedly or the host goes down, the registration in etcd expires, indicating that the build machine is offline. The agent also watches its own task key to receive build tasks issued by leengine-core.

When a build request hits leengine-core's API, leengine-core picks a suitable build machine from etcd (usually by hash, so that the same code repository builds on the same physical machine as much as possible, speeding up builds), then writes the build task into that machine's task key; the corresponding agent sees the key change and performs the code clone, compile, build, and push operations.
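The agent side of this protocol might look like the sketch below. Key names are assumptions; the original used etcd 2.x TTL keys, while this sketch uses the v3 client's leases for the same expire-on-death behavior.

```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	etcd, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer etcd.Close()

	host := "builder-01" // hypothetical build machine name

	// Register with a 10s lease: if the agent or host dies, the key expires
	// and the build machine is considered offline.
	lease, err := etcd.Grant(context.TODO(), 10)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := etcd.Put(context.TODO(), "/leengine/builders/"+host, "alive",
		clientv3.WithLease(lease.ID)); err != nil {
		log.Fatal(err)
	}
	ka, err := etcd.KeepAlive(context.TODO(), lease.ID)
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		for range ka { // drain keepalive acks
		}
	}()

	// Watch this machine's task key for build jobs written by leengine-core.
	for wresp := range etcd.Watch(context.Background(), "/leengine/tasks/"+host) {
		for _, ev := range wresp.Events {
			log.Println("new build task:", string(ev.Kv.Value))
			// clone -> compile in a container -> docker build -> docker push
		}
	}
}
```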

etcd plays a very important role in LeEngine: it is the message bus over which the modules communicate.

Because Maven projects need to be compiled, the first-generation Harbor system placed the compilation step inside docker build; Maven compilation needs many dependencies, which were re-downloaded on every build, so builds took long and the resulting images were large. Now, before building the image, we start a container and compile the code inside it, mapping the Maven dependency directory from the host, so that successive builds of the same code share the same dependencies and don't re-download them, speeding up compilation. At the same time, different code repositories use different dependency directories, keeping the compile environment absolutely clean; in the end, only the compiled binary goes into the image, which to some extent also keeps the source code from leaking.
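The compile step can be pictured as below: run Maven inside a throwaway container, bind-mounting a per-repository .m2 cache from the host. All paths and the image name are assumptions, not the platform's actual layout.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	repo := "demo-service" // hypothetical repository name
	cache := "/data/mvn-cache/" + repo
	if err := os.MkdirAll(cache, 0o755); err != nil {
		log.Fatal(err)
	}
	cmd := exec.Command("docker", "run", "--rm",
		"-v", cache+":/root/.m2", // per-repo dependency cache, reused across builds
		"-v", "/data/src/"+repo+":/src", // the checked-out code
		"-w", "/src",
		"maven:3-jdk-8", // example compile environment chosen by the user
		"mvn", "-B", "package", "-DskipTests")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	// Only the compiled artifact is copied into the runtime image afterwards.
}
```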

Code building supports only Git at this stage, not SVN; businesses whose code lives in SVN can use other tools to build images and push them to LeEngine Registry. The registry's image and repository authorization keeps those images secure.

Code building supports both manual and automatic builds; with the appropriate web hooks set up on GitLab, users simply commit code and LeEngine's code building is triggered automatically.

The web page for creating a code build:

With LeEngine code building, users must write their own Dockerfile in their code. The Dockerfile's base image can be a public image from LeEngine Registry or a public image from Docker Hub.

If the business code needs compilation, the compile environment can be specified, and the compile script must be included in the code.

The manual build page:

If no tag name is given, LeEngine uses the commit id of the current branch as the image tag. If the Mvn cache option is selected, the build reuses the Maven dependencies downloaded by the previous build to speed things up.

Build results:

Build process:

The build log records the duration and the concrete execution of each key step.

Application Management

Applications support cross-machine-room deployment, multi-version grayscale upgrades, elastic scaling, and other functions. We stipulate that one application corresponds to one image.

When deploying a version (one version is one RC), you specify the image tag, number of containers, CPU, memory, environment variables, health check, and other parameters; currently, health checks support only the HTTP interface mode:

Because most upgrades just change the image tag, each time a new version is deployed, LeEngine pre-fills the popup with the previous version's container count, CPU, memory, environment variables, health check, and other parameters, so the user only needs to pick the new image tag. The final grayscale rollout then amounts to: create the new version, wait until its containers have fully started and the service has been automatically added to the load balancer, then delete the old version.
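In Kubernetes terms, the switch is roughly the sketch below (names and replica counts hypothetical): create the new version's RC from the old spec with a new image tag, wait for its pods to pass their health checks (at which point service discovery has added them to the load balancer), then delete the old RC.

```go
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitReady(cs *kubernetes.Clientset, ns, rc string, want int32) {
	for {
		cur, err := cs.CoreV1().ReplicationControllers(ns).Get(context.TODO(), rc, metav1.GetOptions{})
		if err == nil && cur.Status.ReadyReplicas >= want {
			return // readiness probes passed; containers are in the LB
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/leengine/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns := "demo-app" // one application == one namespace
	// Assume demo-app-v2 was created from demo-app-v1's spec with only the
	// image tag and version label changed.
	waitReady(cs, ns, "demo-app-v2", 4)
	// The new version is serving; retire the old one.
	if err := cs.CoreV1().ReplicationControllers(ns).Delete(context.TODO(),
		"demo-app-v1", metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
}
```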

Viewing application events:

Application events mainly collect the application's event information from the Kubernetes cluster.

Monitoring and Alerting

We divide monitoring and alerting into two major categories: the PaaS platform itself, and business applications.

The PaaS platform category monitors and alerts on the infrastructure and LeEngine's own service components (host CPU, memory, IO, disk space, each LeEngine service process, and so on), using the company's unified monitoring and alerting mechanism.

The business application category covers the business lines running on LeEngine, which LeEngine must monitor and alert on itself, notifying each application's owner when an alarm triggers. We use Heapster to collect container monitoring data and Kubernetes events. One Heapster is deployed per cell cluster, with the monitoring data stored in InfluxDB. Since one application corresponds globally to one Kubernetes namespace, we can cleanly aggregate monitoring data per application and per container.
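Because of that one-application-one-namespace rule, per-application aggregation is a single query against Heapster's InfluxDB sink. A sketch (the measurement and tag names follow Heapster's InfluxDB schema as we understand it, the database name k8s is Heapster's default, and the application name is hypothetical):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Sum the network receive rate across all containers of the application
	// "demo-app" over the last hour, bucketed per minute.
	q := `SELECT sum("value") FROM "network/rx_rate" ` +
		`WHERE "namespace_name" = 'demo-app' AND time > now() - 1h GROUP BY time(1m)`
	resp, err := http.Get("http://influxdb:8086/query?db=k8s&q=" + url.QueryEscape(q))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```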

For example, network traffic monitoring for an application:

Container IP, run time and status:

Monitoring of an individual container under an application:

Heapster cannot yet collect containers' disk IO data; we will add disk IO collection later and enrich the other monitoring data (request counts and the like). For alerting, we plan to use Kapacitor for user self-service alerts, letting users set their own thresholds for an application's CPU, memory, network, IO, container restarts, deletions, and so on. When an alarm triggers, the company's unified alerting platform is called (phone, mail, and SMS) to notify the relevant people; by default, the alert recipients are the members holding the Owner and Master roles on the application. The basic research for this feature is done, and it is scheduled to launch at the end of March.

Log Collection

Given the company's specific situation, container log collection is not within LeEngine's scope; each business line collects its own logs and feeds them into the company's unified log system, or builds its own log storage and retrieval system. We will consider unified log collection later.

Summary

One-click Solution

LeEngine provides code building, an image registry, and application management, essentially a one-click solution for developers:

Business line developers only need to commit code; a specific image version is generated automatically according to the packaging and build rules, and test and operations staff just take the corresponding image version for testing and online upgrades, greatly simplifying the DevOps workload.

Reduce operation and maintenance costs

We recommend that programs run in the foreground inside the container. Using Kubernetes's capabilities, when a program dies, the kubelet automatically pulls it back up, keeping the number of container instances online in real time. If debugging is needed, business or operations staff can SSH directly into a container to troubleshoot. Thanks to Kubernetes's powerful automatic container migration, even a backend physical host outage or network problem causes no serious trouble, unless the backend physical compute resources are insufficient; this greatly reduces how often operations staff get called up in the middle of the night to handle problems.

Compute resources are fully transparent to the business

The underlying computing resources are entirely up to us, and the lines of business don't have to worry about resource issues.

Productization

From the beginning of its design, LeEngine has minimized its dependence on other external resources and services, so the whole of LeEngine can be exported externally as a complete solution.

Existing problems

No product design is without shortcomings; after LeEngine went live, we also received plenty of feedback and questions.

    1. By traditional operations habits, operations staff often troubleshoot by IP, but under LeEngine the container IPs change with every application upgrade, which affects traditional operations to some degree. Our current solution is that if operations must troubleshoot by IP, they log into the LeEngine console, view the application's current container IP list, and work through it one by one. Also, some businesses have IP-based access restrictions; in those cases, we can only have them allow the whole IP segment.

    2. In Kubernetes 1.2, container swap usage is not controlled; if a business application has a bug and leaks a lot of memory, it will often fill up the swap of the physical machine the container runs on and affect the other containers on that machine. Our current solution is only to alert, so users can fix the problem as soon as possible.

    3. Since both the Dockerfile and the compile script are required to live in the user's Git code, non-standard Git usage, such as merging between different branches, can change the Dockerfile and compile script, so the built image differs from what was expected and fails online. For this class of problem, we can only ask business lines to standardize their Git usage; LeEngine's code building cannot control things at that granularity.

    4. At this stage, we cannot yet do automatic elastic scaling (letting an application set minimum and maximum container counts, and automatically sensing that containers need to scale out or in based on CPU, memory, IO, request volume, or other metrics).


Outlook

Docker and Kubernetes are both still developing rapidly. We believe Kubernetes will gain more powerful features in the future, and we will also consider running stateful services on Kubernetes. Discussion is welcome.

Q&A

Q: Your IP management must have required a lot of development work, right? Can you explain it in detail?

A: It really did; one of the big chunks of work was obtaining idle IPs.
Q: Does your grayscale release use Kubernetes's own rolling-update feature, or did you build it yourself?

A: Grayscale releases don't use Kubernetes's rolling update capability; instead, we switch between different RCs.
Q: For image management across multiple regions, how do you synchronize and update images?

A: We don't need image synchronization. As mentioned in the share, each region has its own image registry.
Q: Is your layer-2 network implemented with an open source solution, or developed in-house on top of OVS?

A: It is our own implementation; we call into OVS, mainly using its bridge capability.
Q: Cross-machine-room application deployment, as I understand it, means deploying across Kubernetes clusters; how is the scheduling done in Kubernetes?

A: The application concept sits above the Kubernetes clusters; an application can be deployed into multiple Kubernetes clusters.
Q: How do you handle calls between your company's internal services? Container to container, or container -> Nginx -> container (equivalent to going external)?

A: 1. Container -> container; 2. External service -> Nginx -> container; 3. Container -> Nginx -> container; 4. External service -> container. All of these exist; it mainly depends on the business scenario.
Q: What CI tool does the build cluster use, or is it self-developed? What role does etcd play?

A: Self-developed. etcd acts like a message queue: it records which build machines exist in the current cluster; when a build request arrives, the control layer picks a build machine and writes a command to that machine's key in etcd, and the corresponding build agent, watching its own key, sees the change and executes the build task.
The above content is organized from the group share on the evening of February 14, 2017. The speaker, Zhang Jie, is head of the LeEngine PaaS platform at LeEco Cloud Computing Co., Ltd. (WeChat: longxingtianxia0619, Tel: 18310797319). He graduated from Northeastern University and focuses on PaaS platform technology in the cloud computing field; since 2014 he has worked on the architecture design, development, and rollout of an enterprise PaaS platform based on Docker and Kubernetes. DockOne organizes a technology share every week; interested students can add WeChat: liyingjiesz to join the group, and leave a message with topics you want to hear about or share.