Cloud Foundry Technology Panorama and Core Component Analysis

Tags: Ruby on Rails

Original link: http://www.programmer.com.cn/14472/

After more than a year of development, Cloud Foundry's architecture design and implementation have seen many improvements and optimizations. To help readers understand and dig deeper into Cloud Foundry, the first open-source PaaS platform, Programmer magazine, together with the Cloud Foundry community, has launched the "Deep Cloud Foundry" column. It analyzes Cloud Foundry from the angles of architecture, core module functions, and source code, and will use typical cases from various industries to explain how Cloud Foundry performs in concrete application scenarios.

Architecture Design and Core Components

The overall architecture of Cloud Foundry is shown in Figure 1.

Figure 1 Cloud Foundry architecture

After more than a year of development, Cloud Foundry has gained many new components, but the core components have not changed; the additions refine and specialize the original architecture. For example, the Stager component addresses the fact that staging (packaging) an application involves a large number of files and takes a long time: it runs as a standalone process so that staging happens asynchronously and does not block the Cloud Controller, the core component.

The Cloud Foundry core components are described below.

Router. As the name implies, the Router component routes all incoming requests in Cloud Foundry. Requests entering the Router fall into two main types.

    • The first type comes from a VMC client or STS and is issued by Cloud Foundry users; these are management requests. They are routed to the Cloud Controller component for processing.
    • The second type is requests that access deployed applications. These are routed to where the applications run, that is, to the DEA component. In short, every request entering the Cloud Foundry system passes through the Router component. The Router is scalable: a given request can be handled by any of multiple Routers. How to load-balance across Routers, however, is outside the scope of the Cloud Foundry implementation; Cloud Foundry only guarantees that every Router can handle any request. Administrators can balance the load through DNS, deploy dedicated hardware, or simply put an Nginx load balancer in front.

In the first version, the Router's work was done entirely by router.rb, and every request had to be processed and forwarded by Ruby code. This design is simple and straightforward, but it easily becomes a performance bottleneck. The new version makes the following improvements, shown in Figure 2 (the first version on the left, the new version on the right).

Figure 2 Router workflow (old version vs. new version)

    • Using Nginx's Lua extension, the URL lookup and statistics logic is implemented in Lua.
    • If the Lua code does not know which DEA the current URL should be routed to, it sends a query to router_uls_server.rb (the "Upstream Locator Svc" in Figure 2).
    • router_uls_server.rb is a simple Sinatra application that stores the DEA ip:port pairs corresponding to every URL. In addition, it manages the session data associated with requests.

In this way, once the Lua layer has looked up and cached the routing information, the bulk of business requests are forwarded directly by Nginx and no longer pass through router.rb, which greatly improves performance and stability.
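
The Upstream Locator Svc can be pictured as a small Sinatra application like the following sketch; the route, lookup table, and host parameter are assumptions for illustration and not the actual router_uls_server.rb code:

    # upstream_locator_sketch.rb -- illustrative only, not the real router_uls_server.rb.
    # It keeps an in-memory map from application URL to DEA ip:port pairs and
    # answers lookup queries from the Nginx/Lua layer.
    require 'sinatra'
    require 'json'

    ROUTES = { 'myapp.cloudfoundry.com' => ['10.0.0.5:31201', '10.0.0.6:31544'] }

    get '/lookup' do
      host = params['host']
      backends = ROUTES[host]
      halt 404, 'unknown route' unless backends
      content_type :json
      # Pick one backend; the real Router also handles sticky sessions here.
      { host: host, backend: backends.sample }.to_json
    end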

There is one difficulty in the Router design. HTTP requests can be context-sensitive, so how do you keep that context intact? Simply put, how do you make sure that a context-sensitive request reaches the same DEA process every time? Cloud Foundry supports sessions: when the Router sees a user request carrying cookie information, it hides an application instance ID in the cookie. When a new request arrives, the Router parses the cookie to find the instance that served the previous request and forwards the new request to the same DEA. Like the URL lookup described above, this mapping lives first in the Upstream Locator Svc and, once the Lua layer knows it, is cached inside Nginx for efficiency.
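
The idea can be sketched as follows, assuming a hypothetical cookie name and in-memory tables; the real Router encodes and stores this information differently:

    # sticky_route_sketch.rb -- illustrative only; the cookie name and tables are
    # hypothetical, not the Router's actual encoding.
    INSTANCES = { 'instance-42' => '10.0.0.5:31201' }      # instance id -> DEA ip:port
    BACKENDS  = ['10.0.0.5:31201', '10.0.0.6:31544']       # all instances of the app

    # Given the raw Cookie header of a request, pick the backend that should serve it.
    def pick_backend(cookie_header)
      pairs = cookie_header.to_s.split(/;\s*/).map { |kv| kv.split('=', 2) }.to_h
      instance_id = pairs['vcap_instance_id']               # hypothetical cookie name
      INSTANCES.fetch(instance_id) { BACKENDS.sample }      # stick if known, else any
    end

    puts pick_backend('vcap_instance_id=instance-42')       # => 10.0.0.5:31201
    puts pick_backend(nil)                                   # no cookie: any instance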

DEA (Droplet Execution Agent). The first thing to explain is what a droplet is. In Cloud Foundry, a droplet is the submitted source code plus the runtime environment Cloud Foundry configures for it (for a Java web application, for example, a Tomcat), plus some control scripts such as start/stop, all packaged into a tar file. Staging an application is the process of producing this droplet and storing it. Cloud Foundry keeps the droplet until the application is started, at which point a server running the DEA module fetches a copy of the droplet and runs it. Therefore, if you scale an application to 10 instances, the droplet is copied 10 times, one copy for each of 10 DEA servers.
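
As a rough picture of what staging produces (not the actual staging code), a droplet can be built by rolling an already prepared directory into a tar file; the paths and layout below are assumptions:

    # make_droplet_sketch.rb -- sketch of packaging a droplet; paths and layout
    # are illustrative assumptions, not the real stager.
    require 'fileutils'

    staging_dir = '/tmp/staged/myapp'     # assumed to contain app/, the runtime, startup, stop
    droplet     = '/tmp/droplets/myapp.tgz'

    FileUtils.mkdir_p(File.dirname(droplet))
    # Everything the instance needs to run is rolled into one tar file.
    system('tar', '-czf', droplet, '-C', staging_dir, '.') or abort 'tar failed'
    puts "droplet ready: #{droplet} (#{File.size(droplet)} bytes)"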

Figure 3 shows the architecture of the DEA module (the first version on the left, the new version on the right).

Figure 3 DEA module architecture (old version vs. new version)

When Cloud Foundry was first launched, user-deployed applications ran unconstrained on the internal network: they could saturate the CPU, exhaust memory, and fill up the disk. To solve this, Cloud Foundry developed Warden and now runs every droplet inside a Warden container. The container provides an isolated environment in which the droplet gets only limited CPU, memory, disk, and network access.

On Linux, Warden is implemented by partitioning kernel resources into namespaces, with cgroups as the underlying control mechanism. Compared with virtual machines, this design performs better and starts faster, while still providing adequate isolation.
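
To illustrate only the underlying mechanism (this is not Warden code), a cgroup v1 memory limit boils down to writing to the cgroup filesystem; the group path below is hypothetical and the snippet requires root:

    # cgroup_sketch.rb -- illustrates the cgroup v1 mechanism Warden builds on;
    # not Warden code, and the group name is made up. Requires root.
    require 'fileutils'

    group = '/sys/fs/cgroup/memory/instance-1'   # hypothetical cgroup for one instance
    FileUtils.mkdir_p(group)

    # Cap the instance at 256 MB of memory ...
    File.write(File.join(group, 'memory.limit_in_bytes'), (256 * 1024 * 1024).to_s)

    # ... and place the current process (and its children) into the group.
    File.write(File.join(group, 'cgroup.procs'), Process.pid.to_s)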

The basic operating principle of the DEA has not changed radically. The Cloud Controller module sends application management requests such as start and stop to the DEA; dea.rb receives these requests and downloads the appropriate droplet from the blobstore. As described above, a droplet is a tar package containing the runtime environment and control scripts, so the DEA only needs to unpack it and execute the start script inside to get the application running and reachable. In other words, a port on that server is now listening, and as soon as a request arrives on that port the application can receive it and return a correct response.
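
The shape of that flow can be sketched as follows; the paths, field names, and environment handling are simplified assumptions rather than the real dea.rb:

    # dea_start_handler_sketch.rb -- shape of handling a start request; paths and
    # message fields are illustrative assumptions.
    require 'fileutils'

    def handle_start(request)
      droplet  = request['droplet_path']    # assumed already fetched from the blobstore
      instance = "/var/vcap/apps/#{request['app_id']}-#{request['instance_index']}"
      FileUtils.mkdir_p(instance)

      # Unpack the droplet: app code, runtime, and start/stop scripts.
      system('tar', '-xzf', droplet, '-C', instance) or raise 'failed to unpack droplet'

      port = request['port']                # port the new instance will listen on
      # Run the start script the stager put inside the droplet; the script reads
      # its port from the environment (VCAP_APP_PORT in Cloud Foundry v1).
      pid = spawn({ 'VCAP_APP_PORT' => port.to_s }, './startup', chdir: instance)
      Process.detach(pid)
      { 'app_id' => request['app_id'], 'port' => port, 'pid' => pid }
    end

    # Example (assuming the droplet has already been downloaded):
    # handle_start('app_id' => 'myapp', 'instance_index' => 0,
    #              'droplet_path' => '/tmp/droplets/myapp.tgz', 'port' => 31201)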

After that, dea.rb does some follow-up work.

    • It announces the new instance to the Router module (sketched after this list). Previously, all requests entering Cloud Foundry were processed and forwarded by the Router, including user requests for deployed applications; once an application instance is running, the Router must be told, so that it can forward the appropriate requests to it according to its load-balancing rules and the instance starts serving traffic.
    • It does some bookkeeping, for example telling the Cloud Controller that the user has deployed another application, for quota control and similar purposes.
    • It reports runtime information to the Health Manager module, so that the state of the application's instances is known in real time.
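
A minimal sketch of the first item, using the ruby-nats client, might look like the following; the router.register subject and payload fields follow Cloud Foundry v1 conventions but should be treated as assumptions, since they differ between versions:

    # router_register_sketch.rb -- announcing a new instance to the Router over NATS;
    # subject name and payload fields are assumptions based on Cloud Foundry v1.
    require 'nats/client'
    require 'json'

    NATS.start(uri: 'nats://127.0.0.1:4222') do
      registration = {
        host: '10.0.0.5',                    # the DEA server's address
        port: 31201,                         # port the new instance listens on
        uris: ['myapp.cloudfoundry.com'],    # URLs the Router should map to it
        app:  'myapp'
      }
      NATS.publish('router.register', registration.to_json) { NATS.stop }
    end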

In addition, the DEA handles part of the droplet query work. For example, if a user asks the Cloud Controller for an application's log, the DEA retrieves the log from the droplet and returns it.

Cloud Controller. The management module of Cloud Foundry. Simply put, it is the server side that VMC and STS talk to: it receives their instructions, dispatches messages to the other modules, and manages the running of the whole cloud. It is, in effect, the brain of Cloud Foundry.

Take deploying an application to Cloud Foundry as an example. After you enter the push command, VMC starts working. After a round of user authentication, checking whether the number of deployed applications exceeds the quota, and asking a series of questions about the application, it sends four requests (sketched after the list).

    • Send a POST to "apps" to create the application;
    • Send a PUT to "apps/:name/application" to upload the application;
    • Send a GET to "apps/:name/" to fetch the application's status and check whether it has started;
    • If it has not started, send a PUT to "apps/:name/" to start it.
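
A rough sketch of those four calls with Ruby's Net::HTTP is shown below; the host, token header, and payloads are illustrative assumptions, not the exact VMC wire format:

    # push_sketch.rb -- the four calls above in rough form; host, auth header and
    # payloads are illustrative assumptions, not the exact VMC wire format.
    require 'net/http'
    require 'json'
    require 'uri'

    api   = URI('http://api.cloudfoundry.example')   # hypothetical Cloud Controller endpoint
    token = 'bearer-token-from-login'                 # obtained during authentication

    http = Net::HTTP.new(api.host, api.port)
    auth = { 'AUTHORIZATION' => token, 'Content-Type' => 'application/json' }

    # 1. POST /apps -- create the application record.
    http.post('/apps', { name: 'myapp', instances: 1, memory: 256 }.to_json, auth)

    # 2. PUT /apps/myapp/application -- upload the application bits (packaging omitted).
    http.put('/apps/myapp/application', File.binread('myapp.zip'),
             auth.merge('Content-Type' => 'application/octet-stream'))

    # 3. GET /apps/myapp/ -- check whether the application has started.
    state = JSON.parse(http.get('/apps/myapp/', auth).body)['state']

    # 4. PUT /apps/myapp/ -- if not, ask the Cloud Controller to start it.
    http.put('/apps/myapp/', { state: 'STARTED' }.to_json, auth) if state != 'STARTED'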

The first version of the Cloud Controller was built on Ruby on Rails; the new Cloud Controller has been rewritten with Sinatra, and parts of its work have been split out into separate components, making it lighter. Another important improvement: in the first version droplets were shared over NFS, which caused security, performance, and other problems; the new version stores droplets in a purpose-built blobstore.

As Cloud Foundry matures, the access-control features have gradually been completed in the new version. On top of the original user model, the concepts of organizations and user spaces have been added, refining the management model. Authentication of the user model is implemented by the UAA module; in an enterprise environment, a private cloud built from the Cloud Foundry open-source code can integrate with the enterprise's existing authentication systems, such as LDAP or CAS. Authorization is implemented by the ACM module. Figure 4 shows the process of a user accessing a Cloud Controller API.

Figure 4 The process of a user accessing a Cloud Controller API

Health Manager. What it does is not complicated: it collects runtime information from each DEA and then performs statistical analysis, reporting, alerting, and so on.

Services. Services can be regarded as the third tier of a PaaS. Cloud Foundry turns the service module into a standalone, pluggable component, making it easy for third parties to integrate their own services into Cloud Foundry. Two related sub-projects on GitHub are worth noting.

    • vcap-services-base: as the name implies, it contains the framework and core class library of Cloud Foundry services. If you develop a custom service, you need the classes in this project.
    • vcap-services: the services currently supported by Cloud Foundry, including those contributed officially and by most third parties. The project's top-level directories are named after the services, so you can browse whichever one interests you.

This shows that the service module makes it very convenient for third parties to provide custom services. Architecturally, the Cloud Foundry services code uses the template method design pattern: you implement your own service by overriding the hook methods, and where no special logic is needed you keep the default behavior.
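
The pattern itself can be illustrated with a short, self-contained sketch; the class and method names below are made up for illustration and are not the actual vcap-services-base API:

    # service_template_sketch.rb -- illustrates the template method pattern described
    # above; class and method names are invented, not the vcap-services-base API.
    class BaseServiceNode
      # Template method: a fixed skeleton shared by every service.
      def provision(plan)
        instance = create_instance(plan)        # hook: must be overridden
        instance = bind_credentials(instance)   # hook: may be overridden
        announce(instance)                      # default behaviour is usually enough
        instance
      end

      def create_instance(plan)
        raise NotImplementedError, 'each service creates its own instances'
      end

      def bind_credentials(instance)
        instance.merge(user: 'default', password: 'default')
      end

      def announce(instance)
        puts "provisioned #{instance[:name]}"
      end
    end

    class PostgresNode < BaseServiceNode
      def create_instance(plan)
        { name: "pg-#{rand(1000)}", plan: plan }     # pretend a database was created
      end

      def bind_credentials(instance)
        instance.merge(uri: "postgres://user:pass@10.0.0.7:5432/#{instance[:name]}")
      end
    end

    PostgresNode.new.provision('free')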

In reality, for various reasons some existing services are difficult to migrate to the cloud, or their owners are unwilling to do so. For this, Cloud Foundry introduces the Service Broker module.

Service Broker lets applications deployed on Cloud Foundry access local services. It is used as follows.

    • Prepare the service to be accessed. For PostgreSQL, for example, configure the database and the firewall so that it can be reached through a URI of the form postgres://xyzhr:<password>@<host>:5432/xyz_hr_db.
    • Register that URI with the Service Broker.

Services exposed through the Service Broker are used no differently from Cloud Foundry's built-in system services: the URI of the service being accessed is passed to the application through environment variables, and the application reaches the exposed service directly through that URI, without going through the Service Broker. This procedure, shown in Figure 5, is similar to using a system service and is not covered further here.

Figure 5 Procedure for using services exposed by Service Broker
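
On the application side, picking up the bound service's URI typically means reading the VCAP_SERVICES environment variable; the JSON layout assumed in this sketch is simplified for illustration:

    # read_service_uri_sketch.rb -- how an app might pick up a bound service's URI
    # from VCAP_SERVICES; the exact JSON layout varies, so this structure is assumed.
    require 'json'

    services = JSON.parse(ENV['VCAP_SERVICES'] || '{}')

    # VCAP_SERVICES groups bound services by label; each entry carries credentials.
    postgres = services.values.flatten.find { |svc| svc['name'].to_s.include?('hr') }
    uri = postgres && postgres.dig('credentials', 'uri')

    abort 'no bound PostgreSQL service found' unless uri
    puts "connecting to #{uri}"   # e.g. postgres://xyzhr:...@.../xyz_hr_db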

NATS (message bus). Cloud Foundry's architecture is based on message publishing and subscription. The component that ties the modules together is NATS, an event-driven, lightweight messaging system developed by Cloud Foundry and built on EventMachine. One criticized weakness of the first version of Cloud Foundry was that the NATS server was a single node, which made people uneasy. The new NATS supports multiple server nodes, with the NATS servers communicating through Thin. NATS's open-source repository is at https://github.com/derekcollison/nats. The code is not large, but the design is very subtle; studying its source is recommended.

Many of Cloud Foundry's outstanding features come from this message-based architecture. Each module on each server publishes messages to the appropriate subjects according to what it is currently doing, listens to the subjects it needs, and communicates with the other modules through these messages.

It can be said that the core of Cloud Foundry is this messaging system, and tracing the messages flowing through it is a good way to understand how Cloud Foundry works. Take the simplest example: a server with a DEA component is added to a Cloud Foundry cluster to increase the cloud's computing capacity. It first needs to announce that it is ready to provide service, so that the Cloud Controller can deploy applications to it, the Router can forward requests to it, the Health Manager can schedule its health checks, and so on. It does this by publishing a message to the subject "dea.start":

NATS.publish('dea.start', @hello_message_json)

@hello_message_json contains the DEA's UUID, IP address, port, version information, and so on. The Cloud Controller, Router, Health Manager, and other modules subscribe to this subject, get notified, and each does its own work.
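
A minimal, self-contained publish/subscribe round trip on the dea.start subject with the ruby-nats client might look like this; the hello-message fields are illustrative assumptions:

    # dea_announce_sketch.rb -- publish/subscribe round trip on dea.start with the
    # ruby-nats client; the hello-message fields are illustrative assumptions.
    require 'nats/client'
    require 'json'

    hello_message_json = {
      uuid: 'dea-1a2b3c', ip: '10.0.0.5', port: 12345, version: 0.99
    }.to_json

    NATS.start(uri: 'nats://127.0.0.1:4222') do
      # A module such as the Cloud Controller or Router listens for new DEAs.
      NATS.subscribe('dea.start') do |msg|
        dea = JSON.parse(msg)
        puts "new DEA #{dea['uuid']} at #{dea['ip']}:#{dea['port']}"
        NATS.stop
      end

      # The freshly started DEA announces itself.
      NATS.publish('dea.start', hello_message_json)
    end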

The best way to understand Cloud Foundry is to pick an operation, such as deploying an application or creating a service, and, using the messages as clues, trace how each module handles it. This lets you observe the workflow of the whole Cloud Foundry system. The second article in this column will use NATS as the main thread to explain the principles of Cloud Foundry, so no more is said about it here.

Summary

Over the past year Cloud Foundry has changed a great deal, which shows how active the Cloud Foundry community is. I very much hope this article has made the principles of Cloud Foundry sufficiently clear, but please do not treat it as a reference manual. Thanks to the efforts of the VMware China developer relations team, the Cloud Foundry documentation is now quite complete, and I strongly recommend it as a reference (URL: www.cloudfoundry.cn).

