IV. Introduction to OpenStack Components Parsing (Advanced)


Learning Goals:

    • Master the architecture and functionality of additional OpenStack components

The contents of this note are:

    • Ceilometer Component Parsing
    • Heat Component Parsing
    • Trove Component Parsing
    • Sahara Component Parsing
    • Ironic Component Parsing

1. Ceilometer Component Parsing

Ceilometer is also known as OpenStack Telemetry (remote measurement and data collection); it is OpenStack's metering project. The main purpose of Ceilometer is to provide data support for billing. OpenStack itself does not provide billing functions, but Ceilometer makes it much more convenient for others to implement billing through secondary development.

[What is the difference between metering data used for billing and data used for monitoring?]

The focus is different. Ceilometer collects measurement and billing-related data. Because this data travels across the network as messages, it is signed; from an information-security point of view, the greatest value of a signature is non-repudiation, which matters a great deal for billing applications. Of course, Ceilometer has also added features that help operators implement more monitoring functions, gradually reducing or even eliminating the effort of developing and deploying a separate monitoring system, and thus reducing the overall complexity of the system.

[Three key aspects of Ceilometer:]

    • Source of raw data
    • Storage of data
    • How the data is provided to a third-party system (e.g. a billing system built through secondary development)

[Raw data comes mainly from three sources:]

    1. Collecting messages from the individual components via the AMQP message middleware (see the sketch after this list)
    2. Calling the APIs of the other OpenStack components through some of Ceilometer's agents to pull data; these components include Swift, Cinder, Neutron, Trove, Sahara and Ironic
    3. For efficiently capturing data related to Nova (OpenStack's compute service), running Ceilometer's polling agent on each compute node to obtain information about the virtual machines running there
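
As a rough illustration of the first approach, here is a minimal Python sketch (not part of the original notes) that listens for notification messages on the AMQP bus using the kombu library. The broker URL, exchange name and routing key are assumptions; real deployments configure these per service.

    # Illustrative sketch: consuming OpenStack notification messages over AMQP.
    # Broker URL, exchange name and routing key below are assumptions.
    from kombu import Connection, Exchange, Queue

    exchange = Exchange('nova', type='topic')            # assumed exchange name
    queue = Queue('ceilometer-demo', exchange=exchange,
                  routing_key='notifications.info')      # assumed routing key

    def on_message(body, message):
        print(body)        # inspect the raw notification body
        message.ack()

    with Connection('amqp://guest:guest@controller:5672//') as conn:
        with conn.Consumer(queue, callbacks=[on_message]):
            while True:
                conn.drain_events()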

[Storage of data:]

Ceilometer's storage also relies on a third-party backend. The default backend database is MongoDB, a document-oriented NoSQL database; other databases such as HBase and MySQL are also supported, but MongoDB is the preferred choice.
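
For reference, a minimal sketch (not from the original notes) of peeking at stored samples directly with pymongo; the database name, collection name and field names are assumptions about the default MongoDB backend layout.

    # Illustrative sketch: inspecting metering samples in the MongoDB backend.
    # Database/collection/field names are assumptions about the default layout.
    from pymongo import MongoClient

    client = MongoClient('mongodb://controller:27017/')
    db = client['ceilometer']                  # assumed database name

    # Latest cpu_util samples for one instance (resource id is hypothetical).
    for sample in (db['meter']
                   .find({'counter_name': 'cpu_util',
                          'resource_id': 'INSTANCE_UUID'})
                   .sort('timestamp', -1)
                   .limit(5)):
        print(sample['timestamp'], sample['counter_volume'])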

[Third-party system:]

The most important consumer is a third-party system that calls the Ceilometer API to obtain metering data, set alarm conditions and thresholds, and receive monitoring alarms, and on top of that implements billing and monitoring. Using Ceilometer in practice therefore involves how to configure it, which API call retrieves each piece of data, how to set alarm thresholds, and so on.
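
The sketch below (not from the original notes) shows how such a third-party system might call the Ceilometer v2 REST API with plain HTTP requests: pulling samples for a meter and creating a threshold alarm. The endpoint URL, token handling and instance id are assumptions; a real client would discover them through Keystone.

    # Illustrative sketch: a third-party billing/monitoring system calling the
    # Ceilometer v2 API. Endpoint and token are assumptions.
    import requests

    CEILOMETER = 'http://controller:8777'      # assumed API endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

    # 1. Pull metering samples, e.g. cpu_util for a given resource.
    samples = requests.get(CEILOMETER + '/v2/meters/cpu_util',
                           headers=HEADERS,
                           params={'q.field': 'resource_id',
                                   'q.op': 'eq',
                                   'q.value': 'INSTANCE_UUID'}).json()

    # 2. Set an alarm condition and threshold: fire when average CPU
    #    utilisation stays above 80% for two 60-second periods.
    alarm = {'name': 'high-cpu',
             'type': 'threshold',
             'threshold_rule': {'meter_name': 'cpu_util',
                                'statistic': 'avg',
                                'threshold': 80.0,
                                'comparison_operator': 'gt',
                                'period': 60,
                                'evaluation_periods': 2}}
    requests.post(CEILOMETER + '/v2/alarms', headers=HEADERS, json=alarm)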

2. Heat Component Parsing

Heat, also known as OpenStack Orchestration, is the component that provides orchestration services within OpenStack: the various modules and resources of an IT system are organized and scheduled to form a complete, organic system that can deliver particular business functions.

AWS has a similar service called CloudFormation, and Heat works much the same way: according to a template (a script) written by the user, it instantiates and organizes the various resources in OpenStack to form a complete application. In Heat these scripts are called Templates, and what a Template generates is called a Stack. The Template spells out which resources are needed to create the Stack and how these resources relate to one another; the resources described here include virtual machines, volumes (block storage), users, IP addresses, and so on.

The main task of Heat is to manage the Stack's lifecycle: create, update, and destroy.

[Composition of a Template (a minimal sketch follows the list):]

    • Description: notes describing the template
    • Parameters: input parameters
    • Resources: the resources to be created
    • Outputs: values returned after the stack is created
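
As a minimal sketch (not from the original notes) of these four parts, the Python snippet below builds a tiny HOT-style template as a dictionary and submits it to the Heat API to create a Stack. The Heat endpoint, tenant id, image and flavor names are assumptions.

    # Illustrative sketch: the four Template sections plus a stack-create call.
    # Endpoint, tenant id, image and flavor are assumptions.
    import requests

    template = {
        'heat_template_version': '2013-05-23',
        'description': 'Minimal stack: a single VM',                 # Description
        'parameters': {                                              # Parameters
            'flavor': {'type': 'string', 'default': 'm1.small'},
        },
        'resources': {                                               # Resources
            'server': {
                'type': 'OS::Nova::Server',
                'properties': {'image': 'cirros',
                               'flavor': {'get_param': 'flavor'}},
            },
        },
        'outputs': {                                                 # Outputs
            'server_ip': {'value': {'get_attr': ['server', 'first_address']}},
        },
    }

    HEAT = 'http://controller:8004/v1/TENANT_ID'   # assumed endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

    # The Stack lifecycle starts with create; update and delete use the same resource path.
    requests.post(HEAT + '/stacks', headers=HEADERS,
                  json={'stack_name': 'demo', 'template': template})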


[More complex structure:]

For example, creating a WordPress site, or creating a three-tier architecture site ...

PS: Heat can be used together with Ceilometer to implement auto scaling, and Heat is compatible with CloudFormation templates.

3. Trove Component Parsing

[Trove's function:]

Trove creates a virtual machine (VM instance) containing a database according to the user's request, and installs and configures the database according to user-supplied parameters such as the database type, user name and password.

[Establishment of the database:]

After the virtual machine is created, the database can be installed by Trove, or the user can prepare a Trove image in advance with the database already installed in it. The latter is more efficient.
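
A minimal sketch (not from the original notes) of such a request against the Trove REST API; the endpoint, tenant id, flavor and datastore version are assumptions that depend on the deployment.

    # Illustrative sketch: asking Trove to create a database instance.
    import requests

    TROVE = 'http://controller:8779/v1.0/TENANT_ID'   # assumed endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

    body = {'instance': {
        'name': 'demo-mysql',
        'flavorRef': '2',                              # flavor of the VM instance
        'volume': {'size': 2},                         # data volume size in GB
        'datastore': {'type': 'mysql', 'version': '5.6'},
        'databases': [{'name': 'appdb'}],
        'users': [{'name': 'appuser', 'password': 'secret',
                   'databases': [{'name': 'appdb'}]}],
    }}
    requests.post(TROVE + '/instances', headers=HEADERS, json=body)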

[Databases supported by Trove:]

    • Relational databases: MySQL
    • NoSQL databases: MongoDB, Cassandra

[Four components:]

    • Trove API provides a RESTful API; it is stateless, scales horizontally, can sit behind a load balancer, and can therefore serve more user requests.
    • Trove TaskManager carries out the specific management commands, such as creating instances, destroying instances, managing the instance lifecycle, and operating on databases. It works mainly by listening on the RabbitMQ middleware for AMQP call requests and then executing them.
    • Trove Conductor is responsible for interacting with the database, similar to nova-conductor, and runs on the host rather than inside the guest VM.
    • The Guest Agent runs inside the virtual machine to run the database service. Each kind of database has its own corresponding agent; letting Trove support more databases by writing a corresponding guest agent for each one is a common approach in OpenStack.

[Challenges Trove faces:]

Automatic configuration of database HA (high availability) is not supported.

4. Sahara Component Parsing

[Sahara's function:]

Quickly create Hadoop clusters on OpenStack, using idle IaaS computing capacity for offline processing of big data (this was the original intention behind Sahara's design).

[Sahara's architecture:]

Note several parts of the architecture diagram: the Sahara Dashboard, the Plugins, Swift (which stores persistent data), and the RESTful API.

[Sahara has two modes of use:]

    1. IaaS mode / management mode
    2. PaaS / DaaS mode, also called user mode or EDP mode (DaaS: Data-Analysis-as-a-Service; EDP: Elastic Data Processing)

[Several concepts need to be clarified for the IaaS mode:]

(1) Node: the machine used to run the Hadoop cluster. It refers to a Hadoop cluster node provisioned by Sahara, which is usually a virtual machine but may also be a physical node.

(2) Node group: nodes grouped according to the type of node (that is, by the processes they run).

(3) Node group template: defines a node group. An example of a node group template is as follows:

{"name":"test-master-templ","flavor_id":"2","plugin_name":"vanilla",  # 声明用的是Hadoop哪个发行版的版本名"hadoop_version":"1.2.1",  # 版本号"node_processes":["jobtracker","namenode"] # 需要运行的哪些进程}

(4) Cluster: a complete Hadoop cluster. How is a cluster defined in Sahara? It is defined by a cluster template. An example cluster template is as follows:

{"name":"demo-cluster-template","plugin_name":"vanilla","hadoop_version":"1.2.1","node_groups":[    {        "name":"master",     # 这是节点组的name        "node_group_template_id":"b1ac3f04-c67f-445f-b06c-fb722736ccc6",     # 引用节点组模板        "count":1    # 这个集群里面,这个节点组包括了几个节点数量    }    {        "name":"workers",        "node_group_template_id":"dbc6147e-4020-4695-8b5d-04f2efa978c5",        "count":2    }]}

In Hadoop, the node that runs the JobTracker and NameNode is usually called the master node, and a node that runs a TaskTracker and DataNode is called a worker node. So the cluster in the example above consists of one master node and two worker nodes.
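
For completeness, a sketch (not from the original notes) of how the node group template ids referenced in the cluster template might be obtained by creating the templates through Sahara's REST API. The endpoint, project id and exact response layout are assumptions.

    # Illustrative sketch: creating the node group templates used above.
    import requests

    SAHARA = 'http://controller:8386/v1.1/PROJECT_ID'   # assumed endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

    # Master and worker node group templates (mirroring the JSON examples above).
    for tmpl in (
        {'name': 'test-master-templ', 'flavor_id': '2',
         'plugin_name': 'vanilla', 'hadoop_version': '1.2.1',
         'node_processes': ['jobtracker', 'namenode']},
        {'name': 'test-worker-templ', 'flavor_id': '2',
         'plugin_name': 'vanilla', 'hadoop_version': '1.2.1',
         'node_processes': ['tasktracker', 'datanode']},
    ):
        resp = requests.post(SAHARA + '/node-group-templates',
                             headers=HEADERS, json=tmpl)
        print(resp.json())   # the returned id goes into "node_group_template_id"

    # A cluster template referencing those ids would then be POSTed to
    # SAHARA + '/cluster-templates', and a cluster created from it.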

[About EDP mode:]

    • Prerequisite: at least one (or several) Hadoop clusters have been created in OpenStack;
    • Preparation: upload the data to be processed (we need to be clear about where that data is stored), write the job and upload it, and then hand Sahara a triple through the EDP API (see the sketch below). Its parameters are: (1) which cluster is used to process this data; (2) where the source data is located, i.e. where the data to be processed is stored and where the results should go; (3) which job we want to run.
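
A rough sketch (not from the original notes) of what handing that triple to the EDP API might look like over HTTP; the endpoint path, ids and request fields are assumptions about Sahara's v1.1 EDP API.

    # Illustrative sketch: the EDP "triple": which cluster runs the job,
    # where the data comes from and goes to, and which job to run.
    import requests

    SAHARA = 'http://controller:8386/v1.1/PROJECT_ID'   # assumed endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

    execution = {
        'cluster_id': 'CLUSTER_UUID',    # 1. the cluster that processes the data
        'input_id': 'INPUT_DS_UUID',     # 2. data source for the input ...
        'output_id': 'OUTPUT_DS_UUID',   #    ... and where the results go (e.g. Swift)
        'job_configs': {},               # 3. extra configuration for the job itself
    }
    requests.post(SAHARA + '/jobs/JOB_UUID/execute',
                  headers=HEADERS, json=execution)
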
5. Ironic Component Parsing

Is there a performance issue with storage I/O in a virtualized environment? Indeed there is. Although hypervisors have been optimized to the point where the loss in computational performance is very low, I/O is another matter, especially when a virtual machine uses a Cinder backend for its volumes.

OpenStack manages physical machines through Ironic, and uses physical machines to deliver cloud services.

The workflow for launching a bare-metal server:

As the workflow shows, Nova Compute calls the Ironic API, while the physical node that actually provides the compute resources is managed remotely by ironic-conductor through two mechanisms, PXE and IPMI, which are used to administer the physical machine remotely, as shown in the diagram.

Why is the flow structured this way? Because Ironic actually evolved out of a part of Nova.

So the Nova Compute component calls the Ironic API, and ironic-conductor then manages the remote node that actually provides the compute resources. (Ironic was previously a driver inside Nova.)
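
As a minimal sketch (not from the original notes), this is roughly how a physical machine could be enrolled with Ironic so that it can be managed over PXE and IPMI; the endpoint, driver name and hardware details are assumptions that depend on the Ironic release and the drivers enabled in the deployment.

    # Illustrative sketch: enrolling a physical node with Ironic.
    import requests

    IRONIC = 'http://controller:6385/v1'   # assumed endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

    node = {
        'driver': 'pxe_ipmitool',          # PXE for deployment, IPMI for power control
        'driver_info': {
            'ipmi_address': '10.0.0.50',   # BMC address of the physical machine
            'ipmi_username': 'admin',
            'ipmi_password': 'secret',
        },
        'properties': {'cpus': 8, 'memory_mb': 16384, 'local_gb': 200},
    }
    requests.post(IRONIC + '/nodes', headers=HEADERS, json=node)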

Actually using physical machines to provide cloud services, and using OpenStack to manage a pile of physical machines to build a cloud, is a complex matter.

Online MOOC study notes, from 高校帮 "OpenStack 入门" (Introduction to OpenStack), lecturer 李明宇; notes by 胡飞, 2016/4/3 12:48:23
