Chapter 4. Compute Nodes

Compute nodes form the resource core of the OpenStack Compute cloud, providing the processing, memory, network, and storage resources to run instances.

CPU choice

The type of CPU in your compute node is a very important choice. First, ensure that the CPU supports virtualization by way of VT-x for Intel chips and AMD-V for AMD chips.

The number of cores that the CPU has also affects the decision. It is common for current CPUs to have up to 12 cores. Additionally, if the CPU supports hyper-threading, those 12 cores are doubled to 24 cores. If you purchase a server that supports multiple CPUs, the number of cores is further multiplied.

Whether you should enable hyper-threading on your CPUs depends upon your use case. We recommend you do performance testing with your local workload with both hyper-threading on and off to determine what is more appropriate in your case.

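On Linux, /proc/cpuinfo exposes the relevant CPU flags: vmx indicates Intel VT-x and svm indicates AMD-V. The short Python sketch below, which assumes a Linux compute node and only reads /proc/cpuinfo, is one way to check a candidate machine; the core figure it reports is the logical core count the operating system sees, not a benchmark.

    # Inspect /proc/cpuinfo on a candidate compute node: report whether
    # hardware virtualization is available and how many logical cores
    # the operating system sees.

    def read_cpuinfo(path="/proc/cpuinfo"):
        flags = set()
        logical_cores = 0
        with open(path) as cpuinfo:
            for line in cpuinfo:
                if line.startswith("processor"):
                    logical_cores += 1
                elif line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
        return flags, logical_cores

    flags, logical_cores = read_cpuinfo()

    if "vmx" in flags:
        print("Intel VT-x available")
    elif "svm" in flags:
        print("AMD-V available")
    else:
        print("No hardware virtualization flag found; check the BIOS/firmware")

    # With hyper-threading enabled, the logical core count is double the
    # number of physical cores.
    print("Logical cores: %d (hyper-threading capable: %s)" % (
        logical_cores, "ht" in flags))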

Hypervisor choice

OpenStack Compute supports many hypervisors to various degrees, including KVM, LXC, QEMU, UML, VMware ESX/ESXi, Xen, PowerVM, and Hyper-V.

Probably the most important factor in your choice of hypervisor is your current usage or experience. Aside from that, there are practical concerns to do with feature parity, documentation, and the level of community experience.

For example, KVM is the most widely adopted hypervisor in the OpenStack community. Besides KVM, more deployments exist running Xen, LXC, VMware, and Hyper-V than the others listed; however, each of these is lacking some feature support, or the documentation on how to use it with OpenStack is out of date.

The best information available to support your choice is found on the hypervisor support matrix (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) and in the Reference Manual (http://docs.openstack.org/folsom/openstack-compute/admin/content/ch_hypervisors.html).

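Whichever hypervisor you settle on, the choice is expressed in the Compute configuration of each node. As a minimal sketch only, assuming the Folsom-era option names used by the manuals linked above (they have been renamed in later releases), a KVM compute node would carry something like the following in nova.conf:

    [DEFAULT]
    # Use the libvirt driver; libvirt_type selects the virtualization
    # technology it manages on this node (kvm, qemu, xen, lxc or uml).
    compute_driver = libvirt.LibvirtDriver
    libvirt_type = kvm

This is set per node: each compute node runs one hypervisor, and mixing hypervisors across a deployment is handled with host aggregates or cells, as the note below explains.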

Note

It is also possible to run multiple hypervisors in a single deployment using host aggregates or cells. However, an individual compute node can only run a single hypervisor at a time.

Instance Storage Solutions

    • Off compute node storage-Shared File System
    • On compute node storage-Shared File System
    • On compute node storage-non-Shared File System
    • Issues with Live migration
    • Choice of File System

As part of the procurement for a compute cluster, you must specify some storage for the disk on which the instantiated instance runs. There are three main approaches to providing this temporary-style storage, and it is important to understand the implications of the choice.

They are:

    • Off compute node storage-Shared File System
    • On compute node storage-Shared File System
    • On compute node storage-non-Shared File System

In general, the questions you should be asking when selecting the storage are as follows:

    • What is the platter count you can achieve?
    • Do more spindles result in better I/O despite network access?
    • Which one results in the best cost-performance scenario you're aiming for?
    • How do you manage the storage operationally?

Off compute node storage-Shared File System

Many operators use separate compute and storage hosts. Compute services and storage services have different requirements; compute hosts typically require more CPU and RAM than storage hosts. Therefore, for a fixed budget, it makes sense to have different specifications for your compute nodes and your storage nodes, with compute nodes invested in CPU and RAM, and storage nodes invested in block storage.

Also, if you use separate compute and storage hosts, then you can treat your compute hosts as "stateless". This simplifies maintenance for the compute hosts. As long as you don't have any instances currently running on a compute host, you can take it offline or wipe it completely without having any effect on the rest of your cloud.

However, if you are more restricted in the number of physical hosts you have available for creating your cloud and you want to be able to dedicate as many of your hosts as possible to running instances, it makes sense to run compute and storage on the same machines.

In this option, the disks storing the running instances are hosted in servers outside of the compute nodes. There are also several advantages to this approach:

    • If a compute node fails, instances are usually easily recoverable.
    • Running a dedicated storage system can be operationally simpler.
    • Being able to scale to any number of spindles.
    • It may be possible to share the external storage for other purposes.

The main downsides to this approach are:

    • Depending on design, heavy I/O usage from some instances can affect unrelated instances.
    • Use of the network can decrease performance.

On compute node storage-Shared File System

In this option, each nova-compute node is specified with a significant amount of disk space, but a distributed file system ties the disks from each compute node into a single mount. The main advantage of this option is that it scales to external storage when you require additional storage.

However, this option has several downsides:

    • Running a distributed file system can make you lose your data locality compared with non-shared storage.
    • Recovery of instances is complicated by depending on multiple hosts.
    • The chassis size of the compute node can limit the number of spindles able to be used in a compute node.
    • Use of the network can decrease performance.

On compute node storage-non-Shared File System

In this option, each nova-compute node is specified with enough disks to store the instances it hosts. There are two main reasons why this is a good idea:

    • Heavy I/O usage on one compute node does not affect instances on other compute nodes.
    • Direct I/O access can increase performance.

This has several downsides:

    • If a compute node fails, the instances running on that node are lost.
    • The chassis size of the compute node can limit the number of spindles able to be used in a compute node.
    • Migrations of instances from one node to another are more complicated, and rely on features which may not continue to be developed.
    • If additional storage is required, this option does not scale.

Issues with Live migration

We consider live migration an integral part of the operations of the cloud. This feature provides the ability to seamlessly move instances from one physical host to another, a necessity for performing upgrades that require reboots of the compute hosts, but it only works well with shared storage.

Theoretically, live migration can be done with non-shared storage, using a feature known as KVM live block migration. However, this is a little-known feature in OpenStack, with limited testing when compared to live migration, and it is slated for deprecation in KVM upstream.

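To make the distinction concrete, a migration can be driven through the python-novaclient library; the following is a minimal sketch, assuming a novaclient version contemporary with this guide and a reachable Keystone endpoint (the credentials, instance name, and target host are placeholders):

    # Live-migrate a running instance to another compute host using
    # python-novaclient. A plain live migration requires shared storage;
    # block_migration=True asks for KVM live block migration instead
    # (the little-known non-shared-storage path noted above).
    from novaclient.v1_1 import client

    nova = client.Client("admin", "password", "admin",            # placeholder credentials
                         "http://keystone.example.com:5000/v2.0")  # placeholder endpoint

    server = nova.servers.find(name="my-instance")                 # placeholder instance
    nova.servers.live_migrate(server,
                              host="compute-02",                   # placeholder target host
                              block_migration=False,               # True for block migration
                              disk_over_commit=False)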

Choice of File System

If you want to support shared-storage live migration, you'll need to configure a distributed file system.

Possible options include:

    • NFS (default for Linux)
    • GlusterFS
    • MooseFS
    • Lustre

We've seen deployments with all, and recommend you choose the one you are most familiar with operating.

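Whatever file system you pick, the compute nodes typically mount it at the nova instances directory so that every hypervisor sees the same instance disks. The small sketch below assumes the default instances path of /var/lib/nova/instances and a Linux node; it only reads /proc/mounts, so it is safe to run anywhere and is useful for confirming that each compute node really is on the shared file system before relying on live migration.

    # Report which device and file system type back the nova instances
    # directory, by finding the longest matching mount point in
    # /proc/mounts.
    INSTANCES_PATH = "/var/lib/nova/instances"   # default; check state_path in nova.conf

    best = ("", "", "")
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mount_point, fs_type = line.split()[:3]
            if INSTANCES_PATH.startswith(mount_point) and len(mount_point) > len(best[1]):
                best = (device, mount_point, fs_type)

    if best[1]:
        print("%s is backed by %s (%s), mounted at %s" % (
            INSTANCES_PATH, best[0], best[2], best[1]))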

Overcommitting

OpenStack allows you to overcommit CPU and RAM on compute nodes. This allows you to increase the number of instances you can have running on your cloud, at the cost of reducing the performance of the instances. OpenStack Compute uses the following ratios by default:

    • CPU allocation ratio: 16
    • RAM allocation ratio: 1.5

The default CPU allocation ratio of 16 means that the scheduler allocates up to 16 virtual cores per physical core on a node. For example, if a physical node has 12 cores, the scheduler allocates up to 192 virtual cores to instances (for example, 48 instances, in the case where each instance has 4 virtual cores).

Similarly, the default RAM allocation ratio of 1.5 means that the scheduler allocates instances to a physical node as long as the total amount of RAM associated with the instances is less than 1.5 times the amount of RAM available on the physical node.

For example, if a physical node has 48 GB of RAM, the scheduler allocates instances to that node until the sum of the RAM associated with the instances reaches 72 GB (for example, nine instances, in the case where each instance has 8 GB of RAM).

You must select the appropriate CPU and RAM allocation ratio for your particular use case.

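These defaults correspond to the cpu_allocation_ratio and ram_allocation_ratio scheduler options in nova.conf. The arithmetic is worth scripting when you size nodes; the sketch below simply reproduces the worked numbers from this section (a 12-core, 48 GB node and a 4-vCPU, 8 GB flavor used as examples above):

    # Capacity math for a single compute node under the default
    # overcommit ratios described above.
    CPU_ALLOCATION_RATIO = 16.0   # nova.conf: cpu_allocation_ratio
    RAM_ALLOCATION_RATIO = 1.5    # nova.conf: ram_allocation_ratio

    physical_cores = 12
    physical_ram_gb = 48

    virtual_cores_available = physical_cores * CPU_ALLOCATION_RATIO   # 192
    ram_available_gb = physical_ram_gb * RAM_ALLOCATION_RATIO         # 72

    # Example flavor from the text: 4 vCPUs and 8 GB of RAM per instance.
    flavor_vcpus = 4
    flavor_ram_gb = 8

    instances_by_cpu = int(virtual_cores_available // flavor_vcpus)   # 48 instances
    instances_by_ram = int(ram_available_gb // flavor_ram_gb)         # 9 instances

    # The node fills up on whichever resource is exhausted first.
    print("CPU-bound limit: %d instances" % instances_by_cpu)
    print("RAM-bound limit: %d instances" % instances_by_ram)
    print("Effective limit: %d instances" % min(instances_by_cpu, instances_by_ram))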

Logging

Logging is detailed more fully in the section called "Logging". However, it is an important design consideration to take into account before commencing operations of your cloud.

OpenStack produces a great deal of useful logging information; however, for it to be useful for operational purposes, you should consider having a central logging server to send logs to, and a log parsing/analysis system (such as Logstash).

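For ad-hoc tooling you write around your cloud (the OpenStack services themselves are normally pointed at syslog through their own configuration files), the Python standard library makes it easy to forward logs to such a central server. A minimal sketch, assuming a hypothetical collector at logs.example.com listening on the standard syslog UDP port:

    # Send log records to a central syslog server, which can then feed
    # a parsing/analysis system such as Logstash.
    import logging
    import logging.handlers

    logger = logging.getLogger("cloud-ops")
    logger.setLevel(logging.INFO)

    # logs.example.com is a placeholder for your central logging host;
    # 514/udp is the standard syslog port.
    handler = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
    handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
    logger.addHandler(handler)

    logger.info("compute node provisioning started")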

Networking

Networking in OpenStack is a complex, multi-faceted challenge. See Chapter 6, Network Design.
