Using resource constraints (Constraints) in Heartbeat v2 to build a high-availability cluster

[Description of resource constraints and resource stickiness]

Resource classes:
Primitive (native): a basic resource that can run on only one node at a time (analogous to the DC, which exists on only one node).
Clone: a cloned resource; a primitive is cloned into N copies that run on multiple nodes at once, for example, STONITH.
Group: a group resource; several resources are managed as one unit, for example, vip + httpd + filesystem.
Master/slave: a special clone with master and slave roles, for example, DRBD.
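
In Heartbeat v2's CRM, these classes map to elements in the cluster information base (CIB) XML. The fragments below are a minimal sketch of their shapes, shown independently of each other; the ids, agents, and parameters are illustrative, not taken from this walkthrough:

    <!-- primitive: a single resource, active on one node at a time -->
    <primitive id="vip" class="ocf" provider="heartbeat" type="IPaddr"/>

    <!-- group: member primitives run on the same node, in list order -->
    <group id="webservice">
      <primitive id="webip"     class="ocf" provider="heartbeat" type="IPaddr"/>
      <primitive id="webserver" class="lsb" type="httpd"/>
    </group>

    <!-- clone: the wrapped primitive runs on several nodes at once -->
    <clone id="fencing-clone">
      <primitive id="fencing" class="stonith" type="external/ssh"/>
    </clone>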
A. Resource stickiness (the affinity between a resource and a node, often weighted by server capability): the degree to which a resource prefers to stay on the node it currently occupies, expressed as a score. The higher a node's score, the more the resource tends to stay there. Consider the following when choosing a stickiness value:
1. Value 0: the default. The resource is placed wherever the cluster considers optimal, which means it is moved whenever a node with better or merely adequate capacity becomes available. This is almost the same as automatic failback, except that the resource may be moved to a node other than the previously active one.
2. Value greater than 0: the resource prefers to stay where it is but will move if a more suitable node becomes available. The higher the value, the stronger the preference to stay.
3. Value less than 0: the resource prefers to leave its current node. The higher the absolute value, the stronger the preference to leave.
4. Value INFINITY: the resource stays on its current node unless it is forced off (node shutdown, node standby, reaching the migration-threshold, or a configuration change). This is almost the same as disabling automatic failback.
5. Value -INFINITY: the resource always moves away from its current node.
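
Stickiness can also be set cluster-wide from the command line rather than per resource. A minimal sketch, assuming the Heartbeat v2 crm_attribute tool (the property is spelled default_resource_stickiness in Heartbeat v2; later Pacemaker releases use default-resource-stickiness):

    # give every resource a stickiness of 100 toward its current node
    crm_attribute -t crm_config -n default_resource_stickiness -v 100

    # read the value back from the CIB to confirm
    crm_attribute -t crm_config -n default_resource_stickiness -G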
B. Resource constraints (affinity between resources and nodes, and among resources)
(1) Location: the affinity of a resource for a node, defined by a score (often chosen according to server capability).
Positive value: the resource prefers the node.
Negative value: the resource avoids the node.
Note: location constraints are usually combined with resource stickiness: the cluster sums the scores for each node and runs the resource on the node with the highest total. In the following example the resource will certainly end up on node2, because node2's total is positive infinity:
Node1: resource stickiness 100 + location constraint 200 = 300
Node2: resource stickiness 100 + location constraint INFINITY = INFINITY
(2) Order: defines the order in which resources are started and stopped.
Example: vip and ipvs, where ipvs starts after vip.
Note: in an LVS high-availability cluster, this is how we define the start sequence of the vip and lvs resources.
(3) Colocation: determines whether resources may run on the same node, i.e., the dependency between resources, defined by a score.
Positive value: the resources can run together.
Negative value: the resources cannot run together.
Note: when building a high-availability web cluster, this is how we define whether httpd and the filesystem (NFS) resource run on the same node.
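
In the CIB, the three constraint types are XML elements of their own. A minimal sketch of their shapes, using placeholder names R1, R2, and nodeX; note that Heartbeat v2's DTD uses from/to attributes, while later Pacemaker versions renamed them to rsc/with-rsc for colocation and first/then for ordering:

    <!-- location: R1 prefers nodeX with score 200 -->
    <rsc_location id="loc-R1" rsc="R1">
      <rule id="loc-R1-rule" score="200">
        <expression id="loc-R1-expr" attribute="#uname"
                    operation="eq" value="nodeX"/>
      </rule>
    </rsc_location>

    <!-- order: R1 starts only after R2 has started -->
    <rsc_order id="order-R2-R1" from="R1" type="after" to="R2"/>

    <!-- colocation: R1 must run on the same node as R2 -->
    <rsc_colocation id="coloc-R1-R2" from="R1" to="R2" score="INFINITY"/>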

1. Create a high-availability cluster
[1.1] Instead of creating a group resource, create three independent (native) resources and let them run independently
# Webip resource creation

# Click Add to save and exit. The webip independent resource is created successfully.

# Webstore Resource Creation

# Click Add to save and exit. The webstore independent resource is created successfully.

# Independent httpd service Resource Creation

# Click Add to save and exit. httpd independent resource creation is complete.
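
For reference, the same three independent resources can be created from the command line with cibadmin instead of the GUI. This is a minimal sketch: the IP address, NFS export, and mount point below are placeholders, not values from the original screenshots, and the <attributes> wrapper follows the Heartbeat v2 DTD (later Pacemaker versions drop it):

    # webip: the virtual IP (address is a placeholder)
    cat > webip.xml <<'EOF'
    <primitive id="webip" class="ocf" provider="heartbeat" type="IPaddr">
      <instance_attributes id="webip-attrs">
        <attributes>
          <nvpair id="webip-ip" name="ip" value="192.168.0.100"/>
        </attributes>
      </instance_attributes>
    </primitive>
    EOF
    cibadmin -C -o resources -x webip.xml

    # webstore: the NFS-backed filesystem (export and mount point are placeholders)
    cat > webstore.xml <<'EOF'
    <primitive id="webstore" class="ocf" provider="heartbeat" type="Filesystem">
      <instance_attributes id="webstore-attrs">
        <attributes>
          <nvpair id="webstore-dev" name="device" value="nfsserver:/web/share"/>
          <nvpair id="webstore-dir" name="directory" value="/var/www/html"/>
          <nvpair id="webstore-fs" name="fstype" value="nfs"/>
        </attributes>
      </instance_attributes>
    </primitive>
    EOF
    cibadmin -C -o resources -x webstore.xml

    # httpd: the web server, managed through its LSB init script
    cat > httpd.xml <<'EOF'
    <primitive id="httpd" class="lsb" type="httpd"/>
    EOF
    cibadmin -C -o resources -x httpd.xml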

[1.2] Start all resources

# Note:
Three native resources were added, not a group resource; that is, the vip, httpd, and nfs resources were not placed in the same group.
Notice that the three resources do not end up on the same node: when started, webip runs on node2, webstore runs on node1, and httpd runs on node2. When resources are added individually rather than as a group, the cluster spreads them evenly across the nodes.

# Verify that httpd is running on its node and that node1 has indeed mounted the NFS share
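
Placement can also be confirmed from the command line (the mount point below is the same placeholder used earlier):

    # one-shot cluster status: lists each resource and the node it runs on
    crm_mon -1

    # on the node holding webstore, confirm the NFS share is mounted
    mount | grep /var/www/html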

[1.3] Define a colocation constraint so that the httpd and webstore resources run on the same node

# Note:
httpd and nfs must run on the same node, and nfs and webip must also run on the same node.
* httpd depends on webstore in order to run;
* if webstore cannot run on a node, httpd will not run there either;
* if the httpd service cannot be started, webstore's startup is not affected.
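
This GUI step corresponds to one colocation element in the CIB. A hedged CLI equivalent, with illustrative ids and the Heartbeat v2 from/to attribute spelling assumed:

    cat > coloc-httpd-webstore.xml <<'EOF'
    <rsc_colocation id="httpd-with-webstore" from="httpd" to="webstore" score="INFINITY"/>
    EOF

    # httpd may only run where webstore runs
    cibadmin -C -o constraints -x coloc-httpd-webstore.xml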


# After the colocation constraint is defined, the webstore and httpd resources have both moved to node1.

[1.4] Define a colocation constraint so that the webstore and webip resources run on the same node

# Note:
httpd and nfs must run on the same node, and nfs and webip must also run on the same node.
* webstore depends on webip in order to run;
* if webip cannot run on a node, webstore will not be mounted there;
* if webstore cannot be mounted, the webip address is not affected.
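
The corresponding sketch for this step, under the same assumptions as the previous one:

    cat > coloc-webstore-webip.xml <<'EOF'
    <rsc_colocation id="webstore-with-webip" from="webstore" to="webip" score="INFINITY"/>
    EOF

    # webstore may only run where webip runs
    cibadmin -C -o constraints -x coloc-webstore-webip.xml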

# After this colocation constraint is defined, the webip, webstore, and httpd resources all run on node2.

[1.5] Define an order constraint fixing the sequence in which the webstore and httpd resources start

# Note: the webstore resource must be started before the httpd resource
* webstore is started first, then httpd;
* if webstore cannot be started, httpd is not started;
* when webstore is to be stopped, httpd is stopped first;
* if httpd cannot be stopped, webstore cannot be stopped either.
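
A hedged CLI equivalent of this order constraint (type="after" follows the Heartbeat v2 DTD; Pacemaker spells the same thing first="webstore" then="httpd"):

    cat > order-webstore-httpd.xml <<'EOF'
    <rsc_order id="httpd-after-webstore" from="httpd" type="after" to="webstore"/>
    EOF

    # start webstore first, then httpd; stopping runs in the reverse order
    cibadmin -C -o constraints -x order-webstore-httpd.xml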

# The first order constraint is now configured.

[1.6] Define an order constraint fixing the sequence in which the webip and httpd resources start

# Note: the webip resource must be started before the httpd resource
* webip is started first, then httpd;
* if webip cannot be started, httpd is not started;
* httpd is stopped before webip is stopped;
* if httpd cannot be stopped, the webip address is not affected.
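
And the matching sketch for this step, under the same assumptions:

    cat > order-webip-httpd.xml <<'EOF'
    <rsc_order id="httpd-after-webip" from="httpd" type="after" to="webip"/>
    EOF

    # start webip first, then httpd
    cibadmin -C -o constraints -x order-webip-httpd.xml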


[1.7] Switchover test
# The resources currently run on node2; put node2 into standby and observe the effect:
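
The same test can be driven from the shell with crm_standby. A minimal sketch; note that the node-selection flag is assumed here as -U (node uname, the Heartbeat v2 spelling), while later Pacemaker builds spell it -N, so check the local man page:

    # put node2 into standby; its resources should migrate away
    crm_standby -U node2 -v on

    # watch the resources come up on the surviving node
    crm_mon -1

    # bring node2 back online when the test is done
    crm_standby -U node2 -v off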


# All cluster resources have switched to the remaining node and run normally.

[1.8] Define a location constraint so that the resources prefer to stay on node2

# Click Add Expression in the lower-right corner and add the parameters (webip prefers node2)
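
The resulting CIB entry looks roughly like the sketch below. The INFINITY score mirrors the location example earlier in this article; the actual score entered in the GUI is not shown in the original, so treat it as an assumption:

    cat > loc-webip-node2.xml <<'EOF'
    <rsc_location id="webip-prefers-node2" rsc="webip">
      <rule id="webip-prefers-node2-rule" score="INFINITY">
        <expression id="webip-expr" attribute="#uname" operation="eq" value="node2"/>
      </rule>
    </rsc_location>
    EOF

    # webip (and, via the colocation constraints, webstore and httpd) prefers node2
    cibadmin -C -o constraints -x loc-webip-node2.xml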

[1.9] Summary of all resources and constraints

# As you can see, with the constraints in place, all the resources run on node2.













