Directory:
I. Definition of a high-availability cluster
II. Metrics for high-availability clusters
III. Hierarchical structure of high-availability clusters
IV. Classification of high-availability clusters
V. Common high-availability cluster software
VI. Shared storage
VII. Cluster file systems and cluster LVM
VIII. How high-availability clusters work
I. Definition of a high-availability cluster
A high-availability cluster (High Availability Cluster, abbreviated HA Cluster) is, simply put, a group of computers that, as a whole, provides users with a set of network resources. The individual computer systems are the nodes of the cluster.
High-availability clusters exist to keep the cluster's overall service as available as possible, thereby reducing the losses caused by hardware failures and software errors. If a node fails, its redundant node takes over its duties within seconds, so from the user's point of view the cluster never stops.
The main function of high-availability cluster software is to automate fault detection and service failover. A high-availability cluster with only two nodes is also known as dual-machine hot standby, that is, two servers backing each other up. When one server fails, the other takes over its service tasks automatically, so the system continues to provide service without human intervention. Dual-machine hot standby is only one kind of high-availability cluster; high-availability cluster systems can support more than two nodes, providing more advanced features than dual-machine hot standby and better meeting users' ever-changing needs.
II. Metrics for high-availability clusters
HA (High Availability) clusters are measured by system reliability and maintainability. In engineering, the reliability of a system is usually measured by mean time to failure (MTTF), and its maintainability by mean time to repair (MTTR). Availability is then defined as: HA = MTTF / (MTTF + MTTR) × 100%
Specific HA measurement criteria:
99%: downtime of less than 4 days a year
99.9%: downtime of less than 10 hours a year
99.99%: downtime of less than 1 hour a year
99.999%: downtime of less than 6 minutes a year
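The formula above is easy to check numerically. The sketch below (illustrative code, not from the original article) computes availability from MTTF/MTTR and converts each "number of nines" into a maximum yearly downtime, which is where the figures in the table come from:

```python
# HA = MTTF / (MTTF + MTTR), as defined in the text.
def availability(mttf_hours, mttr_hours):
    return mttf_hours / (mttf_hours + mttr_hours)

def max_yearly_downtime_hours(avail):
    """Maximum downtime per year (365 days = 8760 h) at a given availability."""
    return (1 - avail) * 8760

if __name__ == "__main__":
    # e.g. a node that runs 999 hours between failures and takes 1 hour to repair
    a = availability(999, 1)          # 999 / 1000 = 0.999, i.e. "three nines"
    print(f"availability = {a:.3%}")
    for level in (0.99, 0.999, 0.9999, 0.99999):
        print(f"{level:.3%} -> at most {max_yearly_downtime_hours(level):.2f} h/year down")
```

Running it shows, for example, that 99.999% availability allows about 0.0876 hours, roughly 5 minutes, of downtime per year, consistent with the "less than 6 minutes" bound above.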
III. Hierarchical structure of high-availability clusters
Description: A high-availability cluster can be divided into three layers: the Messaging and Membership layer, the Cluster Resource Manager (CRM) layer, and the Local Resource Manager (LRM) and Resource Agent (RA) layer (shown in red, blue, and green respectively in the original article's diagram). Each is described below.
1. At the bottom is the Messaging and Membership layer. Messaging is mainly used to pass heartbeat information between nodes, so it is also called the heartbeat layer; heartbeats can be transmitted by broadcast, multicast, unicast, and so on. The most important function of the Membership layer is to produce a complete membership view from the information provided by the Messaging layer, computed by the DC (designated coordinator) through the Cluster Consensus Membership service (CCM or CCS). This layer plays a connecting role: it passes the membership view derived from the lower layer's information up to the upper layer to report each node's working state, and the upper layer in turn carries out the actual isolation of a failed node.
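The core idea of the heartbeat/membership layer, declaring a node a member only while its heartbeats keep arriving, can be sketched as a small model. This is a hypothetical illustration, not the real CCM API; the interval and dead-time values are arbitrary:

```python
import time

HEARTBEAT_INTERVAL = 1.0   # seconds between heartbeats (illustrative value)
DEADTIME = 3.0             # declare a node dead after this long without a heartbeat

class MembershipTracker:
    """Toy model of the messaging/membership layer: record heartbeat
    arrival times and compute the current member set."""
    def __init__(self):
        self.last_seen = {}

    def on_heartbeat(self, node, now=None):
        # Called whenever a heartbeat packet from `node` is received.
        self.last_seen[node] = now if now is not None else time.time()

    def members(self, now=None):
        # A node is a member iff it was heard from within DEADTIME.
        now = now if now is not None else time.time()
        return sorted(n for n, t in self.last_seen.items() if now - t <= DEADTIME)

tracker = MembershipTracker()
tracker.on_heartbeat("node1", now=100.0)
tracker.on_heartbeat("node2", now=100.5)
print(tracker.members(now=102.0))   # ['node1', 'node2'] - both heard recently
tracker.on_heartbeat("node1", now=102.5)
print(tracker.members(now=104.0))   # ['node1'] - node2 exceeded DEADTIME
```

In a real cluster the membership result is then handed to the upper (CRM) layer, which decides what to do about the missing node.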
2. The Cluster Resource Manager layer is the layer that truly implements the cluster service. Each node in this layer runs a cluster resource manager (CRM, Cluster Resource Manager), which provides the core components for high availability, including resource definitions, attributes, and so on. On each node, the CRM maintains a CIB (the cluster information base, an XML document) and an LRM (Local Resource Manager) component. Only the CIB on the DC (designated coordinator) can be modified; the CIBs on the other nodes are copies of the DC's. The LRM is the concrete executor that starts and stops a resource locally as instructed by the CRM. When a node fails, the DC decides, through the PE (Policy Engine) and TE (Transition Engine), whether to take over its resources.
3. The Resource Agent layer (Resource Agents). A cluster resource agent is a script that can manage the start, stop, and status of a resource belonging to the cluster on the local node. Resource agents come in several classes: LSB (/etc/init.d/*), OCF (more professional and more general than LSB), and Legacy Heartbeat (v1-style resource management).
A more specific description of the core components:
1. CCM component (Cluster Consensus Membership service): links to and monitors the underlying heartbeat information. When heartbeats are no longer detected, it collects the voting and convergence state of the whole cluster and forwards the result to the upper layer, which then decides what to do. The CCM can also generate a topology overview of each node's state, from the local node's perspective, to ensure that the node can take the appropriate action in special cases.
2. CRMd component (Cluster Resource Manager daemon, also known as Pacemaker): implements resource allocation; every resource-allocation action goes through the CRM, which makes it the core component. The CRM on each node maintains a CIB that defines resources' specific attributes and which resources are defined on which node.
3. CIB component (Cluster Information Base): an XML-format configuration file describing the cluster's resources. It is persisted to a file but resident in memory while the cluster is running, and changes must be propagated to the other nodes: only the CIB on the DC can be modified, and the CIBs on the other nodes are copies of the DC's. The CIB can be configured either from the command line or through a graphical front end.
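To make the CIB concrete, here is a heavily simplified, illustrative fragment of what such an XML document can look like in Pacemaker-style clusters (real CIBs carry many more attributes, such as schema version and epoch counters; the resource shown, a floating IP managed by the OCF IPaddr2 agent, and its values are examples):

```xml
<cib>
  <configuration>
    <crm_config/>
    <nodes>
      <node id="1" uname="node1"/>
      <node id="2" uname="node2"/>
    </nodes>
    <resources>
      <!-- a floating virtual IP managed by the ocf:heartbeat:IPaddr2 agent -->
      <primitive id="vip" class="ocf" provider="heartbeat" type="IPaddr2">
        <instance_attributes id="vip-attrs">
          <nvpair id="vip-ip" name="ip" value="192.168.1.100"/>
        </instance_attributes>
      </primitive>
    </resources>
    <constraints/>
  </configuration>
  <status/>
</cib>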
4. LRMd component (Local Resource Manager daemon): obtains the state of local resources and carries out local resource management, for example starting a local service process when the corresponding heartbeat/cluster instruction is received.
5. PEngine components:
PE (Policy Engine): defines the set of transition steps for moving resources, but acts only as a strategist; it does not itself participate in the resource transfer, instead letting the TE execute the strategy it produces.
TE (Transition Engine): executes the strategy made by the PE. The PE and TE run only on the DC.
6. STONITHd component
STONITH (Shoot The Other Node In The Head, a "headshot") operates the power switch directly. When a node fails and the other nodes detect it, they issue a command over the network to control the failed node's power switch; by briefly cutting power and restoring it, the failed node is forcibly restarted. This requires hardware support.
STONITH application case (master/standby servers): suppose the primary server is so busy serving requests that it has no time to answer heartbeats. If the standby server immediately seizes the service resources even though the primary is not actually down, the result is resource contention: users can reach both servers. If only read operations are involved this may be tolerable, but if writes occur the file system will be corrupted. Therefore, an isolation step is applied before any takeover: when the standby server is about to seize the resources, it first STONITHs the primary server, the "headshot" described above.
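The takeover rule in this case, fence first, and take over only if fencing succeeded, can be sketched as a toy decision function (hypothetical API, not real Pacemaker code; the callbacks stand in for a liveness probe, a STONITH device, and resource startup):

```python
# "Fence before takeover": the standby must STONITH the silent peer before
# claiming resources, so a merely-slow primary can never keep writing.
def standby_failover(peer_alive, fence_peer, take_over):
    """peer_alive/fence_peer/take_over are callbacks (illustrative)."""
    if peer_alive():
        return "peer healthy - do nothing"
    if not fence_peer():            # power-cycle the unresponsive peer first
        return "fencing failed - do NOT take over (risk of split-brain)"
    return take_over()              # safe: the peer is provably powered off

# usage: simulate a primary that stopped answering heartbeats
events = []
result = standby_failover(
    peer_alive=lambda: False,
    fence_peer=lambda: events.append("fenced") or True,
    take_over=lambda: "resources acquired",
)
print(events, result)   # ['fenced'] resources acquired
```

The key design point is the middle branch: if fencing cannot be confirmed, the standby refuses to take over, because corrupting shared data is worse than a longer outage.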
Copyright notice: this is the blogger's original article; do not reproduce without the blogger's permission.
Linux High Availability (HA) cluster basic concepts