Linux Cluster Setup


Source: http://oboguev.net/kernel-etc/linux-cluster-setup.html

Helpful reading:

https://alteeve.ca/w/AN!Cluster_Tutorial_2
https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial_-_Archive

RHEL 7 documentation:

RHEL 7 High Availability Add-On Administration

High Availability Add-On Reference

Global File System 2

Load Balancer Administration

http://clusterlabs.org
http://clusterlabs.org/quickstart-redhat.html
http://clusterlabs.org/quickstart-ubuntu.html
http://clusterlabs.org/quickstart-suse.html
http://clusterlabs.org/doc
http://clusterlabs.org/faq.html

Clusters from Scratch
Pacemaker Explained (reference)
SUSE documentation

"Pro Linux High Availability Clustering" (Kindle)

"CentOS High Availability" (Kindle)

http://corosync.org
https://alteeve.ca/w/Corosync
Google: corosync totem
Google: openais


Components of older (cman-based) clusters and their replacements:

/etc/cluster/cluster.conf => corosync.conf + cib.xml
system-config-cluster or Conga (luci + ricci) configuration UI => replaced by the (still deficient) pcs GUI on port 2224
rgmanager => Pacemaker
ccs => pcs




Set up a Corosync/Pacemaker cluster named VC composed of three nodes (vc1, vc2, vc3)

Based on Fedora Server 22.

Warning: a bug in the virt-manager Clone command could destroy the AppArmor profile on both the source and target virtual machines.
Replicate virtual machines manually, or at least back up the source machine's profile (located in /etc/apparmor.d/libvirt).


Network set-up:

It is desirable to set up separate network cards for general Internet traffic, SAN traffic and the cluster backchannel.
Ideally, interfaces should be link-aggregated (bonded or teamed) pairs, with each link in a pair connected to separate stacked switches.

The backchannel/cluster network can be two sub-nets (on separate interfaces) with a Corosync redundant ring configured across them; however, a bonded interface is easier to set up, more resilient to failures, and safely carries traffic for other components too. It is also possible to bind multiple addresses to the bonded interface and set up a Corosync redundant ring among them, but that does not make much sense.

The SAN network can be two sub-nets (on separate interfaces) with iSCSI multipathing configured between them; however, it can also be bonded: either using one sub-net for all SAN traffic (with disks dual-ported between iSCSI portals within the same sub-net, but at different addresses), or binding multiple sub-nets to the bonded interface (with disks dual-ported between iSCSI portals located on different sub-nets).

The general network is better bonded, so that each node can be conveniently accessed via a single IP address; however, a load balancer can instead be configured to use multiple addresses per node. Bonded interfaces are slightly preferable to teamed interfaces for clustering, as all link management for bonded interfaces happens in the kernel and does not involve user-land daemons (unlike in the teamed-interface set-up).
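
For reference, a Corosync redundant ring over two backchannel sub-nets would look roughly like the sketch below in /etc/corosync/corosync.conf (the 10.0.1.0/10.0.2.0 sub-nets are made up; pcs normally generates this file, and the bonded set-up described above avoids the need for it):

totem {
    version: 2
    rrp_mode: passive
    interface {
        ringnumber: 0
        bindnetaddr: 10.0.1.0
    }
    interface {
        ringnumber: 1
        bindnetaddr: 10.0.2.0
    }
}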

It makes sense to use dual-port network cards and scatter the general/SAN/cluster traffic ports between them, so a card failure does not bring down a whole network category.

If interfaces are bonded or teamed (rather than configured for separate sub-nets), the switches should allow cross-traffic, i.e. be either stackable (preferably) or have ISL/IST (Inter-Switch Link/Trunking, aka SMLT/DSMLT/R-SMLT). 802.1aq (Shortest Path Bridging) support may be desirable.

Note that an IPMI (AMT/SOL) interface cannot be included in the bond or team without losing its IPMI capability, since it ceases to be individually addressable (having its own IP address).
Thus if IPMI is to be used for fencing or remote management, the IPMI port should be left alone.

For a real physical NIC, you can identify a port with:

ethtool --identify ethX [N]   => blinks the port LED (for N seconds)

When hosting cluster nodes in KVM, create KVM macvtap interfaces (virtio/bridge).
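
For example, a libvirt domain interface definition for a macvtap (direct) interface looks like the sketch below (the host device name eth0 is an assumption):

<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>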

Bond interfaces:

About bonding
RHEL 7 documentation
More about bonding

Note that bonded/teamed interfaces in most setups do not provide increased data speed or bandwidth from one node to another. They provide failover, and may provide increased aggregate bandwidth for concurrent connections to multiple target hosts (but not to the same target host). However, see the workarounds further down below.
Use the NetworkManager GUI:

"+" -> select Bond
Add -> Create -> Ethernet -> select eth0
Add -> Create -> Ethernet -> select eth1
Link monitoring: MII => check media state
                 ARP => use ARP to "ping" specified IP addresses (comma-separated);
                        at least one responds -> link OK (can also configure to require all to respond)
Mode = 802.3ad                 => if linked to a real switch (802.3ad-compliant peer)
     = Adaptive load balancing => otherwise (if connected directly or via a hub, not a switch)
Monitoring frequency = 100 ms
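
Equivalently, the bond can be created from the command line with nmcli (a sketch; the slave device names eth0/eth1 are assumptions):

nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=balance-alb,miimon=100"
nmcli connection add type bond-slave ifname eth0 master bond0
nmcli connection add type bond-slave ifname eth1 master bond0
nmcli connection up bond0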

Or create the files:

/etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
NAME=bond0
TYPE=Bond
ONBOOT=yes
BONDING_MASTER=yes
BOOTPROTO=none
#DEFROUTE=yes
#IPV4_FAILURE_FATAL=no
#UUID=9D1C6D47-2246-4C74-9C62-ADF260D3FCFC
#BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=balance-rr"
BONDING_OPTS="miimon=100 updelay=0 downdelay=0 mode=balance-alb"
IPADDR=223.100.0.10
PREFIX=24
#IPV6INIT=yes
#IPV6_AUTOCONF=yes
#IPV6_DEFROUTE=yes
#IPV6_FAILURE_FATAL=no
#IPV6_PEERDNS=yes
#IPV6_PEERROUTES=yes
#IPV6_PRIVACY=no

/etc/sysconfig/network-scripts/ifcfg-bond0_slave_1

HWADDR=52:54:00:9c:32:50
TYPE=Ethernet
NAME="bond0 slave 1"
#UUID=97B83C1B-DE26-43F0-91E7-885EF758D0EC
ONBOOT=yes
MASTER=bond0
#MASTER=9D1C6D47-2246-4C74-9C62-ADF260D3FCFC
SLAVE=yes

/etc/sysconfig/network-scripts/ifcfg-bond0_slave_2

HWADDR=52:54:00:ce:b6:91
TYPE=Ethernet
NAME="bond0 slave 2"
#UUID=2bf74af0-191a-4bf3-b9df-36b930e2cc2f
ONBOOT=yes
MASTER=bond0
#MASTER=9D1C6D47-2246-4C74-9C62-ADF260D3FCFC
SLAVE=yes

nmcli device disconnect ifname
nmcli connection reload [ifname]
nmcli connection up ifname

route -n   => routes must go via bond0, not the slaves

Also make sure the default route is present.
If not, add to /etc/sysconfig/network: GATEWAY=xx.xx.xx.xx

To team interfaces instead:

dnf install -y teamd NetworkManager-team

Then configure the team interface with the NetworkManager GUI.
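
Or from the command line (a sketch; the runner choice and slave device names eth0/eth1 are assumptions):

nmcli connection add type team con-name team0 ifname team0 team.config '{"runner": {"name": "activebackup"}}'
nmcli connection add type team-slave ifname eth0 master team0
nmcli connection add type team-slave ifname eth1 master team0
nmcli connection up team0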

Bonded/teamed interfaces in most setups do not provide increased data speed or bandwidth from one node to another. They provide failover, and may provide increased aggregate bandwidth for concurrent connections to multiple target hosts (but not to the same target host). However, there are a couple of workarounds:

Option 1 (see the ifcfg sketch below):

use bonding mode=4 (802.3ad)
lacp_rate=0
xmit_hash_policy=layer3+4

The latter hashes using src (IP, port) and dst (IP, port).
Still no good for a single connection.

Option 2: create a separate VLAN for each port (on each of the nodes) and use bonding mode = adaptive load balancing.

Then an LACP-compliant bridge would consider the links separate and won't try to correlate the traffic and direct it via a single link according to xmit_hash_policy.
However, this would somewhat reduce failover capacity: for example, if node1's link on VLAN 1 and node2's link on VLAN 2 both fail, the two nodes lose connectivity to each other.
It also requires that all peer systems (such as iSCSI servers, iSNS, etc.) have their interfaces configured according to the same VLAN scheme.
Remember to enable jumbo frames: ifconfig ethX mtu 9000.
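
As a sketch, Option 1 plus jumbo frames could be expressed in the ifcfg-bond0 file from the bonding section above (the values are illustrative and should be matched to the actual switch configuration):

BONDING_OPTS="mode=802.3ad lacp_rate=0 xmit_hash_policy=layer3+4 miimon=100"
MTU=9000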


Prepare:

The names vc1, vc2, and vc3 below are the cluster node names.

On each node:

# set the node name
hostnamectl set-hostname vcX

# disable "captive portal" detection in Fedora
dnf install -y crudini
crudini --set /etc/NetworkManager/conf.d/21-connectivity-local.conf connectivity interval 0
systemctl restart NetworkManager


Cluster shells

Install:

dnf install -y pdsh clustershell

To use pdsh:

# non-interactive:
pdsh -R exec -f 1 -w vc1,vc2,vc3 cmd | dshbak
pdsh -R exec -f 1 -w vc[1-3] cmd | dshbak

# interactive:
pdsh -R exec -f 1 -w vc1,vc2,vc3
pdsh -R exec -f 1 -w vc[1-3]

cmd substitution:

%h => remote host name
%u => remote user name
%n => 0, 1, 2, 3 ...
%% => %
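
For example, with the exec rcmd module the command is run locally once per node with the substitutions applied, so a typical invocation is:

pdsh -R exec -f 1 -w vc[1-3] ssh %h uptime | dshbak -c
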
To set up for clush, configure password-less ssh.
Clumsy way:

ssh vc1
ssh-keygen -t rsa

ssh vc1 mkdir -p .ssh
ssh vc2 mkdir -p .ssh
ssh vc3 mkdir -p .ssh

ssh vc1 chmod 700 .ssh
ssh vc2 chmod 700 .ssh
ssh vc3 chmod 700 .ssh

cat .ssh/id_rsa.pub | ssh vc1 'cat >> .ssh/authorized_keys'
cat .ssh/id_rsa.pub | ssh vc2 'cat >> .ssh/authorized_keys'
cat .ssh/id_rsa.pub | ssh vc3 'cat >> .ssh/authorized_keys'
Ctrl-D

ssh vc2
ssh-keygen -t rsa
cat .ssh/id_rsa.pub | ssh vc1 'cat >> .ssh/authorized_keys'
cat .ssh/id_rsa.pub | ssh vc2 'cat >> .ssh/authorized_keys'
cat .ssh/id_rsa.pub | ssh vc3 'cat >> .ssh/authorized_keys'
Ctrl-D

ssh vc3
ssh-keygen -t rsa
cat .ssh/id_rsa.pub | ssh vc1 'cat >> .ssh/authorized_keys'
cat .ssh/id_rsa.pub | ssh vc2 'cat >> .ssh/authorized_keys'
cat .ssh/id_rsa.pub | ssh vc3 'cat >> .ssh/authorized_keys'
Ctrl-D
Cleaner way:

Create id_rsa.pub, id_rsa and authorized_keys on one node,
then replicate them to the other nodes in the cluster.
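
A minimal sketch of the cleaner way (assumes sharing the same key pair across all nodes is acceptable):

ssh vc1
ssh-keygen -t rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
for node in vc2 vc3; do
    ssh $node 'mkdir -p .ssh && chmod 700 .ssh'
    scp .ssh/id_rsa .ssh/id_rsa.pub .ssh/authorized_keys $node:.ssh/
done
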
To use clush:

clush -w vc1,vc2,vc3 -b [cmd]
clush -w vc[1-3] -b [cmd]
Basic cluster install:

On each node:

dnf install -y pcs fence-agents-all fence-agents-virsh resource-agents pacemaker

Optional: dnf install -y dlm lvm2-cluster gfs2-utils iscsi-initiator-utils lsscsi httpd

systemctl start firewalld.service
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --add-service=high-availability

# or, alternatively, run without a firewall:
systemctl stop firewalld.service
iptables --flush

## optionally disable SELinux:
# setenforce 0
# edit /etc/selinux/config and change SELINUX=enforcing => SELINUX=permissive

passwd hacluster

systemctl start pcsd.service
systemctl enable pcsd.service

# make sure no http_proxy is exported
pcs cluster auth vc1.example.com vc2.example.com vc3.example.com -u hacluster -p xxxxx --force
e.g.: pcs cluster auth vc1 vc2 vc3 -u hacluster -p abc123 --force

# the created auth data is stored in /var/lib/pcsd

On one node:

pcs cluster setup [--force] --name vc vc1.example.com vc2.example.com vc3.example.com

pcs cluster start --all

To stop: pcs cluster stop --all

On each node:

# to auto-start the cluster on reboot
# (alternatively, run 'pcs cluster start' manually after each reboot)
pcs cluster enable --all

To disable: pcs cluster disable --all

View status:

pcs status
pcs cluster status
pcs cluster pcsd-status
systemctl status corosync.service
journalctl -xe
cibadmin --query
pcs property list [--all] [--defaults]
corosync-quorumtool -oi [-i]
corosync-cpgtool
corosync-cmapctl [| grep members]
corosync-cfgtool -s
pcs cluster cib

Verify current configuration:

crm_verify --live --verbose

Start/stop a node:

pcs cluster stop vc2
pcs status
pcs cluster start vc2

Disable/enable hosting resources on a node (standby state):

pcs cluster standby vc2
pcs status
pcs cluster unstandby vc2

"Transactional" configuration:

pcs cluster cib my.xml                     # get a copy of the CIB into my.xml
pcs -f my.xml <change command ...>         # make config changes in my.xml
crm_verify --verbose --xml-file=my.xml     # verify config
pcs cluster cib-push my.xml                # push config from my.xml to the CIB
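
For example, a hypothetical transactional change that disables STONITH (any pcs configuration command accepts -f):

pcs cluster cib my.xml
pcs -f my.xml property set stonith-enabled=false
crm_verify --verbose --xml-file=my.xml
pcs cluster cib-push my.xml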

Configure STONITH

All agents: https://github.com/clusterlabs/fence-agents/tree/master/fence/agents
fence_virsh - fences a machine by SSHing to the VM host and executing sudo virsh destroy <vmid> or sudo virsh reboot <vmid>
Alternative to virsh: fence_virt/fence_xvm

dnf install -y fence-virt
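
A sketch of creating a fence_virsh STONITH resource for node vc1 (the VM host address, login, key file and VM name are assumptions; check the exact parameter names with pcs stonith describe fence_virsh):

pcs stonith create fence-vc1 fence_virsh \
    ipaddr=vmhost.example.com login=root identity_file=/root/.ssh/id_rsa \
    port=vc1 pcmk_host_list=vc1
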
STONITH is needed:

In resource-based (non-quorum) clusters, for obvious reasons.

In two-node clusters without a quorum disk (a special case of the above), for obvious reasons.

In quorum-based clusters, because Linux clustering solutions including Corosync and cman run as user-level processes and are unable to interdict user-level and kernel-level activity on the node when the cluster node loses its connection to the majority-votes partition. By comparison, in VMS, CNXMAN is a kernel component which makes all CPUs spin in IOPOST by requeueing the req ...
