The following step-by-step article describes how to configure a shared disk cluster for SQL Server on a Red Hat Enterprise Linux system and how to use it (for testing and learning only; a basic-level article).
I. Creating the shared disks and the cluster
Microsoft official configuration document: https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-shared-disk-cluster-red-hat-7-configure.
The Linux cluster structure diagram is as follows:
The specific configuration steps are as follows:
1. Installing and Configuring SQL Server
A) First, install SQL Server on two machines that will serve as the two cluster nodes (install on more machines if more nodes are required); for the installation itself, please refer to the blog post "SQL Server on Red Hat Enterprise Linux--rhel (Full)".
b) Stop and disable the SQL Server service on the secondary side:
sudo systemctl stop mssql-server
sudo systemctl disable mssql-server
c) Back up and synchronize the server master key (SQL Server on Linux runs as the local user mssql, so the nodes cannot recognize each other's encryption by default; you therefore need to copy the encryption key from the primary node to the other nodes so that they can successfully decrypt the server master key):
- On the secondary node, back up the original machine-key:
sudo su
cd /var/opt/mssql/secrets
mv machine-key machine-key.original.bak
- On the primary node, copy the machine-key to the secondary node:
sudo su
cd /var/opt/mssql/secrets/
scp machine-key <user>@<secondary node IP address>:/var/opt/mssql/secrets/
- On the secondary node, check that the copy succeeded and set the correct ownership:
ls
chown mssql:mssql machine-key
d) On the primary node, create a SQL login for the Pacemaker agent and grant it sufficient permissions to run sp_server_diagnostics.
Start the SQL Server service first:
sudo systemctl start mssql-server
Connect to SQL Server:
sqlcmd -S localhost -U sa -P '<your password>'
Execute the following SQL statements to create the login and grant the required permission:
USE [master]
GO
CREATE LOGIN [<loginName>] WITH PASSWORD = N'<loginPassword>'
GO
GRANT VIEW SERVER STATE TO <loginName>
GO
Exit sqlcmd:
exit
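As an optional check that is not part of the original steps, you can verify that the new login is actually able to run sp_server_diagnostics before handing it to Pacemaker (same placeholders as above):
# Should return one set of health-state rows rather than a permission error
sqlcmd -S localhost -U <loginName> -P '<loginPassword>' -Q "EXEC sp_server_diagnostics"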
e) Stop and disable the SQL Server service on the primary side:
sudo systemctl stop mssql-server
sudo systemctl disable mssql-server
f) Configure the /etc/hosts file on each node so that the nodes can resolve each other by name.
sudo vi /etc/hosts
An example of the completed configuration is sketched below:
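A minimal sketch of the resulting entries, assuming hypothetical node names (sqlnode1, sqlnode2) and IP addresses; substitute your own values:
# /etc/hosts on every node (hypothetical names and addresses)
10.2.38.178   sqlnode1
10.2.38.179   sqlnode2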
2. Configuring shared disks and transferring database files
There are a number of solutions that provide shared disks. The following briefly describes how to configure a shared disk with NFS. For better security it is recommended to configure NFS with Kerberos: https://www.certdepot.net/rhel7-use-kerberos-control-access-nfs-network-shares/. What follows is only the simplest approach, intended for simple testing and learning.
Configuring Shared Disks with NFS
Find another RHEL machine to act as the NFS server and execute the following commands on it (because this is only for testing and learning, one of the cluster nodes is used as the NFS server here):
A) Install the NFS package:
sudo yum install nfs-utils
b) Enable and start the rpcbind service:
sudo systemctl enable rpcbind && sudo systemctl start rpcbind
c) Enable and start the nfs-server service:
sudo systemctl enable nfs-server && sudo systemctl start nfs-server
d) Edit the /etc/exports file to set the storage path that you want to share; note that each share goes on its own line:
sudo vi /etc/exports
An example configuration is sketched below:
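A minimal sketch of an /etc/exports entry, assuming a hypothetical share path (/mnt/nfs) and client subnet; the export options are also an assumption and should be adjusted to your environment:
# /etc/exports (hypothetical path, subnet, and options)
/mnt/nfs    10.2.38.0/24(rw,sync,no_subtree_check,no_root_squash)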
e) Export the shares and check whether the export succeeded:
sudo exportfs -rav
sudo showmount -e
f) Add the SELinux exception for NFS:
sudo setsebool -P nfs_export_all_rw 1
g) Allow the related services through the firewall:
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --reload
Setting up NFS on all cluster nodes
Run the following commands on every cluster node to make sure each can access the NFS shared disk:
A) Install the NFS package:
sudo yum install nfs-utils
b) Allow related services to communicate in the firewall:
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --reload
c) Confirm that you can see the NFS share:
sudo showmount -e <IP of NFS server>
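If the export is visible, the output looks roughly as follows (assuming the hypothetical NFS server IP, share path, and subnet used in this article's examples):
Export list for 10.2.38.178:
/mnt/nfs 10.2.38.0/24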
For more documentation resources on NFS, refer to the following sites:
- NFS Servers and Firewalld | Stack Exchange
- Mounting an NFS Volume | Linux Network Administrators Guide
- NFS Server Configuration
Set the database file path to a shared disk
Transfer the database files to the shared disk:
A) On the primary node, copy the database files to the temporary path /var/opt/mssql/tmp and empty the data directory:
su mssql
mkdir /var/opt/mssql/tmp
cp /var/opt/mssql/data/* /var/opt/mssql/tmp
rm /var/opt/mssql/data/*
exit
b) Edit the /etc/fstab file on all nodes so that the NFS shared disk is mounted automatically after the system reboots:
<IP of NFS server>:<shared_storage_path> <database_files_directory_path> nfs timeo=14,intr
An example is sketched below:
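A sketch of such a line, reusing the hypothetical NFS server IP and share path from the later example in this article:
# /etc/fstab (hypothetical values)
10.2.38.178:/mnt/nfs   /var/opt/mssql/data   nfs   timeo=14,intr   0 0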
Note (excerpt from Microsoft):
c) Mount the NFS storage that you just configured:
sudo mount -a
You can run the mount command to check whether it was mounted successfully:
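For example, assuming the database directory used in this article:
# The NFS share should appear mounted on the SQL Server data directory
mount | grep /var/opt/mssql/data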
d) On the primary node, copy the database files from the temporary path to the newly mounted path, and make sure the local user mssql has read and write permissions:
chown mssql /var/opt/mssql/data
chgrp mssql /var/opt/mssql/data
su mssql
cp /var/opt/mssql/tmp/* /var/opt/mssql/data/
rm /var/opt/mssql/tmp/*
exit
e) Start the SQL Server service on the primary node to validate the setup; if it starts successfully, SQL Server is now using the shared disk on the NFS server. Then stop the service again:
sudo systemctl start mssql-server
sudo systemctl status mssql-server
sudo systemctl stop mssql-server
f) On the other (non-primary) nodes, start the SQL Server service in the same way to verify that it starts successfully, then stop it.
Note: the SQL Server instances on all nodes currently use the shared disk on the NFS server. Following Microsoft's recommendation, to prevent conflicts you need a File System cluster resource so that the shared path cannot be mounted by more than one node at a time.
3. Installing and Configuring Pacemaker
Perform the following operations in turn on every cluster node:
A) Create a file that stores the credentials of the SQL Server login created earlier for Pacemaker:
sudo touch /var/opt/mssql/secrets/passwd
echo '<loginName>' | sudo tee -a /var/opt/mssql/secrets/passwd
echo '<loginPassword>' | sudo tee -a /var/opt/mssql/secrets/passwd
sudo chown root:root /var/opt/mssql/secrets/passwd
sudo chmod 600 /var/opt/mssql/secrets/passwd
b) Set the firewall to allow Pacemaker service communication:
sudo firewall-cmd --permanent --add-service=high-availability
sudo firewall-cmd --reload
Note: If you are using a different firewall tool, you need to open the following ports:
TCP: ports 2224, 3121, 21064
UDP: port 5405
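As an illustration (an assumption, not part of the original article), opening these ports explicitly with firewalld would look like the following; translate to your own firewall tool as needed:
sudo firewall-cmd --permanent --add-port=2224/tcp --add-port=3121/tcp --add-port=21064/tcp --add-port=5405/udp
sudo firewall-cmd --reload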
c) Install the Pacemaker package:
sudo yum install pacemaker pcs fence-agents-all resource-agents
d) Set the password for the default user hacluster that was created when Pacemaker and Corosync were installed:
sudo passwd hacluster
e) Enable and start the pcsd service, and enable pacemaker:
sudo systemctl enable pcsd
sudo systemctl start pcsd
sudo systemctl enable pacemaker
f) Install the FCI resource agent:
sudo yum install mssql-server-ha
4. Creating the Cluster
The following steps formally create the cluster:
A) Create the cluster on the primary node:
sudo pcs cluster auth <nodeName1> <nodeName2> ... -u hacluster
sudo pcs cluster setup --name <clusterName> <nodeName1> <nodeName2> ...
sudo pcs cluster start --all
An example is sketched below:
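A sketch using the hypothetical node names (sqlnode1, sqlnode2) and a hypothetical cluster name (sqlcluster); substitute your own:
sudo pcs cluster auth sqlnode1 sqlnode2 -u hacluster
sudo pcs cluster setup --name sqlcluster sqlnode1 sqlnode2
sudo pcs cluster start --all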
b) Currently, CTP 2.1 does not provide fencing agents for Hyper-V or cloud environments, so fencing has to be disabled for now (disabling it in a production environment is not recommended):
sudo pcs property set stonith-enabled=false
sudo pcs property set start-failure-is-fatal=false
c) Configure the cluster resources. You may need to prepare the following information:
- SQL Server resource name: the name of the SQL Server (FCI) resource.
- Timeout value: how long the cluster waits for the resource to come online; for SQL Server this is roughly the time the master database needs to start.
- Floating IP resource name: the name of the virtual IP resource.
- IP address: the IP address that clients use to connect to the clustered SQL Server instance.
- File System resource name: the name of the file system resource.
- device: the NFS share path.
- directory: the local mount path.
- fstype: the type of the file share, for example nfs.
The script is as follows:
sudo pcs cluster cib cfg
sudo pcs -f cfg resource create <sqlServerResourceName> ocf:mssql:fci op defaults timeout=<timeout_in_seconds>
sudo pcs -f cfg resource create <floatingIPResourceName> ocf:heartbeat:IPaddr2 ip=<IP address>
sudo pcs -f cfg resource create <fileShareResourceName> Filesystem device=<networkPath> directory=<localPath> fstype=<fileShareType>
sudo pcs -f cfg constraint colocation add <virtualIPResourceName> <sqlResourceName>
sudo pcs -f cfg constraint colocation add <fileShareResourceName> <sqlResourceName>
sudo pcs cluster cib-push cfg
Examples are as follows:
sudo pcs cluster cib cfg
sudo pcs -f cfg resource create mssqlha ocf:mssql:fci op defaults timeout=60s
sudo pcs -f cfg resource create virtualip ocf:heartbeat:IPaddr2 ip=10.2.38.180
sudo pcs -f cfg resource create fs Filesystem device="10.2.38.178:/mnt/nfs" directory="/var/opt/mssql/data" fstype="nfs"
sudo pcs -f cfg constraint colocation add virtualip mssqlha
sudo pcs -f cfg constraint colocation add fs mssqlha
sudo pcs cluster cib-push cfg
When the configuration is complete, SQL Server runs on one of the cluster nodes.
d) Use the following command to confirm that the SQL Server-related resources in the cluster are healthy:
sudo pcs status
The following shows the normal startup state:
e) After everything has started normally, you can access SQL Server through the cluster's virtual IP:
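For example, a minimal connection sketch using sqlcmd and the hypothetical virtual IP (10.2.38.180) from the example above:
sqlcmd -S 10.2.38.180 -U sa -P '<your password>'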
Note:
- If some resources have problems starting, you can use the following command to debug the startup:
sudo pcs resource debug-start <resource ID>
If there is no problem, or after the problem has been fixed, restart the cluster services (this takes a little while):
sudo pcs cluster stop --all
sudo pcs cluster start --all
II. Management and use of the cluster (basic)
- Microsoft official introduction: https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-shared-disk-cluster-red-hat-7-operate.
Only one common operation is covered here for now: failover, which moves a resource to a target node:
sudo pcs resource move <sqlResourceName> <targetNodeName>
sudo pcs resource clear <sqlResourceName>
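A sketch using the hypothetical resource and node names from the earlier examples:
sudo pcs resource move mssqlha sqlnode2
sudo pcs resource clear mssqlha
# Confirm where the resources are now running
sudo pcs status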
- The shared disk cluster built here for SQL Server on Linux is not the same as a Windows SQL Server cluster (hence the word "class" in the title), and you cannot tell from a SQL query whether the instance is clustered. The following queries currently do not detect it as a cluster and return no information:
SELECT SERVERPROPERTY('IsClustered');
SELECT * FROM ::fn_virtualservernodes();
SELECT * FROM sys.dm_os_cluster_nodes;
SELECT * FROM sys.dm_io_cluster_shared_drives;
The basis for this can be found on the following page, although it is not clear whether this will change in the future: https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-shared-disk-cluster-red-hat-7-configure.
- Pacemaker is a very mature open source high availability cluster solution on Linux. The following site describes the Pacemaker GUI: https://keithtenzer.com/2015/06/22/pacemaker-the-open-source-high-availability-cluster/?utm_source=tuicool&utm_medium=referral
- Other relevant useful sites:
- Configuring the Red Hat High Availability Add-On with Pacemaker - Fencing: Configuring STONITH
- Pacemaker Access Control Lists
- How to create and edit text files using the vi editor: basic vi commands, command mode, insert mode.
- Linux Mount and Umount
Note: This document is based on a preview release and may continue to be updated in the future.
[Original article; please credit the source when reposting. For study purposes only. If you find any errors, please leave a comment. Thank you for your support.]
Configure "class" SQL Server cluster features on the Red Hat Enterprise Linux shared disk cluster for SQL Server--rhel