Building an iSCSI File Server Failover Cluster


Failover Clustering can provide a highly available environment for an application or service. This chapter explains how to build an iSCSI SAN file server failover cluster.

Failover Clustering Overview

A failover cluster is a group of servers that work together to provide highly available applications or services. Each server in the cluster is called a node, and the nodes are connected to one another and to users through networking hardware and software. When one node fails, another node takes over its workload, a process known as failover, which lets users continue to enjoy the services provided by the server without interruption.

It has been too hot lately to write everything up in full, so I will jot down a few notes and then walk through the lab.

Cluster Quorum Configuration

When a node in a cluster fails, another node takes over and continues to provide the service. However, if communication between nodes breaks down or too many nodes fail, the cluster stops the service. How many node failures a cluster can tolerate is determined by its quorum configuration. Quorum means the minimum legal number: as long as the number of functioning nodes in the cluster reaches quorum, the cluster continues to provide services; otherwise it stops. While the service is stopped, the surviving nodes keep checking whether the failed nodes have returned to normal, and once the number of healthy nodes is restored to quorum, the cluster resumes providing services.

When calculating quorum, some configurations count a quorum disk, also known as a witness disk. Quorum configurations fall into the following types.

    • Node Majority

      This configuration does not use a quorum disk. The cluster provides services as long as the majority of its nodes are healthy; otherwise it stops. This configuration suits clusters with an odd number of nodes. For example, in a 5-node cluster at least 3 nodes must be healthy for the cluster to provide services.

    • Node and Disk Majority

      Suited to clusters with an even number of nodes; the quorum disk counts toward quorum. For example, a cluster of 4 nodes plus 1 quorum disk can be treated as a 5-vote cluster, so at least 3 of those votes must be healthy for the cluster to provide services.

    • Node and File Share Majority

      Similar to Node and Disk Majority, but the quorum disk is replaced by a witness file inside a shared folder.

    • No Majority: Disk Only

      The cluster provides services as long as the quorum disk is online, no matter how many nodes are healthy; if the quorum disk goes offline, the cluster stops providing services. This makes the disk a single point of failure and is not recommended.
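For reference, once a cluster exists, these four modes correspond to switches of the Set-ClusterQuorum cmdlet in the FailoverClusters PowerShell module. A minimal sketch (the disk and share names here are placeholders):

    # View the current quorum configuration
    Get-ClusterQuorum

    # Node Majority (odd number of nodes)
    Set-ClusterQuorum -NodeMajority

    # Node and Disk Majority (even number of nodes)
    Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"

    # Node and File Share Majority
    Set-ClusterQuorum -NodeAndFileShareMajority "\\server\witness"

    # No Majority: Disk Only (not recommended)
    Set-ClusterQuorum -DiskOnly "Cluster Disk 1"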

Demo: Creating a Failover Cluster

We will demonstrate how to build a two-node iSCSI SAN file server failover cluster, whose quorum configuration defaults to Node and Disk Majority.

Hardware and software requirements

To create a two-node cluster, the hardware and software must meet the following requirements. You can run the lab in VMware Workstation, as I did earlier, or in Hyper-V, since this Windows Server 2012 experiment does not itself involve virtualization.

Server

The cluster requires an AD domain: the node servers in the cluster must belong to the same domain and play the same domain role, that is, all be member servers or all be domain controllers. In our environment the domain is contoso.com. The DC (upper left in the original diagram) is the domain controller, Node1 and Node2 are the two nodes, and the bottom server is the target server, on which we will install the iSCSI Target Server software built into Windows Server 2012; it does not need to join the domain.

For the cluster to function properly, both node servers should use the same or similar hardware, run the same version of the operating system with the same architecture (32-bit or 64-bit), and have the same service packs and software updates installed. In Windows Server 2012 the Failover Clustering feature is included in both the Standard and Datacenter editions; here the two node servers run Windows Server 2012 Datacenter. For convenience, the domain controller and the target server also run Datacenter.

Network settings

Each of the two node servers has three network cards, connected respectively to the Public, Private, and iSCSI networks.

    • Public network: Each node server has one NIC connected to the public network, through which the nodes communicate with the domain controller; clients also connect to the node servers through this network.
    • Private network: The two node servers must monitor each other's heartbeat at all times to know each other's health state, so it is recommended that the nodes communicate through a dedicated private network.

      To improve resiliency, we will also allow the nodes to communicate through the public network described above: if the nodes cannot reach each other through the private network, they can fall back to the public network.

    • iSCSI network: Each node has one NIC connected to the iSCSI network, through which it reaches the target server and accesses files on the storage media. This network should be used only as a private network for iSCSI-protocol communication between the nodes and the target server, and not for any other purpose. The NICs the two nodes use to connect to the iSCSI network should be identical, and the iSCSI network should use a high-speed switch (1 Gb, 10 Gb, or faster).

Note: To avoid a single point of failure affecting the operation of the cluster, it is recommended to build redundancy into the communication paths between the nodes and the clients and between the nodes and the target server. For example, between the nodes and the clients you can install two NICs in each node, connected to two networks that can both reach the clients, or use NIC teaming for fault tolerance. The public network may use a teamed NIC, but do not team the private network's NICs, to avoid teaming-induced delays affecting the real-time heartbeat between nodes. In addition, iSCSI does not support teaming, so do not use a teamed NIC on the iSCSI network. Teaming combines several NICs in one computer into a single virtual NIC through the driver; other computers communicate with this computer through the virtual NIC, while the data is actually carried over the physical NICs. A teamed NIC improves transmission speed and provides load balancing and failover. Windows Server 2012 has NIC teaming built in.
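As a minimal sketch of the built-in teaming, assuming two public-facing adapters named Public1 and Public2 (hypothetical names):

    # Combine two physical NICs into one switch-independent team for the public network
    New-NetLbfoTeam -Name "PublicTeam" -TeamMembers "Public1","Public2" -TeamingMode SwitchIndependent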

Target server and storage media

Windows Server 2012 requires the storage media to comply with the SCSI Primary Commands-3 (SPC-3) standard, and in particular they must support SPC-3 Persistent Reservations. We will use the iSCSI Target Server built into Windows Server 2012 to build a target server whose iSCSI virtual disks satisfy the SPC-3 requirements above.

This example is a two-node file server cluster, so the quorum configuration is Node and Disk Majority. Besides the file disk where the shared files are stored, a quorum disk is also required. Both disks must meet the following requirements:

    • Must be a basic disk and cannot be a dynamic disk.
    • The quorum disk must be formatted as NTFS; the file disk is not required to be NTFS, but NTFS is recommended.
    • The disk partition style can be MBR or GPT.
    • The quorum disk and file disk in this walkthrough are simulated by files on the target server's local C: drive, named C:\iSCSIVirtualDisks\Quorum.vhd and C:\iSCSIVirtualDisks\Files.vhd.
iSCSI SAN Two-Node File Server Cluster Demo

We will proceed step by step to reduce the chance of error.

Prepare the network environment and the 4 servers
    • The DC's NIC is connected to the public network, the target server's NIC is connected to the iSCSI network, and the three NICs in each of the two nodes connect to the three networks respectively.

    • The target server should have at least two disks ready (simulated here by virtual disk files).
    • Install Windows Server 2012 Datacenter on all 4 servers.

      After installation completes, rename the computers to DC1, Node1, Node2, and Target.

    • It is recommended to rename the three NIC connections on each of the two node servers for easy identification.
    • Be sure to run ping tests to confirm that the servers can communicate properly.
    • Promote DC1 to a domain controller.
    • Join Node1 and Node2 to the domain.
    • On Node1 and Node2, open the Remote Volume Management firewall exception, or you will not be able to create shared folders within the cluster (a PowerShell sketch of these preparation steps follows this list).
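The renaming and domain-join steps can also be scripted. A minimal sketch, assuming the names and domain used in this walkthrough (the default adapter names such as "Ethernet" are assumptions; check Get-NetAdapter first):

    # Rename the computer (run on each server, then let it reboot)
    Rename-Computer -NewName "Node1" -Restart

    # Rename the three NIC connections for easy identification (on each node)
    Rename-NetAdapter -Name "Ethernet"   -NewName "Public"
    Rename-NetAdapter -Name "Ethernet 2" -NewName "Private"
    Rename-NetAdapter -Name "Ethernet 3" -NewName "iSCSI"

    # Join the contoso.com domain (run on Node1 and Node2)
    Add-Computer -DomainName "contoso.com" -Credential contoso\Administrator -Restart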
Adjust the node settings

To make the cluster run more efficiently, it is recommended to adjust the settings of both node servers. For example, so that the two node servers can monitor each other's heartbeat in real time through the private network, you should avoid sending unrelated traffic on that network; the iSCSI network is likewise used only as a private network for iSCSI-protocol communication between the nodes and the target server, so unrelated traffic should be avoided there as well. It is therefore best to disable DNS and WINS traffic on the node servers' connections to these two networks.

    1. Log in to Node1, open the Private NIC's properties, and clear Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks, leaving only IPv4 enabled.

Note: Client for Microsoft Networks is used to access shared files and printers on other computers on the network, and File and Printer Sharing for Microsoft Networks lets other computers on the network access shared files and printers on the local computer. Because no such interaction takes place over the Private and iSCSI networks, both components can be disabled.

    2. Do not configure DNS servers on this connection.

    3. Uncheck Register this connection's addresses in DNS.

    4. Disable the NetBIOS-related features.

    5. Repeat the above settings for the iSCSI NIC.
    6. Because the Private and iSCSI networks each have a dedicated purpose, adjust the binding order so that all other types of traffic prefer the Public network.

    7. Repeat the above steps on Node2.

Note: The two nodes must have the same updates installed; however, it is not recommended to enable Automatic Updates. Instead, the system administrator should apply updates manually to ensure both nodes have identical updates.
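The per-NIC settings above can also be applied with PowerShell. A minimal sketch, assuming the connections were renamed Private and iSCSI as above (the NetBIOS value 2 means "disable NetBIOS over TCP/IP"):

    foreach ($nic in "Private", "iSCSI") {
        # Unbind Client for Microsoft Networks and File and Printer Sharing
        Disable-NetAdapterBinding -Name $nic -ComponentID ms_msclient
        Disable-NetAdapterBinding -Name $nic -ComponentID ms_server

        # Do not register this connection's addresses in DNS
        Set-DnsClient -InterfaceAlias $nic -RegisterThisConnectionsAddress $false

        # Disable NetBIOS over TCP/IP (2 = disable) through WMI
        $adapter = Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID = '$nic'"
        $config  = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "Index = $($adapter.Index)"
        $null = $config.SetTcpipNetbios(2)
    }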

Settings for target server and storage media
    1. On the target server, add the iSCSI Target Server role.

    2. Click File and Storage Services.

    3. Create a new iSCSI virtual disk.

    4. Select the disk path.

    5. Enter the disk name.

    6. Enter the size of the disk. Our lab environment can use a small size; one rule of thumb relates the quorum disk's size to the machine's physical memory, and in practice around 512 MB is sufficient for a quorum disk.

    7. We need to assign this virtual disk to an iSCSI target; the iSCSI initiators installed on Node1 and Node2 will connect to the virtual disk through that target. Because there are currently no iSCSI targets, we must create a new one.

    8. Enter a name.

    9. Add the iSCSI initiators.

    10. Enter the IP address of Node1's iSCSI NIC.

Note: For the initiator ID type you can also select the node's DNS name, MAC address, or IQN. The IQN of the iSCSI initiator must first be looked up on each node (after starting the iSCSI Initiator, it is shown as the initiator name on the Configuration tab).
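A quick way to look up a node's IQN is from PowerShell on the node itself:

    # Show this computer's iSCSI initiator IQN
    Get-InitiatorPort | Select-Object NodeAddress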

    11. Add the IP address of Node2's iSCSI NIC.

    12. When the Enable Authentication page appears, click Next directly.

Note: On the Enable Authentication page, you can check Enable CHAP and set a user name and password to require the iSCSI initiator side (the cluster nodes) to authenticate before connecting to the iSCSI virtual disk. The initiator side must then supply that user name and password to connect.

Conversely, the iSCSI initiator can also verify the iSCSI target: check Enable reverse CHAP and enter the user name and password specified on the iSCSI initiator side.

    13. When you are sure the settings are correct, click Create.

    14. When finished, the upper half of the window lists the iSCSI virtual disk just created, and the lower half lists the iSCSI target.

Note: If you want to change the settings of an iSCSI virtual disk or iSCSI target, right-click it and use the shortcut menu that pops up.

    15. Click New iSCSI Virtual Disk to create the second virtual disk.
    16. Name the disk.

    17. Set the disk size.

    18. Assign the disk to an iSCSI target by creating a new iSCSI target.

    19. Enter the name "files".

    20. Add the Node1 and Node2 initiators with the same IPs as before.

    21. Click Create after verifying that the configuration is correct.

    22. Done.

Note: The iSCSI target listens on TCP port 3260. When the iSCSI Target Server is installed through Server Manager, the system automatically opens this port in the firewall.
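The whole target-side setup can also be done in PowerShell. A minimal sketch, assuming the disk paths and target names from this walkthrough and placeholder node IPs (192.168.2.11 and 192.168.2.12) on the iSCSI network; on the original Server 2012 release the size parameter of New-IscsiVirtualDisk may be -Size rather than -SizeBytes:

    # Install the iSCSI Target Server role
    Install-WindowsFeature FS-iSCSITarget-Server

    # Create the two virtual disks backing the quorum disk and the file disk
    New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\Quorum.vhd -SizeBytes 512MB
    New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\Files.vhd  -SizeBytes 10GB

    # Create two targets, each allowing both nodes' iSCSI NICs to connect
    $initiators = @("IPAddress:192.168.2.11", "IPAddress:192.168.2.12")
    New-IscsiServerTarget -TargetName "Quorum" -InitiatorIds $initiators
    New-IscsiServerTarget -TargetName "Files"  -InitiatorIds $initiators

    # Map each virtual disk to its target
    Add-IscsiVirtualDiskTargetMapping -TargetName "Quorum" -Path C:\iSCSIVirtualDisks\Quorum.vhd
    Add-IscsiVirtualDiskTargetMapping -TargetName "Files"  -Path C:\iSCSIVirtualDisks\Files.vhd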

Connect the node servers to the iSCSI virtual disks

Set up the iSCSI initiator on both node servers to connect to the targets on the target server, and then access the iSCSI virtual disks through those targets.

    1. On Node1, open Administrative Tools > iSCSI Initiator and start the iSCSI service.

    2. Click Discover Portal.

    3. Enter the target server's IP address; the default port is 3260.

Note: If the connection fails, make sure that the target server's Windows Firewall is turned off or has an exception opened for iSCSI traffic.

    4. On the Targets tab, select the target to connect to and click Connect.

    5. Click OK directly.

Note: If the target server requires authentication, click Advanced, enable CHAP logon, and enter the user name and password.

    6. The interface after a successful connection to the iSCSI target.

    7. Repeat the above steps to connect to the other disk; continue once all connections are complete.

(At first only one disk was visible; on checking, we found that the Node1 IP configured on the quorum disk's iSCSI target was wrong, which is why it was not shown.)

    8. Open Disk Management and bring the two disks online.

    9. Initialize the disks.

    10. Create a new simple volume on each disk and assign the drive letters Q and F.

    11. You can add files to the two disks directly on Node1 to verify that both disks are accessible.
    12. Repeat the above steps on the other node server (Node2), but do not repeat steps 8 through 11, to prevent data loss.
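On Node1, the initiator connection and disk preparation look roughly like this in PowerShell. A sketch: 192.168.2.1 is a placeholder for the target server's iSCSI IP, and the disk numbers 1 and 2 are assumptions; check Get-Disk first.

    # Start the iSCSI initiator service and make it start automatically
    Set-Service msiscsi -StartupType Automatic
    Start-Service msiscsi

    # Discover the target portal and connect to every target it exposes
    New-IscsiTargetPortal -TargetPortalAddress 192.168.2.1
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

    # Bring the new disks online, initialize, partition, and format (on Node1 only)
    foreach ($n in 1, 2) {
        Set-Disk -Number $n -IsOffline $false
        Initialize-Disk -Number $n -PartitionStyle MBR
    }
    New-Partition -DiskNumber 1 -DriveLetter Q -UseMaximumSize | Format-Volume -FileSystem NTFS
    New-Partition -DiskNumber 2 -DriveLetter F -UseMaximumSize | Format-Volume -FileSystem NTFS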
Install the Failover Clustering feature on the node servers

Install the Failover Clustering feature on both node servers: Add Roles and Features > Features > Failover Clustering.
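Or from PowerShell (a sketch, assuming PowerShell remoting is enabled):

    # Install the feature on both nodes in one go
    Invoke-Command -ComputerName Node1, Node2 -ScriptBlock {
        Install-WindowsFeature Failover-Clustering -IncludeManagementTools
    }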

Verifying cluster settings

Before you create the cluster, it is strongly recommended to run cluster validation, which checks whether the node servers, the network, the storage media, and so on meet the requirements of a cluster.

    1. On Node1 or Node2, open Failover Cluster Manager and click Validate Configuration.

    2. Enter Node1 and Node2 as the nodes to validate.

    3. Select Run All Tests.

    4. Confirm that the items to validate are correct and click Next.

    5. If validation passes in full, you can click Finish and start creating the cluster.

Note:

    • If the validation results contain only warnings, they may not prevent the creation of the cluster. For example, if the cluster nodes communicate through only one NIC (with no additional NICs or teaming), the validation wizard lists warning messages, but this does not block cluster creation.
    • If the validation results show failed tests, troubleshoot the issues and validate again; otherwise the cluster you create may not function correctly.
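The same validation can be run from PowerShell (a sketch):

    # Run all cluster validation tests against both nodes; a report is written to the temp folder
    Test-Cluster -Node Node1, Node2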
Create a cluster

We will use the Create Cluster wizard to create the cluster.

    1. Enter the cluster's name and IP address (the IP belongs to the public network). You will manage the cluster through this IP address, and the cluster name and IP will be registered with the DNS server.

    2. Click Next after verifying that the configuration is correct.

    3. When the Summary screen appears, click Finish.
    4. On the finished interface you can see that the quorum configuration was automatically set to Node and Disk Majority, because this cluster has an even number of nodes (two). "Cluster Disk 1" in parentheses indicates that the quorum disk is the first disk in the cluster.

Note: If you want a different disk to play the role of quorum disk, click More Actions on the right side of the cluster, then Configure Cluster Quorum Settings.
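From PowerShell, cluster creation is a single command. A sketch; the cluster name MyCluster and the static IP 192.168.1.100 are placeholders:

    # Create the two-node cluster with a static management IP on the public network
    New-Cluster -Name MyCluster -Node Node1, Node2 -StaticAddress 192.168.1.100

    # Check the quorum configuration that was chosen automatically
    Get-ClusterQuorum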

Configuring the two-node file server failover cluster

Configure the purpose of each cluster network

We will adjust the purposes of the Public, Private, and iSCSI networks within the cluster.

    • Public network: Clients communicate with the cluster nodes through this network, and communication between cluster nodes is also allowed on it (as a standby for the private network).
    • Private network: This network is intended for communication between cluster nodes.
    • iSCSI network: A private network over which the cluster nodes communicate with the target server using the iSCSI protocol; it must not be used for communication between cluster nodes and, of course, not for communication with clients.
    1. Open Failover Cluster Manager and select Networks.

Note: If you do not see the cluster you want to manage in the window, click Connect to Cluster and then select the cluster you want to manage.

    2. On the network that represents Public, click Properties, select Allow cluster network communication on this network, and also tick the Allow clients to connect through this network check box.

    3. On the network that represents Private, select Allow cluster network communication on this network, but leave the client check box cleared.

    4. On the network that represents iSCSI, select Do not allow cluster network communication on this network. (A PowerShell sketch of these settings follows.)
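These choices map to the Role property of the cluster network objects. A sketch; 3 = cluster and client, 1 = cluster only, 0 = none, and the network names assume the cluster networks were renamed to match:

    # Public: cluster communication plus client access
    (Get-ClusterNetwork "Public").Role  = 3
    # Private: cluster communication only
    (Get-ClusterNetwork "Private").Role = 1
    # iSCSI: no cluster communication
    (Get-ClusterNetwork "iSCSI").Role   = 0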

Creating and testing the two-node file server failover cluster

Open the required traffic in Windows Firewall

Before you create the file server failover cluster, you need to open Remote Volume Management traffic in Windows Firewall on both node servers, or you will not be able to create shared folders within the cluster. Since the two nodes communicate through the public network, first find out the public network's network location (profile), and then open Remote Volume Management for that location only.

    1. On Node1, open Network and Sharing Center and confirm that the public network's location is Domain network.

    2. Open Windows Firewall and select Allow an app or feature through Windows Firewall.

    3. Tick Remote Volume Management and tick the Domain column.

    4. Repeat the above steps on Node2.
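Equivalently, from PowerShell on each node (a sketch):

    # Enable the built-in "Remote Volume Management" rule group
    Enable-NetFirewallRule -DisplayGroup "Remote Volume Management"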
Create a file server failover cluster

Both node servers need the File Server role service installed first.

    1. Add Roles > File and iSCSI Services > File Server.

    2. In Failover Cluster Manager, click Configure Role.

    3. Select File Server.

    4. Click Next.

    5. Name the file server and set its IP address (just as when you created the cluster).

    6. Tick the disk you want to assign to the file server.

    7. Click Next after confirming that everything is correct.

    8. Click Finish.

    9. On the finished interface you can see that the current owner of the server is Node1, so when clients connect, Node1 provides the service.

    10. Click Add File Share on the right side of the file server MyClusterFS.

Note: If Windows Firewall on the two node servers is neither turned off nor opened for Remote Volume Management traffic, you will not be able to add shared folders at this point.

    11. Click Next.

    12. Click Next to take the default; the wizard will place the shared folder under F:\Shares. You can also specify your own folder path.

    13. Set the share name, for example Database; the system maps F:\Shares\Database to the shared folder. Clients can access this folder through \\MyClusterFS\Database.

Note: Use the Failover Cluster Manager console to share folders on cluster disks; do not use other methods such as File Explorer.

    14. Click Next directly.

    15. Click Next directly to take the default permission settings (Full Control for the system administrator, Read/Write for ordinary users, and so on). If you want to change the permissions, click the Customize Permissions button.

    16. When the Confirm Selections page appears, click Create, and click Close when finished.
    17. The interface after completion.

    18. On a client computer, test whether the files in the file server's Database share can be accessed; here we use DC1 as the client to access the share, create folders, and add files.

    19. Try moving the file server role to the other node.

Note: If a node is paused, the roles it currently owns continue to provide service, but you cannot move roles owned by other nodes to the paused node.

    20. You can further verify the cluster's failover capability by shutting down the current owner node: the other node automatically detects that the owner is offline and takes over serving the clients, so the files you just created remain accessible from the client.

Note: If you want the cluster to stop serving clients, right-click the cluster > More Actions > Shut Down Cluster; later, choose Start Cluster Service to restart it.
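The role creation, share, and failover test can also be scripted. A sketch; the role name MyClusterFS, the IP 192.168.1.101, the disk name "Cluster Disk 2", and the share path follow this walkthrough but are placeholders:

    # Create the clustered file server role on the file disk
    Add-ClusterFileServerRole -Name MyClusterFS -Storage "Cluster Disk 2" -StaticAddress 192.168.1.101

    # Create the shared folder on the clustered disk, scoped to the file server name
    New-Item -Path F:\Shares\Database -ItemType Directory
    New-SmbShare -Name Database -Path F:\Shares\Database -ScopeName MyClusterFS

    # Test failover by moving the role to the other node
    Move-ClusterGroup -Name MyClusterFS -Node Node2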

Adding nodes, removing nodes, and deleting the cluster

Adding a node
    1. Stop the file server first: click Stop Role on the right side of the file server MyClusterFS.

    2. Complete all setup on the new node server: install the NICs, configure the firewall, and so on.
    3. From one of the existing nodes, use Failover Cluster Manager's Validate Configuration to verify that the new node meets the requirements.
    4. When validation is done, click Add Node to add it.

    5. Once added, the interface will point out that the number of nodes has become odd (3 nodes), so you should change the quorum configuration (for example, to Node Majority): Configure Cluster Quorum Settings > Use default settings, to let the system choose the quorum configuration automatically.

    6. Click Start Role on the right side of the file server MyClusterFS to bring the server back online.

Removing a node

If you want to remove a node server from the cluster, first check whether the node is the owner of the file server; if it is, move the role to a different node.

Then remove the node from the cluster by evicting it.

Deleting a cluster

Follow these steps to remove the cluster.

    1. Before you can delete a cluster, you must first remove the roles within the cluster.

    2. Then destroy the cluster.

    3. After the cluster is removed, it is recommended to confirm in the domain's Computers container that the computer object with the same name as the cluster has been disabled or no longer exists; otherwise a warning will appear the next time you build a cluster with the same name.
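The corresponding PowerShell (a sketch; Node3 is a hypothetical new node):

    # Add a third node to the cluster
    Add-ClusterNode -Name Node3

    # Evict a node from the cluster
    Remove-ClusterNode -Name Node3

    # Destroy the cluster and clean up its computer objects in Active Directory
    Remove-Cluster -Force -CleanupAD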
