Chapter 3: Preparing a Site for VCS Implementation

Preparing for VCS Implementation

Objectives:
- Plan the implementation
- Hardware requirements and recommendations
- Software requirements and recommendations
- Prepare cluster information

Implementation requirements:
- Network, system, and application administrator involvement for configuration and testing
- Cluster operators and administrators who can be called on for future cluster operations
- Physical access, as required by the site security policies
- Access to resources such as Veritas, operating system, and application vendor telephone numbers and Web sites

Implementation plan:
It is recommended that you prepare a detailed task list for the VCS installation and configuration, using a design worksheet such as the one below.

Cluster definition          Value
Cluster name                vcs
Required attributes:
  UserNames                 admin = password
  ClusterAddress            192.168.3.91
  Administrators            admin

System definition           Value
System                      S1
System                      S2
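
For reference, these worksheet values correspond to the cluster definition that ends up in the VCS main.cf configuration file after installation. The following is only a minimal sketch based on the worksheet above; in a real main.cf the admin password is stored in encrypted form, so the literal value shown here is a placeholder.

cluster vcs (
    UserNames = { admin = password }
    ClusterAddress = "192.168.3.91"
    Administrators = { admin }
    )

system S1 (
    )

system S2 (
    )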

Hardware requirements and recommendations
The hardware compatibility list (HCL) is available on the Veritas support Web site.
Network:
VCS requires at least two heartbeat channels for the cluster interconnect. At least one of these must be an Ethernet network connection; a configuration with a single network plus a disk heartbeat is possible, but the best practice is to configure two or more network connections.
Loss of the cluster interconnect results in downtime; on platforms without I/O fencing it can lead to a split-brain condition.
For a highly available configuration, each system in the cluster should have at least two independent physical Ethernet connections for the cluster interconnect:
1. Two-system clusters can use crossover cables.
2. Clusters with three or more systems require hubs or switches.
3. You can use layer-2 switches, but this is not required.
Note: For a cluster that uses Veritas Cluster File System or Oracle Real Application Clusters (RAC), Veritas recommends multiple Gigabit Ethernet interconnects and Gigabit switches.
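
Each private link can be checked end to end before VCS is installed, for example by temporarily plumbing the interconnect interfaces and pinging across each link in turn. This is a minimal sketch for Solaris; the interface name (qfe0) and the 192.168.10.x test addresses are placeholders chosen for illustration.

# On S1: temporarily bring up the first interconnect interface
ifconfig qfe0 plumb
ifconfig qfe0 192.168.10.1 netmask 255.255.255.0 up
# On S2: repeat with address 192.168.10.2, then verify the link from S1
ping 192.168.10.2
# Unplumb the interface afterwards so that it is free for LLT
ifconfig qfe0 unplumb

Repeat the same check on the second interconnect link.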
Shared storage:
VCS is designed primarily as a shared-data high availability product, but you can also configure a cluster with non-shared storage.
Consider these requirements and recommendations for shared storage clusters:
1. Use at least one HBA for non-shared disks, such as the system (boot) disk. To eliminate single points of failure, it is recommended to use two HBAs to connect the internal disks and to mirror the system disk.
2. Use at least one HBA for shared disks:
- To eliminate single points of failure, it is recommended to use two HBAs to connect the shared disks, together with dynamic multipathing software such as Veritas Volume Manager DMP (see the sketch below).
- Multiple single-port storage adapters or SCSI controllers are preferable to a single multi-port adapter for avoiding single points of failure.
3. Shared storage on a SAN must reside in the same zone as all of the cluster nodes.
4. Data residing on shared storage should be protected by mirroring or hardware RAID.
5. Use redundant storage and access paths.
6. SCSI controller configuration requirements:
If a shared disk array is used, the SCSI controllers on each system must be configured so that no devices conflict on the shared SCSI bus.
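
Once Veritas Volume Manager is installed, you can confirm that DMP sees both HBAs and multiple paths to each shared LUN. This is only a sketch of such a check; the DMP device name used here is an example and will differ per site.

# List the controllers (HBAs) known to DMP; both should appear
vxdmpadm listctlr all
# Show the paths behind one DMP device (example device name)
vxdmpadm getsubpaths dmpnodename=c1t0d0s2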
Cabling SCSI devices:
1. Shut down all systems in the cluster.
2. If the cluster has two systems, cable them to the shared devices in a single SCSI chain.
3. If the cluster has more than two systems, disable SCSI termination on the systems that are not at the ends of the SCSI chain.
Solaris:
1. Check the SCSI initiator ID on each system:
# eeprom | grep scsi-initiator-id
2. If necessary, connect the shared disks and, from the ok prompt, check the SCSI IDs of the disk devices:
ok probe-scsi-all
3. Select a unique SCSI ID for each system on the shared SCSI bus.
Note: SCSI devices see and respond to requests in priority order from ID 7 down to 0, then 15 down to 8. Therefore, use high-priority IDs for the systems and lower-priority IDs for devices such as disks; for example, use 7, 6, and 5 for the systems and reserve the remaining IDs for the devices.
(a) If the scsi-initiator-id values are already unique, no change is needed.
(b) If you need to change the SCSI ID of a system, bring it to the ok prompt and set it:
ok setenv scsi-initiator-id <id>
For example:
ok setenv scsi-initiator-id 5
Note:
- You can also modify this parameter without bringing the system down by using the command eeprom scsi-initiator-id=5, but the change does not take effect until you reboot.
- Because this command changes the SCSI ID of all controllers on the system, make sure the new ID does not conflict with devices on non-shared controllers.
4. Perform a reconfiguration boot:
ok boot -r
Note: While this is a quick and effective method, it changes the SCSI ID of all controllers on the system. To set individual SCSI IDs for each controller, see the Veritas Cluster Server Installation Guide.

Hardware verification
Network:
Test the network connections using the network addresses allocated ahead of time, and verify communication with telnet or ping.
Also, using the method appropriate to your operating system, ensure that the network interface speed is set the same on both ends of each link and that auto-negotiation is disabled.
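
A minimal sketch of such a check on Solaris, assuming an hme public interface (the interface name and host names are placeholders, and the ndd parameters are driver-specific):

# From S1, verify that S2 answers on the public network
ping S2
# Check the negotiated speed (1 = 100 Mbit/s, 0 = 10 Mbit/s)
ndd -get /dev/hme link_speed
# Check that auto-negotiation is disabled (0 = disabled)
ndd -get /dev/hme adv_autoneg_cap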
Storage:
To fail over an application from one system to another, both systems must have access to the data storage.
When checking the hardware, consider the following:
1. Zone configuration on the SAN Fibre Channel switches
2. Whether the disk array is active-active or active-passive
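
A simple way to confirm that both systems see the same shared LUNs is to compare the disk listings on each node. This is only a sketch; the exact device names will differ per site, and the vxdisk command applies only once Veritas Volume Manager is installed.

# Run on each cluster system and compare the shared devices listed
echo | format
# Equivalent view through Veritas Volume Manager, if it is already installed
vxdisk -o alldgs list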
Software requirements and recommendations
For the definitive software requirements, see the Veritas Cluster Server Release Notes for your operating system, available from the Veritas Web site. Before installation, set the PATH variable to include the /sbin, /usr/sbin, and /opt/VRTSvcs/bin directories for your platform, and manually add the corresponding directories to the MANPATH variable.
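
For example, in a Bourne or Korn shell profile (the manual-page directory shown, /opt/VRTS/man, is the usual location but may differ by release):

# Add the VCS command and manual-page directories to the environment
PATH=$PATH:/sbin:/usr/sbin:/opt/VRTSvcs/bin
MANPATH=$MANPATH:/opt/VRTS/man
export PATH MANPATH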
It is recommended to configure and manage the cluster as follows:
1. Operating system: Although it is not strictly required, run the same operating system version and patch level on all cluster systems.
2. Software configuration: Keep the software configuration identical on all systems, to help ensure that your application services can run and be started on every cluster system that is a failover target for the service.
3. Volume management software: Use storage management software such as Veritas Volume Manager and Veritas File System to enhance high availability; data can be mirrored for redundancy, and configuration or physical disk changes can be made without interrupting services.
4. On Solaris systems, set the following in the /etc/default/kbd file to disable the keyboard abort sequence:
KEYBOARD_ABORT=disable
A reboot is required for the change to take effect.
5. Enable ssh or rsh communication between the systems (a minimal ssh example is sketched below).
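
A minimal sketch of enabling password-less ssh from the system where the installer will be run (S1) to the other cluster system (S2); the key type and file locations may vary by ssh version.

# On S1: generate a key pair (accept the defaults, empty passphrase)
ssh-keygen -t rsa
# Append the public key to root's authorized_keys file on S2
cat ~/.ssh/id_rsa.pub | ssh root@S2 'cat >> ~/.ssh/authorized_keys'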
Software verification
1. Confirm that the operating system patches required by VCS are installed on each system.
2. Obtain the VCS license keys.
3. Verify the operating system and network configuration files.
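
A sketch of the kind of checks involved on Solaris; the patch ID is a placeholder, the actual required patches are listed in the Release Notes, and vxlicrep is available only after the Veritas licensing package has been installed.

# Check whether a required Solaris patch is installed (placeholder patch ID)
showrev -p | grep 117000
# Report the Veritas license keys installed on the system
vxlicrep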
Prepare cluster information
1. The system names in the cluster
2. A cluster name (beginning with a letter, a-z or A-Z)
3. A unique cluster ID number (0-255)
4. The device names of the network interfaces used for the cluster interconnect
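
This information ends up in the LLT configuration files that the installer generates on each system. A minimal sketch of what they might look like for this two-system cluster, assuming a cluster ID of 7 and qfe2/qfe3 as the interconnect interfaces (both are placeholder values):

# /etc/llthosts: node IDs and system names (identical on both systems)
0 S1
1 S2

# /etc/llttab on S1: node name, cluster ID, and the two private links
set-node S1
set-cluster 7
link qfe2 /dev/qfe:2 - ether - -
link qfe3 /dev/qfe:3 - ether - -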
You can also configure additional cluster services during installation:
1. VCS user accounts: add user accounts and passwords, or change the password of the default admin account.
2. Web GUI: specify a public network interface and a virtual IP address.
3. Notification: specify SMTP and SNMP information so that VCS configures the cluster notification service.

Reposted from: http://hi.baidu.com/nitar/blog/item/33b18ef2a0366ac70b46e0ad.html
