RAC Knowledge Update: Managing Archive Logs and Modifying the VIP under RAC (from wenpingshu)

Source: Internet
Author: User

Whether the database is a single instance or a cluster, redo is written strictly in timestamp order. This means that when a database is restored, the log records must be applied in strict chronological order to recover the database files. Therefore, restoring a cluster database requires a complete log sequence from every instance.
When the database runs in archive mode, each instance manages its own redo logs locally, so the archiving operation is also performed locally. Because recovery requires the global set of log files, the administrator's goal is to collect the archive logs of all instances under a common path.
As mentioned above, AIX offers four possibilities for storing files on shared disks: GPFS (HACMP), NFS, ASM, or HACMP raw devices. Raw devices cannot be used as an archiving destination, because archiving requires a file system. That leaves three ways to place the archive logs in a shared area:
Gpfs: place the archive files on GPFS.
ASM: place the archive files in an ASM disk group.
NFS: place the archive files on an NFS mount.

With GPFS for archiving, no extra archiving process needs to be set up: simply set the same log_archive_dest_n parameter on every instance, pointing to a directory on GPFS. All instances in the cluster then write their archive logs into this directory, which therefore holds the complete set of archive files. Optionally, each node can also specify an additional archiving destination, such as local storage, to reduce the risk of archiving failures.
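As a sketch of the GPFS setup described above, assuming the GPFS mount point is /gpfs/arch (a hypothetical path, not one given in the text), the parameter could be set like this:

```sql
-- Sketch only: /gpfs/arch is an assumed GPFS mount point.
-- SID='*' applies the setting to all instances in the cluster.
ALTER SYSTEM SET log_archive_dest_1='LOCATION=/gpfs/arch' SCOPE=BOTH SID='*';
```

Because every instance writes into the same shared directory, the directory ends up holding the full archive log sequence needed for recovery.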
Using an ASM disk group as archive storage (for example, using an ASM disk group as the flash recovery area) is also a method recommended by Oracle. Each instance writes its archive logs to the ASM disk group pointed to by its db_recovery_file_dest parameter; when all instances in the cluster point to the same disk group, it holds the complete set of archived log data.
If NFS is used, each node in the cluster mounts the same NFS export. Here too, multiple archiving destinations can be configured to improve safety: for example, the first destination is a local directory and the second is the NFS mount. This is the archive storage layout currently chosen by many users. When building the archive this way, note that the destination pointing to NFS should be set as OPTIONAL rather than MANDATORY. If the NFS destination is MANDATORY and a network fault makes the NFS mount point unavailable, the archiving operation fails, which causes the database to hang.
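The two-destination layout described above could be sketched as follows; /arch (local) and /nfs_arch (the NFS mount) are assumed paths:

```sql
-- Sketch only: paths are assumptions. The local destination is MANDATORY;
-- the NFS destination is OPTIONAL so an NFS outage cannot hang the database.
ALTER SYSTEM SET log_archive_dest_1='LOCATION=/arch MANDATORY'    SCOPE=BOTH SID='*';
ALTER SYSTEM SET log_archive_dest_2='LOCATION=/nfs_arch OPTIONAL' SCOPE=BOTH SID='*';
```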

 


Under normal circumstances, the node applications in a cluster do not change, and no further configuration is required after installation. However, the srvctl tool also provides functions for deleting, adding, and modifying node applications. The following uses a virtual address change as an example of common node-application management.
The virtual IP service (VIP) is part of the node applications, and its address can still be modified after the system goes live. Below we change the virtual address of node node_a from 192.168.2.93 to 192.168.2.95.

Step 1: Log on as the root user and modify the /etc/hosts file on each node, changing the virtual address binding as follows:
192.168.2.93 node_a-vip ==> 192.168.2.95 node_a-vip

Step 2: Stop the node applications. Go to the bin directory under the CRS installation directory and run:
# ./srvctl stop nodeapps -n node_a

Step 3: Run the following command to remove the node applications:
# ./srvctl remove nodeapps -n node_a

Step 4: Add new node applications for node node_a. The following command specifies the ORACLE_HOME location, the VIP address, and the netmask:
# ./srvctl add nodeapps -n node_a -o /DB/Oracle/product/10.2.0/db_1 -A 192.168.2.95/255.255.255.0

Step 5: Restart the node applications after they have been added:
# ./srvctl start nodeapps -n node_a

Step 6: Check the status of the node applications. After an installation or after adding a node, verify that the configuration is correct and that the node applications are running:
# ./srvctl status nodeapps -n node_a

Sample output (here from a different cluster, node cctt1):
$ srvctl status nodeapps -n cctt1
VIP is running on node: cctt1
GSD is running on node: cctt1
Listener is running on node: cctt1
ONS daemon is running on node: cctt1
