Managing Oracle Cluster Registry (OCR)


Oracle Clusterware contains two important components: the OCR (including its local counterpart, the OLR) and the voting disks.
--The OCR manages configuration information for Oracle Clusterware and Oracle RAC databases.
--The OLR resides locally on each node and manages the Clusterware configuration information for that node.
--The voting disks manage cluster membership information. Each voting disk must be accessible to all nodes in the cluster.

In 12c, the OCR and voting disks must be stored in ASM (12c no longer supports block devices or raw devices, and 12.2 no longer supports other shared file systems).

(The 11g documentation says: the OCR and voting disks must be placed in ASM or on a certified cluster file system.)

In 11.2, Oracle Universal Installer (OUI) no longer supports raw devices or block devices. However, if you upgraded from an earlier release, block devices and raw devices continue to be supported. Oracle, of course, recommends using ASM.

To increase availability, Oracle recommends configuring multiple voting disk files. If you use an ASM disk group, ASM maintains the appropriate number of voting files according to the disk group's redundancy (normal or high redundancy). If you use another shared file system, you must configure multiple voting files manually.

You can dynamically add and replace voting disks without having to stop the cluster.
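
As a sketch (the disk group name +CRSDG below is only a placeholder), the current voting disks can be listed and moved to a different ASM disk group while the cluster is running:

$ crsctl query css votedisk
# crsctl replace votedisk +CRSDG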


The tools for managing the OCR and OLR are ocrconfig, ocrdump, and ocrcheck.
The OLR is similar to the OCR, but it resides locally on each cluster node and contains configuration information for that particular node. The OLR holds information that Clusterware manages locally, such as dependencies between different resources, which the Oracle High Availability Services daemon (OHASD) needs. The default OLR location is GRID_HOME/cdata/host_name.olr.
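
A quick way to see where the OCR and OLR currently live (a sketch; run as root or the Grid software owner, and the output depends on your environment):

$ ocrcheck -config
$ ocrcheck -config -local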

1. Migrating OCR to ASM

If you upgraded from a release earlier than 11.2 to 11.2, the ASM disk group compatibility (compatible.asm) must be set to 11.2 or higher;
if you upgrade to 12c and the disk group was created before 12c, compatible.asm must be set to 11.2.0.2 or higher.
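
A sketch of checking and, if necessary, raising the compatibility attribute from the ASM instance (the disk group name DATA is only an example):

sql> select name, compatibility from v$asm_diskgroup;
sql> alter diskgroup DATA set attribute 'compatible.asm' = '11.2.0.2';
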
(1) View the currently running version

$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]

(2) Start ASM on all nodes
(3) Adding OCR to ASM Disk Group

# ocrconfig -add +new_disk_group

OCR inherits the redundancy of the disk group!

(4) Remove the previous storage configuration

# ocrconfig -delete old_storage_location

2. Migrating from ASM to other shared storage types

(1) View the currently running version

$ crsctl query crs activeversion

(2) Add a new file as the location for OCR storage

# ocrconfig -add file_location

(3) Remove the original ASM configuration

# ocrconfig -delete +asm_disk_group

 

3. Add an OCR location

# ocrconfig -add +asm_disk_group | file_name

 

4. Remove an OCR location

# ocrconfig -delete +asm_disk_group | file_name

  

5. Replace an OCR location

(1) First check

$ ocrcheck
$ crsctl check crs

(2) Replace the OCR location

# ocrconfig -replace current_ocr_location -replacement new_ocr_location
If there is only one OCR location, it cannot be replaced directly; add the new location first and then delete the old one:
# ocrconfig -add new_ocr_location
# ocrconfig -delete current_ocr_location

  

6. Repairing the OCR on a local node

If the OCR configuration changed while a node was shut down, the OCR configuration on that node must be repaired before Clusterware is started on it; the repair is run on the stopped node itself.
Repairing the OCR includes adding, removing, and replacing OCR locations. For example:

# ocrconfig -repair -add /dev/sde1
# ocrconfig -repair -replace current_ocr_location -replacement target_ocr_location

7. Overriding the OCR protection mechanism

The OCR has a mechanism to prevent data loss. If multiple mirrored OCR locations are configured and Clusterware cannot access one of them, or cannot verify that the accessible OCR locations contain the latest configuration information, Clusterware blocks modifications to the accessible OCR locations.
In addition, this mechanism can prevent Clusterware from starting on that node, and warning messages appear in the Clusterware and database alert logs. If the problem occurs on only one node, you can start the cluster database from another node.
If none of the nodes in the cluster can start, you can choose to repair or restore the OCR. If neither repair nor restore is possible, you can override the protection mechanism; overriding affects every OCR location and may cause some configuration information to be lost.
Repair with ocrconfig -repair. To override the protection mechanism, use ocrconfig -overwrite.
Before overriding, you should always try to repair the OCR first.
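
A sketch of the override itself (a last resort, run as root, and only after verifying the surviving copy with ocrcheck):

# ocrconfig -overwrite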

8. Backup OCR
(1) Automatic backup
Clusterware automatically backs up the OCR every four hours and retains the last three copies. The backups are performed by the CRSD process, which also keeps a daily and a weekly backup. The backup frequency and retention period cannot be adjusted.
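
For example, the automatically created backups alone can be listed with:

$ ocrconfig -showbackup auto
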
(2) Manual backup
Run ocrconfig -manualbackup to perform a manual backup. The OLR supports only manual backups.
(3) View Backup

$ ocrconfig -showbackup
db2     2017/09/03 14:32:04     /u01/app/11.2.0/grid/cdata/oradb-cluster/backup00.ocr
db2     2017/09/03 10:32:03     /u01/app/11.2.0/grid/cdata/oradb-cluster/backup01.ocr
db2     2017/09/03 06:32:03     /u01/app/11.2.0/grid/cdata/oradb-cluster/backup02.ocr
db2     2017/09/02 02:32:00     /u01/app/11.2.0/grid/cdata/oradb-cluster/day.ocr
db2     2017/08/26 06:32:31     /u01/app/11.2.0/grid/cdata/oradb-cluster/week.ocr

(4) Modify the backup path

# ocrconfig -backuploc file_name    (specify the backup location)
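
In 12c the backup location can also be an ASM disk group; a sketch, where +CRSDG is only a placeholder name:

# ocrconfig -backuploc +CRSDG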

  

9. Restore OCR
If the OCR is not stored in ASM:
(1) List the cluster nodes

# olsnodes

(2) Stop Clusterware

# crsctl stop crs
If it cannot be shut down cleanly, force the shutdown:
# crsctl stop crs -f

(3) Restore the OCR (if it is stored on a cluster file system or a network file system)

# ocrconfig -restore file_name

(4) Start Clusterware

# crsctl start crs

  

If the OCR is stored in ASM, complete the following steps:
(1) List the cluster nodes

# olsnodes

(2) Stop Clusterware

# crsctl stop crs
If it cannot be shut down cleanly, force the shutdown:
# crsctl stop crs -f

(3) Start Clusterware
On one node only, start Clusterware in exclusive mode:

# crsctl start crs -excl -nocrs
Ignore any error messages generated during startup.

(4) Check whether the CRSD process is running

$ crsctl stat res ora.crsd -init
If it is running, stop the CRSD process:
# crsctl stop resource ora.crsd -init

(5) Mount the ASM disk group that stores the OCR on the local node. If it cannot be mounted locally, first drop the disk group in the ASM instance:

sql> drop diskgroup disk_group_name force including contents;

(6) Restore OCR

# ocrconfig -restore file_name

(7) Re-check OCR

# ocrcheck

(8) Stop Clusterware

# crsctl stop crs -f

(9) Repair the OCR on the remaining nodes

Run the ocrconfig -repair -replace command on each of the remaining nodes.
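
A sketch of what that looks like on each remaining node (both location values are placeholders):

# ocrconfig -repair -replace current_ocr_location -replacement new_ocr_location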

(10) Start Clusterware

# crsctl start crs

(11) Verification

$ cluvfy comp ocr -n all -verbose
Verifying OCR Integrity ... PASSED
Verification of OCR Integrity was successful.
CVU operation performed:      OCR integrity
Date:                         Sep 3, 3:41:01 PM
CVU home:                     /u01/app/12.2.0/grid/
User:                         grid

  

OCR Problem Diagnosis

The diagnostic tools are ocrdump and ocrcheck.
In addition to the automatically backed-up OCR files, the OCR contents can be exported and imported; however, Clusterware must be shut down to obtain a consistent result.
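
A sketch of taking such an export as root while Clusterware is down (the file name is only a placeholder):

# ocrconfig -export /tmp/ocr_export.dmp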

The file format used by ocrconfig -restore is compatible with the format produced by ocrconfig -manualbackup, and the format used by ocrconfig -import is compatible with the format produced by ocrconfig -export. The two formats are not compatible with each other.

Import OCR (Linux platform)

(1) List all cluster nodes

$ olsnodes

(2) Stop Clusterware

# crsctl stop crs
If it does not shut down properly, force the shutdown:
# crsctl stop crs -f

(3) On one of the nodes, start Clusterware in exclusive mode

# crsctl start crs -excl
Ignore any error messages generated during startup. Check whether the CRSD process is running; if it is, stop it:
# crsctl stop resource ora.crsd -init

(4) Import OCR

# ocrconfig -import file_name
If the OCR is stored on a cluster file system or a network file system, skip directly to step (7).

(5) Verify the integrity of the OCR

# ocrcheck

(6) Stop Clusterware

# crsctl stop crs -f

(7) Start Clusterware again

# crsctl start crs

(8) Verify OCR integrity on all nodes in the cluster

$ cluvfy comp ocr -n all -verbose

  

Oracle Local Registry (OLR)
You can manage the OLR with ocrcheck, ocrdump, and ocrconfig by adding the -local parameter.

1. Check the status of the OLR

# ocrcheck -local
Status of Oracle Local Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1060
         Available space (kbytes) :     408508
         ID                       : 1941521711
         Device/File Name         : /u01/app/12.2.0/grid/cdata/db12c1.olr
                                    Device/File integrity check succeeded
         Local registry integrity check succeeded
         Logical corruption check succeeded

2. Dump the OLR contents

# ocrdump -local

3. Export and import the OLR contents

# ocrconfig -local -export file_name
# ocrconfig -local -import file_name

4. Manually back up the OLR

# ocrconfig -local -manualbackup

5. Restore the OLR

# crsctl stop crs
# ocrconfig -local -restore file_name
# ocrcheck -local
# crsctl start crs
$ cluvfy comp olr

  
