Oracle 12c New Features, Part 3 -- ASM & Grid
1. Enhancements to Automatic Storage Management (ASM)
1.1. Flex ASM
In a typical Grid Infrastructure installation, each node runs its own ASM instance, which acts as the storage container for the databases on that node. This configuration carries the risk of a single point of failure: if the ASM instance on a node fails, all databases and instances running on that node are affected. Oracle 12c introduces the Flex ASM feature to remove this single point of failure. Flex ASM is a fundamentally different concept and architecture: only a small number of ASM instances need to run on a subset of servers in the cluster, and when the ASM instance on a node fails, Oracle Clusterware automatically starts a replacement ASM instance on a different node to maintain availability. This configuration also provides load balancing across the ASM instances running in the cluster. Another benefit of Flex ASM is that it can be configured on separate nodes of its own.
When you select Flex Cluster as the cluster installation option, the Flex ASM configuration is selected automatically, because a Flex Cluster requires Flex ASM. If you choose a standard cluster instead, you can opt not to use Flex ASM. When you decide to use Flex ASM, you must confirm that the required network is available. You can enable Flex ASM when installing the cluster, or enable it later in an existing standard cluster environment.
The following commands show the current ASM mode:
$ asmcmd showclustermode
$ srvctl config asm
Alternatively, connect to the ASM instance and query the INSTANCE_TYPE parameter. If the output value is ASMPROXY, Flex ASM is configured.
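For example, from SQL*Plus connected to the ASM instance (a minimal check; the parameter name is the one mentioned above):
SQL> SHOW PARAMETER INSTANCE_TYPE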
1.2. Increased ASM storage limits
The hard limits on the number of ASM disk groups and on ASM disk size have been raised significantly. In 12cR1, the number of disk groups supported by ASM increases from 63 in 11gR2 to 511, and the maximum size of each ASM disk increases from 20 PB to 32 PB.
1.3. Tuning ASM rebalance operations
The new "explain work for" statement in 12c can measure the workload of an ASM rebalancing operation and input the result to the V $ ASM_ESTIMATE dynamic view. With this dynamic view, you can adjust the "power limit" clause to improve the balance operation. For example, if you want to measure the workload required to add a new ASM disk, you can use the following statement before manual and rebalancing:
SQL> EXPLAIN WORK FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;
SQL> SELECT est_work FROM V$ASM_ESTIMATE;
SQL> EXPLAIN WORK SET STATEMENT_ID = 'add_disk' FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;
SQL> SELECT est_work FROM V$ASM_ESTIMATE WHERE STATEMENT_ID = 'add_disk';
You can adjust the POWER limit based on the output of this dynamic view to improve the performance of rebalance operations.
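For example, once the estimate looks acceptable, you might run the rebalance with an explicit power value (the disk group name and the power value 8 here are illustrative):
SQL> ALTER DISKGROUP DG_DATA REBALANCE POWER 8;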
1.4. ASM Disk Scrubbing
The new ASM disk scrubbing operation checks normal- and high-redundancy ASM disk groups for logical data corruption on all ASM disks in the group and, when corruption is detected, automatically repairs it using the ASM mirror copies. Disk scrubbing can be performed on a disk group, a specific disk, or a file, with minimal impact. The following examples illustrate disk scrubbing scenarios:
SQL> ALTER DISKGROUP dg_data SCRUB POWER LOW | HIGH | AUTO | MAX;
SQL> ALTER DISKGROUP dg_data SCRUB FILE '+DG_DATA/MYDB/DATAFILE/filename.xxxx.xx' REPAIR POWER AUTO;
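While a scrub is running, its progress is reported in the V$ASM_OPERATION dynamic view; a minimal illustrative query:
SQL> SELECT group_number, operation, state, power FROM V$ASM_OPERATION;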
1.5. Active Session History (ASH) for ASM
The V$ACTIVE_SESSION_HISTORY dynamic view now also provides active session sampling for ASM instances. Note, however, that using it requires a Diagnostics Pack license.
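A minimal illustrative query against an ASM instance (the column choice and the one-hour window are just examples):
SQL> SELECT sample_time, session_id, event FROM V$ACTIVE_SESSION_HISTORY WHERE sample_time > SYSDATE - 1/24;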
2. Grid Infrastructure Architecture Enhancements
2.1. Flex Cluster
During cluster installation, Oracle 12c provides two types of cluster configuration: the traditional standard cluster and the Flex cluster. In a traditional standard cluster, all nodes are tightly integrated with each other: they communicate through the private network and access the storage directly. A Flex cluster, on the other hand, introduces two types of nodes, arranged in a Hub node and Leaf node architecture. Hub nodes behave like the nodes of a traditional standard cluster: they are connected through the private network and have direct read/write access to the shared storage. Leaf nodes are different: they do not have to access the underlying storage directly, but instead access storage and data through the Hub nodes.
You can configure up to 64 Hub nodes and many more Leaf nodes. A Flex cluster can be configured with Hub nodes and no Leaf nodes, but not with Leaf nodes alone, and multiple Leaf nodes can be attached to a single Hub node. In an Oracle Flex cluster, only the Hub nodes have direct access to the OCR and voting disks. This is a major feature to consider when you plan a large-scale cluster environment: the configuration greatly reduces interconnect contention and gives the traditional standard cluster room to scale.
There are two ways to deploy a Flex cluster:
1) configure it when installing a new cluster;
2) upgrade an existing standard cluster to a Flex cluster.
If you are configuring a new cluster, select the cluster configuration type in step 3 of the installer and choose the "Configure a Flex cluster" option; then, in step 6, you must categorize the nodes as Hub nodes and Leaf nodes by selecting the role Hub or Leaf for each node. In addition, you can also assign a virtual hostname.
To convert a standard cluster to Flex cluster mode, perform the following steps:
1) Use the following command to check the current mode of the cluster:
$ crsctl get cluster mode status
2) Run the following commands as the root user:
$ crsctl set cluster mode flex
$ crsctl stop crs
$ crsctl start crs -wait
3) Change the role of each node according to your design:
$ crsctl get node role config
$ crsctl set node role {hub | leaf}
$ crsctl stop crs
$ crsctl start crs -wait
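After the final restart, you can re-run the command from step 1 to confirm that the cluster now reports flex mode:
$ crsctl get cluster mode status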
Note:
1) You cannot convert a Flex cluster back to standard cluster mode.
2) Changing the cluster node mode requires stopping and starting the cluster.
3) Make sure GNS is configured with a fixed VIP.
2.2. Backing up OCR in an ASM disk group
In 12c, OCR can now be backed up in an ASM disk group. This simplifies access to the OCR backup files from every node: when restoring OCR, you no longer need to work out which node holds the most recent backup; simply identify the latest backup in ASM and restore from it. The following example shows how to set an ASM disk group as the OCR backup location:
$ ocrconfig -backuploc +DG_OCR
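You can then list the existing backups from any node, for example:
$ ocrconfig -showbackup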
2.3. IPv6 support
In Oracle 12c, Grid Infrastructure now supports IPv4 and IPv6 network protocol configurations on the same network. You can now configure the public network (Public/VIP) with IPv4, IPv6, or a mixture of the two. However, all nodes in the same cluster must use the same IP protocol configuration.
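To review the cluster's current network configuration, you can query it with srvctl (the output depends on your environment):
$ srvctl config network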
3. RAC (Database) Enhancements
3.1. What-if Command Evaluation
The new "What-if" option of the srvctl command can be used to evaluate the impact of running the command. This new option of the Srvctl command allows you to simulate the command without actual execution or changes to the current system. This option is especially useful when you want to make changes to the current system but are not sure about the results. Therefore, this option provides the result of making changes. -The eval option can also be used with the crsctl command. For example, if you want to know what will happen when a specific database is stopped, you can use the following example:
$ srvctl stop database -d MYDB -eval
$ crsctl eval modify resource <resource_name> -attr "<attribute>=<value>"
3.2. Improvements to srvctl
The srvctl command has some new options. The following shows the newly added options for starting and stopping database and instance resources in the cluster:
srvctl start database | instance -startoption NOMOUNT | MOUNT | OPEN
srvctl stop database | instance -stopoption NORMAL | TRANSACTIONAL | IMMEDIATE | ABORT
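For example, to start a database in MOUNT mode (reusing the illustrative MYDB name from the earlier example):
$ srvctl start database -d MYDB -startoption MOUNT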