1. Enhancements to Automatic Storage Management (ASM)
1.1. Flex ASM
In a typical Grid Infrastructure installation, each node runs its own ASM instance, which acts as the storage container for the databases on that node. This configuration carries a single-point-of-failure risk: if the ASM instance on a node fails, all databases and instances running on that node are affected. To eliminate this single point of failure, Oracle 12c introduces the Flex ASM feature. Flex ASM is a fundamentally different architecture, in which only a small number of ASM instances run on a subset of the servers in the cluster. If an ASM instance on one node fails, Oracle Clusterware automatically starts a replacement ASM instance on a different node to preserve availability. This configuration also provides load balancing across the ASM instances running in the cluster. Another benefit of Flex ASM is that it can be configured to run on dedicated nodes.
When you select the Flex cluster installation option, Flex ASM is selected automatically, because a Flex cluster requires Flex ASM. Alternatively, you can opt for a standard cluster and do without Flex ASM. When you decide to use Flex ASM, you must confirm that the designated ASM network is available. You can enable Flex ASM during cluster installation, or use ASMCA to enable it later in a standard cluster environment.
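As a rough sketch of the ASMCA route, the conversion can be run in silent mode; the interface name, subnet, and port below are placeholder values, so verify the exact flags against your version's documentation:
$ asmca -silent -convertToFlexASM -asmNetworks eth1/192.168.10.0 -asmListenerPort 1521
ASMCA then generates a script that must be run as root to complete the conversion.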
The following commands show the current ASM mode:
$ asmcmd showclustermode
$ srvctl config asm
Alternatively, connect to an ASM instance and query the INSTANCE_TYPE parameter; if the value is ASMPROXY, Flex ASM is configured.
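For example, a minimal check from SQL*Plus, assuming you connect to the ASM instance as SYSASM:
$ sqlplus / as sysasm
SQL> SHOW PARAMETER instance_type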
1.2. Increased ASM storage limits
The hard limits on ASM disk group count and ASM disk size have been raised substantially. In 12c Release 1, the number of supported disk groups increases from 63 in 11g Release 2 to 511, and the maximum size of each ASM disk increases from 20 PB to 32 PB.
1.3. Tuning the ASM rebalance operation
The new EXPLAIN WORK FOR statement in 12c measures the work required by an ASM rebalance operation and stores the result in the V$ASM_ESTIMATE dynamic view. Using this view, you can choose an appropriate value for the POWER clause to tune the rebalance operation. For example, to measure the amount of work required to add a new ASM disk, run the following statements before actually performing the rebalance manually:
SQL> EXPLAIN WORK FOR ALTER DISKGROUP dg_data ADD DISK data_005;
SQL> SELECT est_work FROM v$asm_estimate;
SQL> EXPLAIN WORK SET STATEMENT_ID = 'add_disk' FOR ALTER DISKGROUP dg_data ADD DISK data_005;
SQL> SELECT est_work FROM v$asm_estimate WHERE statement_id = 'add_disk';
Based on the est_work values reported by the view, you can then adjust the POWER setting to improve the performance of the rebalance operation, as shown below.
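For instance, a minimal sketch of the subsequent rebalance; POWER 8 is an arbitrary illustrative value:
SQL> ALTER DISKGROUP dg_data ADD DISK data_005 REBALANCE POWER 8;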
1.4. ASM Disk Scrubbing
The new ASM disk scrubbing operation checks all ASM disks in a normal- or high-redundancy disk group for logical data corruption and automatically repairs any corruption it finds using the mirror copies on other ASM disks. Scrubbing can be performed on a disk group, on specific disks, or on individual files, with very little impact. The following examples illustrate disk scrubbing scenarios:
SQL> ALTER DISKGROUP dg_data SCRUB POWER LOW|HIGH|AUTO|MAX;
SQL> ALTER DISKGROUP dg_data SCRUB FILE '+dg_data/mydb/datafile/filename.xxxx.xxxx' REPAIR POWER AUTO;
1.5. Active Session History (ASH) for ASM
The V$ACTIVE_SESSION_HISTORY dynamic view now also provides active-session sampling for ASM instances. Note, however, that using this diagnostic data requires the Oracle Diagnostics Pack license.
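For example, a minimal query you might run against an ASM instance; the column selection is illustrative:
SQL> SELECT sample_time, session_id, event FROM v$active_session_history ORDER BY sample_time;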
2. Enhancements to the Grid Infrastructure architecture
2.1. Flex clusters
Oracle 12c provides two types of cluster configuration at installation time: the traditional standard cluster and the Flex cluster. In a traditional standard cluster, all nodes are tightly coupled with one another: they communicate over the private interconnect and access shared storage directly. A Flex cluster, by contrast, introduces two types of nodes arranged in a hub-and-leaf architecture. Hub nodes behave like the nodes of a traditional standard cluster: they are interconnected via the private network and can read and write storage directly. Leaf nodes are different: instead of accessing the underlying storage directly, they access storage and data through a hub node.
You can configure up to 64 hub nodes and a much larger number of leaf nodes. In a Flex cluster you can configure hub nodes without leaf nodes, but not leaf nodes without hub nodes, and multiple leaf nodes can be attached to a single hub node. In an Oracle Flex cluster, only the hub nodes access the OCR and voting disks directly. This is a valuable feature when you plan a large-scale cluster environment: it greatly reduces interconnect contention and gives the traditional standard cluster room to scale.
There are two ways of deploying a flex cluster:
1) When configuring a new cluster;
2) Upgrading from a standard cluster to a flex cluster;
If you are configuring a new cluster, select the Flex cluster option as the cluster configuration type in step 3 of the installer; then, in step 6, divide the nodes into hub and leaf nodes by selecting a role, Hub or Leaf, for each node, and optionally assign a virtual host name.
The following steps are required when converting from standard cluster mode to flex cluster mode:
1) Use the following command to check the current cluster mode:
$ crsctl get cluster mode status
2) Run the following commands as the root user:
$ crsctl set cluster mode flex
$ crsctl stop crs
$ crsctl start crs -wait
3) Change the role of each node according to your design:
$ crsctl get node role config
$ crsctl set node role {hub|leaf}
$ crsctl stop crs
$ crsctl start crs -wait
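Once the stack is back up, you can verify the outcome with the same kind of status commands used above:
$ crsctl get cluster mode status
$ crsctl get node role status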
Attention:
1) You cannot convert a flex cluster back to standard cluster mode.
2) Changing the cluster mode or a node's role requires stopping and restarting the cluster stack.
3) Make sure GNS is configured with a fixed VIP.
2.2. Backing up OCR in an ASM disk group
In 12c, the OCR can now be backed up to an ASM disk group, which simplifies access to OCR backup files from every node. When restoring the OCR, you no longer need to track which node holds its most recent backup; simply identify the latest backup stored in ASM and the recovery is straightforward. The following example shows how to set an ASM disk group as the OCR backup location:
$ ocrconfig -backuploc +DG_OCR
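You can then confirm the location and list the existing OCR backups:
$ ocrconfig -showbackup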
2.3. IPv6 Support
Oracle 12c now supports both the IPv4 and IPv6 network protocols on the same network. You can configure the public network (public/VIP addresses) with IPv4, IPv6, or a combination of the two protocols. However, all nodes in the same cluster must use the same IP protocol configuration.
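For example, a quick read-only way to review the cluster's current network configuration:
$ srvctl config network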
3. RAC (database) Enhancements
3.1. What-if Command Evaluation
The srvctl command's new what-if evaluation option lets you determine the effect of a command by simulating it, without actually executing it or making changes to the current system. This option is especially useful when you want to change the current system but are not sure of the outcome: it reports what the result of the change would be. The -eval option can also be used with the crsctl command. For example, if you want to know what would happen if you stopped a particular database, you can use the following commands:
$ srvctl stop database -d mydb -eval
$ crsctl eval modify resource <resource_name> -attr "value"
3.2. Miscellaneous srvctl improvements
The srvctl command gains several new options. The following shows the new start and stop options for database and instance resources, with a short usage sketch after the syntax lines:
$ srvctl start database|instance -startoption NOMOUNT|MOUNT|OPEN
$ srvctl stop database|instance -stopoption NORMAL|TRANSACTIONAL|IMMEDIATE|ABORT
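For instance, a minimal usage sketch, where mydb is a placeholder database name:
$ srvctl start database -d mydb -startoption MOUNT
$ srvctl stop database -d mydb -stopoption IMMEDIATE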