1. Enhancements to Automatic Storage Management (ASM)
1.1. Flex ASM
In a typical Grid Infrastructure installation, each node runs its own ASM instance, which acts as the storage container for the databases on that node. This configuration carries the risk of a single point of failure: if the ASM instance on a node has a problem or fails, all databases and instances running on that node are affected. To avoid this single point of failure, Oracle 12c introduces the Flex ASM feature. Flex ASM is a different concept and architecture: only a few ASM instances run on a subset of the servers in the cluster, and when an ASM instance on one node fails, Oracle Clusterware automatically starts ASM on a different node to replace the failed instance and ensure availability. In addition, this configuration provides load balancing across the ASM instances running on the nodes. Another benefit of Flex ASM is that it can be configured on a separate node.
When you select Flex Cluster as the installation option for the cluster, the Flex ASM configuration is selected automatically, because a Flex Cluster requires Flex ASM. You can also opt for a regular cluster without Flex ASM. When you decide to use Flex ASM, you must confirm that the required networks are available. You can choose to enable Flex ASM when you install the cluster, or you can use ASMCA to enable Flex ASM in a standard cluster environment.
The following commands show the current ASM mode:
$./asmcmd showclustermode
$./srvctl config asm
Alternatively, connect to an ASM instance and query the INSTANCE_TYPE parameter. If the output value is ASMPROX, Flex ASM is configured.
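For example, from a SQL*Plus session connected to the ASM instance, the parameter can be checked like this (a simple illustrative check, not output from a real system):
SQL> SHOW PARAMETER instance_type;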
1.2. Increased ASM storage limits
The hard limits on ASM disk group count and disk size have been greatly increased. In 12cR1, the number of supported ASM disk groups grows from 63 in 11gR2 to 511, and the maximum size of each ASM disk grows from 20PB to 32PB.
1.3. Adjusting ASM rebalance operations
12c The new "EXPLAINwork for" statement measures the workload of an ASM rebalance operation and enters the results into a v$asm_estimate dynamic view. With this dynamic view, you can adjust the "POWER LIMIT" clause to improve the balance operation. For example, if you want to measure the amount of work required to add a new ASM disk, you can use the following statement before you actually run the rebalance operation manually:
SQL> EXPLAIN WORK FOR ALTER DISKGROUP dg_data ADD DISK data_005;
SQL> SELECT est_work FROM v$asm_estimate;
SQL> EXPLAIN WORK SET STATEMENT_ID = 'add_disk' FOR ALTER DISKGROUP dg_data ADD DISK data_005;
SQL> SELECT est_work FROM v$asm_estimate WHERE statement_id = 'add_disk';
Based on the output of the dynamic view, you can adjust the POWER limit to improve the performance of the rebalance operation.
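For instance, if the estimate suggests that a higher power value is acceptable, the rebalance could then be run with a larger limit like this (the value 8 is purely illustrative):
SQL> ALTER DISKGROUP dg_data REBALANCE POWER 8;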
1.4. ASM Disk scrubbing
The new ASM disk scrubbing operation, available for disk groups with normal or high redundancy, checks all ASM disks in the disk group for logical data corruption and automatically repairs the corruption, if detected, using the ASM mirror copies on other disks. Disk scrubbing can be performed on a disk group, on specific disks, or on individual files, with very little impact. The following examples illustrate disk scrubbing scenarios:
SQL> ALTER DISKGROUP dg_data SCRUB POWER LOW|HIGH|AUTO|MAX;
SQL> ALTER DISKGROUP dg_data SCRUB FILE '+dg_data/mydb/datafile/filename.xxxx.xxxx' REPAIR POWER AUTO;
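Like other ASM operations, the progress of a running scrub can be observed through the V$ASM_OPERATION view; a simple illustrative query:
SQL> SELECT group_number, operation, state, power FROM v$asm_operation;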
1.5. Active Session History (ASH) for ASM
The V$ACTIVE_SESSION_HISTORY dynamic view now also provides sampling of active sessions for ASM instances. However, using it requires the Diagnostics Pack license.
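As an illustration, a hedged example of browsing the sampled ASM session activity (the columns are chosen only for demonstration):
SQL> SELECT sample_time, session_id, session_state, event FROM v$active_session_history ORDER BY sample_time DESC;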
2. Enhancements to the Grid Infrastructure architecture
2.1. Flex clusters
Oracle 12c provides two types of cluster configurations for cluster installation: traditional standard clusters and flex clusters. In a traditional standard cluster, all nodes are tightly integrated with each other, communicate over the private network, and access the shared storage directly. A flex cluster, on the other hand, introduces two types of nodes arranged in a hub-and-leaf architecture. Hub nodes are similar to nodes in a traditional standard cluster: they are interconnected via the private network and can read and write storage directly. Leaf nodes are different: instead of accessing the underlying storage directly, they access storage and data through the hub nodes.
You can configure up to 64 hub nodes, and many more leaf nodes. In a flex cluster, you can configure hub nodes without leaf nodes, but you cannot configure leaf nodes without hub nodes. Multiple leaf nodes can be configured for a single hub node. In an Oracle Flex Cluster, only the hub nodes access the OCR/voting disk directly. This is a useful feature when you plan a large-scale cluster environment: it greatly reduces interconnect contention and gives a traditional standard cluster room to scale.
There are two ways of deploying a flex cluster:
1) When configuring a new cluster;
2) Upgrading from a standard cluster to a flex cluster.
If you are configuring a new cluster, you select the cluster configuration type in step three and choose the Flex Cluster option; then, in step six, you assign each node the role of hub or leaf and, optionally, a virtual host name.
The following steps are required when converting from a standard cluster mode to a flex cluster mode:
1) Use the following command to get the current cluster mode:
$./crsctl get cluster mode status
2) Run the following commands as the root user:
$./crsctl set cluster mode flex
$./crsctl stop crs
$./crsctl start crs -wait
3) Change the role of each node according to your design:
$./crsctl get node role config
$./crsctl set node role hub|leaf
$./crsctl stop crs
$./crsctl start crs -wait
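After the restart, the new cluster mode and node roles can be verified, for example, with:
$./crsctl get cluster mode status
$./crsctl get node role status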
Attention:
1) You cannot convert a flex cluster back to a standard cluster.
2) Changing the cluster mode requires stopping and restarting the cluster.
3) Make sure GNS is configured with a fixed VIP.
2.2. Backing up OCR in an ASM disk group
In 12c, OCR can now be backed up in an ASM disk group. This simplifies access to the OCR backup files from every node. When restoring OCR, you no longer have to track which node holds the latest backup; simply identify the latest backup in ASM and the recovery is easy to complete. The following example shows how to set an ASM disk group as the OCR backup location:
$./ocrconfig -backuploc +DG_OCR
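To confirm the configured location and list the existing OCR backups afterwards, the showbackup option of ocrconfig can be used, for example:
$./ocrconfig -showbackup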
2.3. IPv6 Support
In Oracle 12c, Oracle now supports configuring both the IPv4 and IPv6 network protocols on the same network. You can configure the public network (public/VIP) with IPv4, IPv6, or a combination of the two. However, all nodes in the same cluster must use the same IP protocol configuration.
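The cluster's current network configuration, including the protocol in use, can be reviewed, for example, with:
$./srvctl config network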
3. RAC (database) enhancements
3.1. What-if command evaluation
The srvctl command's new what-if evaluation option lets you determine the effect of running a command without actually executing it or changing the current system. This option is especially useful when you want to make a change to the current system but are not sure what the result will be; it reports what the outcome of the change would be. The -eval option can also be used with the crsctl command. For example, if you want to know what would happen if you stopped a particular database, you can use the following:
$./srvctl stop database -d mydb -eval
$./crsctl eval modify resource <resource_name> -attr "value"
3.2. Srvctl improvements
The srvctl command has several newly added options. The following shows the new start and stop options for database and instance resources in the cluster:
$./srvctl start database|instance -startoption NOMOUNT|MOUNT|OPEN
$./srvctl stop database|instance -stopoption NORMAL|TRANSACTIONAL|IMMEDIATE|ABORT
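For example, a database could be brought up only as far as the mount state like this (the database name mydb is illustrative):
$./srvctl start database -d mydb -startoption mount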