At the Oracle OpenWorld conference in San Francisco in 2015, Oracle released the beta version of Database 12.2. Although the beta is available only to selected users, the conference unveiled the most important new features of 12.2, and Yunhe Enmo, as an Oracle beta user, has already begun testing the product. At the recently concluded "Oracle Technology Carnival" conference, more detailed sessions revealed further content. In this article, I will walk through the new features of Oracle Database 12.2 with you.
Implementation of Oracle Sharding
Simply put, Oracle's sharding technology is implemented as an extension of partitioning. The partitions of a table, which previously could reside in different tablespaces, can now reside in different databases.
Because different partitions live in different databases, the data is physically isolated; this is sharding.
How does Sharding implement data routing?
Since the data is split, how is routing handled at access time? In the sharding architecture, a "shard directory" database manages the distribution of the shards. When an application accesses data using the sharding key, the connection pool determines the access path and routes the request directly to the shard that holds the data. If the application does not specify a sharding key, the coordinator database assists in making the routing decision.
So what is the connection pool mentioned here?
You may recall that Oracle 12.1 introduced a new product component, GDS (Global Data Services). GDS builds an access "connection pool" that provides proxy and routing services for back-end database access. The shard directory mentioned earlier is configured in GDS.
How do I create a sharded data table?
Before creating sharded objects, you need to create a tablespace set, which contains tablespace definitions across the different databases; that is, the tablespaces previously created for different partitions are now distributed across different databases.
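As a sketch of the 12.2 syntax (the table and column names here are illustrative, not from the original article), creating a tablespace set and a sharded table looks roughly like this:

```sql
-- Run on the shard catalog; the DDL is propagated to all shards.
-- A tablespace set creates matching tablespaces across the shards.
CREATE TABLESPACE SET ts_set_1
  USING TEMPLATE (DATAFILE SIZE 100M AUTOEXTEND ON NEXT 10M);

-- A sharded table, hash-partitioned on the sharding key and
-- stored in the tablespace set created above.
CREATE SHARDED TABLE customers
( cust_id    NUMBER        NOT NULL,
  cust_name  VARCHAR2(100),
  CONSTRAINT pk_customers PRIMARY KEY (cust_id)
)
PARTITION BY CONSISTENT HASH (cust_id)
PARTITIONS AUTO
TABLESPACE SET ts_set_1;
```

Note that the sharding key (here `cust_id`) is what the connection pool uses for direct routing, as described above.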
How do I configure a connection pool?
The configuration of the connection pool is described in detail in the GDS documentation under sharded database deployment: first the shard catalog (the configuration database for the shard directory) is created, then GSM (Global Service Manager), the global service management component, is configured.
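A rough outline of that deployment flow in the GDSCTL utility (host names, credentials, and region names here are placeholders; the exact options are in the GDS and sharding documentation):

```shell
# Create the shard catalog in a designated database:
gdsctl> create shardcatalog -database cathost:1521/catdb -user gsmcatuser/pwd -region region1
# Register and start a Global Service Manager:
gdsctl> add gsm -gsm gsm1 -catalog cathost:1521/catdb -region region1
gdsctl> start gsm -gsm gsm1
# Define a shard group and add shard databases to it:
gdsctl> add shardgroup -shardgroup primary_group -deploy_as primary -region region1
gdsctl> create shard -shardgroup primary_group -destination shardhost1 -credential os_cred
# Deploy the configuration:
gdsctl> deploy
```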
The following diagram shows a GDS configuration at a glance:
If the role of GDS was not obvious in 12.1, its importance to sharding is increasingly highlighted in 12.2.
Oracle's Multitenant and In-Memory option updates
The Multitenant option, built for the cloud, keeps moving toward cloud convenience and automation. In 12.2, the number of PDBs that can coexist increases from 252 in the previous release to 4,096, and hot clone, refresh, and online tenant relocation are supported. A hot clone of a PDB can be taken while the business workload is running, with change data synchronized in real time so the clone stays caught up, enabling an online switchover; this greatly improves the cloud migration process. For the user it is simpler still: under OEM management, almost all of the work can be done automatically.
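As a sketch of the 12.2 syntax (the PDB names and database link are illustrative), a refreshable hot clone can be created and kept caught up like this:

```sql
-- On the target CDB: clone a remote PDB over a database link
-- while the source stays open, and make the clone refreshable.
CREATE PLUGGABLE DATABASE pdb_clone
  FROM pdb_prod@prod_link
  REFRESH MODE EVERY 10 MINUTES;   -- or REFRESH MODE MANUAL

-- A manual catch-up refresh (the clone must be closed to refresh):
ALTER PLUGGABLE DATABASE pdb_clone CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb_clone REFRESH;

-- When switching over, stop refreshing and open read-write:
ALTER PLUGGABLE DATABASE pdb_clone REFRESH MODE NONE;
ALTER PLUGGABLE DATABASE pdb_clone OPEN;
```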
The In-Memory option has also been enhanced in 12.2, and on ADG this feature allows further separation of reads and writes: because an ADG standby is read-only, the in-memory data on the standby can differ from the primary; for example, the standby can hold a wider range of data in memory for real-time analytics. The improvements in performance and ease of use are also notable: in 12.2, In-Memory can automatically populate data into memory based on Heat Map statistics, and can dynamically evict cold data to free memory space, simplifying management for the user.
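A sketch of the relevant 12.2 syntax (the table name is illustrative): enabling In-Memory on a table, plus an ADO policy that uses Heat Map data to evict cold segments from the column store:

```sql
-- Heat Map tracking must be on for ADO policies to fire:
ALTER SYSTEM SET heat_map = ON;

-- Mark a table for population into the In-Memory column store:
ALTER TABLE sales INMEMORY PRIORITY HIGH;

-- 12.2 ADO policy: evict the segment from the column store
-- after 30 days without access, freeing memory automatically.
ALTER TABLE sales ILM ADD POLICY
  NO INMEMORY SEGMENT AFTER 30 DAYS OF NO ACCESS;
```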
Thoughtful improvements in the details
In addition to these big improvements, Oracle also keeps refining the details of Data Guard in thoughtful, user-friendly ways.
DBCA standby creation - after installing the software and starting the listener on the standby host, you can use DBCA to create a standby database, pointing it at the primary to fetch the files;
Creating a Data Guard standby was already fairly simple, and the RMAN steps were streamlined; now DBCA can be used as well, which is even more convenient. Thoughtful enough?
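A hedged sketch of the 12.2 DBCA invocation (host, SID, and unique names are placeholders; `-createDuplicateDB` with `-createAsStandby` is the relevant 12.2 option):

```shell
# On the standby host, with the software installed and the listener running:
dbca -silent -createDuplicateDB \
     -gdbName orcl \
     -primaryDBConnectionString primhost:1521/orcl \
     -sid orcl \
     -createAsStandby \
     -dbUniqueName orcl_stby
```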
Password file maintenance - once a notorious "headache", keeping password files in sync is now automatic;
Everyone has probably been bitten before by a password file change on the primary not reaching the standby; now all of this is maintained automatically. Thoughtful enough?
AWR supports remote snapshots - AWR can capture information from remote databases, including ADG standbys;
Bear in mind that previously on ADG, the standby could only use Statspack for performance analysis and diagnostics; now AWR is supported. Impressive enough?
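In 12.2 this is built on the Unified Monitoring Framework (`DBMS_UMF`). A rough sketch follows; the node and database link names are placeholders, and the parameter lists are abbreviated, so check the PL/SQL packages reference for the exact signatures:

```sql
-- On each participating database, assign a UMF node name:
exec DBMS_UMF.configure_node('stby_node');

-- On the destination database that will store the AWR data,
-- create a topology and register the remote node over DB links:
exec DBMS_UMF.create_topology('topo1');
exec DBMS_UMF.register_node('topo1', 'stby_node',
                            'dblink_to_stby', 'dblink_to_dest');

-- Register the remote node with AWR, then take remote snapshots:
exec DBMS_WORKLOAD_REPOSITORY.register_remote_database(node_name => 'stby_node');
exec DBMS_WORKLOAD_REPOSITORY.create_remote_snapshot('stby_node');
```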
Connection preservation - sessions stay connected during a Data Guard switchover;
For ADG, failover and switchover can finally (Oracle's own people said "finally" too) preserve connected sessions, which greatly improves the user experience. Impressive enough?
Automatic block repair enhancements - ADG automatic block repair, introduced in 11gR2, is now very mature, and the range of repairable corruption types is greatly expanded;
Parallel redo apply in 12.2 Data Guard
Bear in mind that before 12.2, the DG standby could apply redo through the MRP process on only one instance; now multiple instances can apply in parallel.
In an 8-node RAC environment, apply rates of 3500 MB+/sec can be achieved, which greatly improves synchronization efficiency in high-volume standby environments.
Multi-instance apply can run in parallel on all mounted or open instances; you specify the number of parallel recovery instances when executing RECOVER, with a command similar to the following:
RECOVER MANAGED STANDBY DATABASE DISCONNECT USING INSTANCES 4;
We can compare the architectures of single-instance and multi-instance apply. In the conventional mode, a multi-instance standby can have multiple Remote File Server (RFS) processes receiving redo threads, but only one instance runs the Managed Recovery Process (MRP) to apply redo:
Of course, even on a single instance recovery can run in parallel: the MRP spawns multiple recovery slave processes, as the official documentation notes:
The managed recovery process (MRP) applies archived redo log files to the physical standby database, and automatically determines the optimal number of parallel recovery processes at the time it starts. The number of parallel recovery slaves spawned is based on the number of CPUs available on the standby server.
In PayPal's shared presentation "Internals about DataGuard", there is a page describing MRP on RAC for reference (reply "PayPal" to the public account to get this document):
Oracle 12.2 supports multi-instance parallel MRP recovery, and the following architecture diagram details this improvement: with a coordinator process orchestrating, multiple MRPs can apply redo in parallel.
This change will greatly enhance the efficiency and usability of Data Guard.
Isn't this yet another thoughtful enhancement?
Whether in its headline features or in its small thoughtful touches, this RDBMS keeps earning more and more affection. In the cloud era, let's keep pushing forward together.
Reposted from: http://chuansong.me/n/2187336