Oracle 12c Feature Enhancements for Maintenance, Upgrades, Recovery, and Data Pump

Source: Internet
Author: User

1. Content Summary

1) Table partition maintenance enhancements

2) Improved database upgrades

3) Restoring/recovering data files across the network

4) Data Pump enhancements

5) Real-time ADDM

6) Concurrent statistics gathering

2. Table Partition Maintenance Enhancements

In other articles, I explained how to move a table partition or sub-partition to a different tablespace, either online or offline.

In this section, you will learn about other aspects of the table partitioning enhancements.

2.1. Adding multiple new partitions

Before 12c R1, only one partition could be added at a time to a partitioned table; to add multiple partitions, you had to run a separate ALTER TABLE ... ADD PARTITION statement for each new partition. 12c supports adding multiple new partitions with a single ALTER TABLE ... ADD PARTITION command. The following example shows how to add multiple new partitions to an existing partitioned table:

SQL> CREATE TABLE emp_part
       (eno NUMBER(8), ename VARCHAR2(40), sal NUMBER(6))
     PARTITION BY RANGE (sal)
       (PARTITION p1 VALUES LESS THAN (10000),
        PARTITION p2 VALUES LESS THAN (20000),
        PARTITION p3 VALUES LESS THAN (30000));

Now let's add a few new partitions:

SQL> ALTER TABLE emp_part ADD
       PARTITION p4 VALUES LESS THAN (40000),
       PARTITION p5 VALUES LESS THAN (50000);

Similarly, you can add multiple new partitions to list- and system-partitioned tables, provided there is no MAXVALUE partition defined on those tables.
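For instance, adding multiple partitions to a list-partitioned table works the same way. A minimal sketch; the sales_part table and its region values are hypothetical:

SQL> CREATE TABLE sales_part
       (sale_id NUMBER(10), region VARCHAR2(10))
     PARTITION BY LIST (region)
       (PARTITION p_east VALUES ('EAST'));

SQL> ALTER TABLE sales_part ADD
       PARTITION p_west VALUES ('WEST'),
       PARTITION p_north VALUES ('NORTH');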

2.2. Dropping and truncating multiple partitions or sub-partitions

As part of data maintenance, you typically drop or truncate partitions on partitioned tables. Before 12c R1, you could drop or truncate only one partition at a time.

In 12c, the ALTER TABLE table_name {DROP | TRUNCATE} PARTITIONS command can drop or truncate multiple partitions or sub-partitions at once.

The following example shows how to delete or truncate multiple partitions on a partitioned table:

SQL> ALTER TABLE emp_part DROP PARTITIONS p4, p5;

SQL> ALTER TABLE emp_part TRUNCATE PARTITIONS p4, p5;

To maintain the indexes at the same time, use the UPDATE INDEXES or UPDATE GLOBAL INDEXES clause, as in the following examples:

SQL> ALTER TABLE emp_part DROP PARTITIONS p4, p5 UPDATE GLOBAL INDEXES;

SQL> ALTER TABLE emp_part TRUNCATE PARTITIONS p4, p5 UPDATE GLOBAL INDEXES;

If you drop or truncate partitions without the UPDATE GLOBAL INDEXES clause, you can query the ORPHANED_ENTRIES column of the USER_INDEXES or USER_IND_PARTITIONS dictionary views to find out whether the index contains stale (orphaned) index entries.
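You can check for such orphaned entries with a simple dictionary query (a sketch; EMP_PART is the table from the examples above):

SQL> SELECT index_name, orphaned_entries
       FROM user_indexes
      WHERE table_name = 'EMP_PART';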

2.3. Splitting a partition into multiple new partitions

In 12c, the new SPLIT PARTITION clause allows you to split a specific partition or sub-partition into multiple new partitions with a single command:

SQL> CREATE TABLE emp_part
       (eno NUMBER(8), ename VARCHAR2(40), sal NUMBER(6))
     PARTITION BY RANGE (sal)
       (PARTITION p1 VALUES LESS THAN (10000),
        PARTITION p_max VALUES LESS THAN (MAXVALUE));

SQL> ALTER TABLE emp_part SPLIT PARTITION p_max INTO
       (PARTITION p2 VALUES LESS THAN (20000),
        PARTITION p3 VALUES LESS THAN (30000),
        PARTITION p_max);

2.4. Merging multiple partitions into one partition

You can combine multiple partitions into one partition with an ALTER TABLE ... MERGE PARTITIONS statement:

SQL> CREATE TABLE emp_part
       (eno NUMBER(8), ename VARCHAR2(40), sal NUMBER(6))
     PARTITION BY RANGE (sal)
       (PARTITION p1 VALUES LESS THAN (10000),
        PARTITION p2 VALUES LESS THAN (20000),
        PARTITION p3 VALUES LESS THAN (30000),
        PARTITION p4 VALUES LESS THAN (40000),
        PARTITION p5 VALUES LESS THAN (50000));

SQL> ALTER TABLE emp_part MERGE PARTITIONS p3, p4, p5 INTO PARTITION p_merge;

If these partitions cover a contiguous range of partition key values, you can also use the following command:

SQL> ALTER TABLE emp_part MERGE PARTITIONS p3 TO p5 INTO PARTITION p_merge;

3. Improved Database Upgrade

Whenever a new version is announced, every DBA must sooner or later face an upgrade.

This section explains two upgrade improvements newly introduced in 12c.

3.1. Pre-Upgrade Scripts

In 12c R1, a new and significantly improved pre-upgrade information script, preupgrd.sql, replaces the previous utlu[121]s.sql script. In addition to performing pre-upgrade checks and validation, it can also generate fixup scripts that address various issues arising before and after the upgrade process.

The generated fixup scripts can be executed to resolve problems at different stages, for example pre-upgrade and post-upgrade.

When you upgrade the database manually, the script must be executed manually before starting the actual upgrade.

However, when you upgrade a database with the DBUA tool, the pre-upgrade script is executed automatically as part of the upgrade process, and you are prompted to run the fixup scripts if it finds any issues. The following example shows how to execute the script:

SQL> @$ORACLE_12GHOME/rdbms/admin/preupgrd.sql

The above script produces a log file and [pre/post]upgrade_fixup.sql scripts. All of these files are generated under the $ORACLE_BASE/cfgtoollogs directory.

Before you proceed with the actual upgrade, you should review the log, perform the recommended actions, and run the scripts it recommends to resolve any outstanding issues.
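For example, the generated fixup scripts can be run at the appropriate stage (a sketch; the exact file names and their location under $ORACLE_BASE/cfgtoollogs depend on your environment):

SQL> @preupgrade_fixup.sql    -- before starting the upgrade

SQL> @postupgrade_fixup.sql   -- after the upgrade completes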

Note: Make sure you copy both the preupgrd.sql and utluppkg.sql scripts from the 12c Oracle home's /rdbms/admin folder to the current database home's /rdbms/admin location.

3.2. Parallel Upgrade Feature

The database upgrade time is proportional to the number of components configured in the database, not to the size of the database.
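You can see which components are configured in a database, and therefore what drives the upgrade time, by querying the DBA_REGISTRY dictionary view:

SQL> SELECT comp_id, version, status FROM dba_registry;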

In previous versions, there was no direct or indirect option available to run the upgrade process in parallel and complete it faster.

The parallel upgrade feature in 12c R1 replaces direct execution of the previous catupgrd.sql script with the catctl.pl utility, which has an option to run the upgrade in parallel and thus shortens the time required to complete the upgrade process.

The following procedure shows how to start the upgrade with the parallel feature (3 processes): start the database in upgrade mode, and then execute the following commands:

$ cd $ORACLE_12_HOME/perl/bin

$ ./perl $ORACLE_12_HOME/rdbms/admin/catctl.pl -n 3 $ORACLE_12_HOME/rdbms/admin/catupgrd.sql

When a database is upgraded manually, the above two steps must be executed explicitly. However, the DBUA tool performs both steps automatically.

4. Restoring/Recovering Data Files over the Network

In 12c R1, a very significant enhancement is that you can now restore or recover data files, control files, SPFILEs, tablespaces, or even the entire database between primary and standby databases through a service name.

This is very useful when synchronizing the primary and standby databases.

When a large gap is found between the primary and standby databases, you no longer need a complex roll-forward procedure to close it.

RMAN can obtain incremental backups over the network and apply them to the physical standby to recover the standby database.

As mentioned, you can use the service name to copy the required data files directly from the standby to the primary, for example when a data file or tablespace on the primary is lost, or when the data files cannot be restored from a backup set.

The following procedure shows how to perform a roll forward with the new feature to synchronize the standby with the primary:

On the physical standby side:

$ ./rman target "username/password@standby_db_tns as sysbackup"

RMAN> RECOVER DATABASE FROM SERVICE primary_db_tns USING COMPRESSED BACKUPSET;

In the example above, RMAN connects to the primary database through the primary_db_tns connect string, runs an incremental backup, transfers the incremental backup to the standby, and applies the files to synchronize the standby database.

However, you need to make sure that the connect string primary_db_tns, pointing to the primary database, is already configured on the standby side.
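For reference, the primary_db_tns entry in the standby's tnsnames.ora might look like the following (a sketch; host, port, and service name are placeholders):

PRIMARY_DB_TNS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = primary_db))
  )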

In the following example, I'll illustrate a scenario in which a missing data file on the primary is restored from the standby:

On the primary side:

$ ./rman target "username/password@primary_db_tns as sysbackup"

RMAN> RESTORE DATAFILE '+dg_diskgroup/dbname/datafile/filename' FROM SERVICE standby_db_tns;

5. Data Pump Enhancements

This section introduces Data Pump enhancements. There are many practical ones, such as exporting views as tables and disabling redo logging during import.

5.1. Turning off redo log generation

Data Pump import now supports the newly introduced TRANSFORM option, which suppresses redo generation for objects during the import. When the TRANSFORM option is set to DISABLE_ARCHIVE_LOGGING:Y, no redo is generated for the objects within the scope of the import. When importing large tables, this feature greatly reduces the redo generated and the resulting system load, speeding up the import process. It can be applied to tables and indexes. The following example illustrates this feature:

$ ./impdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log transform=disable_archive_logging:y

5.2. Converting a view to a table

Another Data Pump enhancement is the VIEWS_AS_TABLES option, which lets you export a view as a table. The following example shows how to export a view as a table during export:

$ ./expdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log views_as_tables=my_view

(Here my_view is a placeholder for the name of the view to export.)

6. Real-Time ADDM Analysis

Analyzing past and present database health through tools like AWR, ASH, and ADDM is part of every DBA's life.

Although each of these tools can measure the overall health and performance of a database at every level, none of them is usable when the entire database is unresponsive or hung.

When you encounter an unresponsive or hung database, and you have Oracle Enterprise Manager 12c Cloud Control configured, you can use it to diagnose serious performance issues. It gives you an overview of what is happening in the database and may provide a remedy to resolve the problem.

The following steps explain how to analyze database status with Oracle Enterprise Manager Cloud Control:

1) On the database home page, select the Emergency Monitoring option from the Performance menu. This shows the top blocking sessions in the Hang Analysis table.

2) Select the Real-Time ADDM option from the Performance menu to run a real-time ADDM analysis.

3) After the performance data has been collected, click the Findings page to get an interactive summary of all findings.

7. Concurrent Statistics Gathering for Multiple Tables

In previous Oracle versions, whenever you ran a DBMS_STATS procedure to gather table, index, schema, or database-level statistics, Oracle collected statistics for one table at a time; if the table was very large, increasing the degree of parallelism was recommended. In 12c R1, you can gather statistics on multiple tables, partitions, and sub-partitions concurrently.

To start using this feature, you must enable it at the database level, for example with the following settings:

SQL> ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_MAIN';

SQL> ALTER SYSTEM SET job_queue_processes = 4;

SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL');

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');
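To verify that concurrent statistics gathering is enabled, you can read the preference back with DBMS_STATS.GET_PREFS, the read counterpart of SET_GLOBAL_PREFS:

SQL> SELECT DBMS_STATS.GET_PREFS('CONCURRENT') FROM dual;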
