1. Content Summary
1) Table partition maintenance enhancements.
2) Improved database upgrades.
3) Restoring/recovering data files over the network.
4) Data Pump enhancements.
5) Real-time ADDM.
6) Concurrent statistics gathering.
2. Table Partition Maintenance Enhancements
In other articles, I explained how to move a table partition or sub-partition to a different tablespace online or offline. In this section, you will learn about the other table partitioning maintenance enhancements.
2.1. Adding multiple new partitions
Before 12c R1, you could add only one partition at a time to a partitioned table. To add multiple partitions, you had to execute a separate ALTER TABLE ADD PARTITION statement for each new partition. 12c supports adding multiple new partitions with a single ALTER TABLE ADD PARTITION command. The following example shows how to add more than one new partition to an existing partitioned table:
SQL> CREATE TABLE emp_part
       (eno NUMBER(8), ename VARCHAR2(100), sal NUMBER(6))
     PARTITION BY RANGE (sal)
       (PARTITION p1 VALUES LESS THAN (10000),
        PARTITION p2 VALUES LESS THAN (20000),
        PARTITION p3 VALUES LESS THAN (30000)
       );
Let's add a few new partitions below:
SQL> ALTER TABLE emp_part ADD
       PARTITION p4 VALUES LESS THAN (35000),
       PARTITION p5 VALUES LESS THAN (40000);
Similarly, you can add multiple new partitions to list-partitioned and system-partitioned tables, provided the table does not have a MAXVALUE (range) or DEFAULT (list) partition.
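As a sketch of the list-partitioned case (the table, column, and partition names here are illustrative, not from the original article), adding multiple partitions in one statement looks like this, assuming no DEFAULT partition exists:

```sql
-- Hypothetical list-partitioned table
SQL> CREATE TABLE sales_part
       (id NUMBER(8), region VARCHAR2(20))
     PARTITION BY LIST (region)
       (PARTITION p_east VALUES ('EAST'),
        PARTITION p_west VALUES ('WEST'));

-- Add two new list partitions with a single command
SQL> ALTER TABLE sales_part ADD
       PARTITION p_north VALUES ('NORTH'),
       PARTITION p_south VALUES ('SOUTH');
```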
2.2. Dropping and truncating multiple partitions or sub-partitions
As part of data maintenance, you typically drop or truncate partitions of a partitioned table. Before 12c R1, you could drop or truncate only one partition at a time. In 12c, the ALTER TABLE table_name {DROP | TRUNCATE} PARTITIONS command can drop or truncate multiple partitions or sub-partitions at once. The following example shows how to drop or truncate multiple partitions on a partitioned table:
SQL> ALTER TABLE emp_part DROP PARTITIONS p4, p5;
SQL> ALTER TABLE emp_part TRUNCATE PARTITIONS p4, p5;
To maintain the indexes at the same time, use the UPDATE INDEXES or UPDATE GLOBAL INDEXES clause, as follows:
SQL> ALTER TABLE emp_part DROP PARTITIONS p4, p5 UPDATE GLOBAL INDEXES;
SQL> ALTER TABLE emp_part TRUNCATE PARTITIONS p4, p5 UPDATE GLOBAL INDEXES;
If you drop or truncate a partition without the UPDATE GLOBAL INDEXES clause, you can query the ORPHANED_ENTRIES column of the USER_INDEXES or USER_IND_PARTITIONS dictionary views to find out whether the index contains stale (orphaned) index entries.
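For example, a query along these lines lists any index partitions that still contain orphaned entries after such an operation:

```sql
-- Show index partitions flagged as containing orphaned (stale) entries
SQL> SELECT index_name, partition_name, orphaned_entries
     FROM   user_ind_partitions
     WHERE  orphaned_entries = 'YES';
```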
2.3. Splitting a partition into multiple new partitions
In 12c, the new, more powerful SPLIT PARTITION clause allows you to split a particular partition or sub-partition into multiple new partitions with a single command:
SQL> CREATE TABLE emp_part
       (eno NUMBER(8), ename VARCHAR2(100), sal NUMBER(6))
     PARTITION BY RANGE (sal)
       (PARTITION p1 VALUES LESS THAN (10000),
        PARTITION p2 VALUES LESS THAN (20000),
        PARTITION p_max VALUES LESS THAN (MAXVALUE)
       );
SQL> ALTER TABLE emp_part SPLIT PARTITION p_max INTO
       (PARTITION p3 VALUES LESS THAN (25000),
        PARTITION p4 VALUES LESS THAN (30000),
        PARTITION p_max);
2.4. Merging multiple partitions into one partition
You can combine multiple partitions into a single partition with an ALTER TABLE MERGE PARTITIONS statement:
SQL> CREATE TABLE emp_part
       (eno NUMBER(8), ename VARCHAR2(100), sal NUMBER(6))
     PARTITION BY RANGE (sal)
       (PARTITION p1 VALUES LESS THAN (10000),
        PARTITION p2 VALUES LESS THAN (20000),
        PARTITION p3 VALUES LESS THAN (30000),
        PARTITION p4 VALUES LESS THAN (40000),
        PARTITION p5 VALUES LESS THAN (50000),
        PARTITION p_max VALUES LESS THAN (MAXVALUE)
       );
SQL> ALTER TABLE emp_part MERGE PARTITIONS p3, p4, p5 INTO PARTITION p_merge;
If the partitions cover a contiguous range of partition key values, you can also use the TO syntax, as in the following example:
SQL> ALTER TABLE emp_part MERGE PARTITIONS p3 TO p5 INTO PARTITION p_merge;
3. Improved Database Upgrade
Whenever a new release comes out, every DBA eventually faces an upgrade. This section explains two upgrade improvements introduced in 12c.
3.1. Pre-upgrade scripts
In 12c R1, a new, much-improved pre-upgrade information script, preupgrd.sql, replaces the previous utlu[121]s.sql script. Beyond running pre-upgrade validation checks, it also generates fixup scripts that address issues arising before and after the upgrade; the generated fixup scripts can be run to resolve problems at the appropriate stage, i.e. pre-upgrade and post-upgrade. When upgrading a database manually, the script must be run manually before the actual upgrade is started. When upgrading with the DBUA tool, however, the pre-upgrade script runs automatically as part of the upgrade process, and DBUA prompts you to run the fixup scripts if any errors are found. The following example shows how to run the script:
SQL> @$ORACLE_12GHOME/rdbms/admin/preupgrd.sql
The script produces a log file and preupgrade_fixup.sql/postupgrade_fixup.sql scripts, all under the $ORACLE_BASE/cfgtoollogs directory. Before you proceed with the actual upgrade, review the log and run the recommended actions and scripts it lists to resolve any outstanding issues.
Note: make sure you copy both the preupgrd.sql and utluppkg.sql scripts from the rdbms/admin directory of the 12c Oracle home to the rdbms/admin location of the current (source) database's Oracle home.
3.2. Parallel upgrade feature
Database upgrade time is proportional to the number of components configured in the database, not to the size of the database. In previous releases, there was no direct or indirect option to speed up the overall upgrade by running it in parallel.
In 12c R1, catctl.pl (the parallel upgrade utility) replaces the previous catupgrd.sql script and provides an option to run the upgrade in parallel, reducing the time required to complete the upgrade. The following procedure shows how to launch the upgrade with three parallel processes: start the database in upgrade mode, then run the following commands:
cd $ORACLE_12GHOME/perl/bin
$ ./perl catctl.pl -n 3 catupgrd.sql
When upgrading a database manually, you must run both steps above explicitly. The DBUA tool, however, performs both steps as part of its workflow.
4. Restoring/Recovering Data Files over the Network
Another major enhancement in 12c R1 is that you can now restore or recover data files, control files, spfiles, tablespaces, or even the entire database over the network, using the service name of the primary or standby database. This is especially useful when synchronizing the primary and standby databases.
When a large gap is found between the primary and the standby, you no longer need a complex roll-forward procedure to close it. RMAN can take incremental backups over the network and apply them to the physical standby to recover it. As noted, you can also use a service name to copy the required data files directly from the standby to the primary, for example when a data file or tablespace is lost on the primary, instead of restoring the data files from a backup set.
The following procedure shows how to use this new feature to roll the standby forward and synchronize it with the primary:
On the physical standby side:
$ ./rman target "username/password@standby_db_tns as sysbackup"
RMAN> RECOVER DATABASE FROM SERVICE primary_db_tns USING COMPRESSED BACKUPSET;
The example above uses the connection string primary_db_tns from the standby to take an incremental backup on the primary, transfers the incremental backups over the network to the standby, and applies them to synchronize the standby with the primary. You must make sure that primary_db_tns, the connection string pointing to the primary, is already configured on the standby side.
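For reference, primary_db_tns would be defined on the standby in tnsnames.ora roughly as follows; the host, port, and service name here are placeholders, not values from the original article:

```
primary_db_tns =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = primary_db))
  )
```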
In the following example, I'll illustrate a scenario in which a lost data file on the primary is fetched and restored from the standby:
On the primary side:
$ ./rman target "username/password@primary_db_tns as sysbackup"
RMAN> RESTORE DATAFILE '+dg_diskgroup/dbname/datafile/filename' FROM SERVICE standby_db_tns;
5. Data Pump Enhancements
This section introduces the Data Pump enhancements. There are several useful improvements, such as exporting views as tables and disabling redo logging during import.
5.1. Disabling redo log generation
Data Pump import now supports the newly introduced TRANSFORM option value DISABLE_ARCHIVE_LOGGING, which suppresses redo generation for objects during the import. When TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y is specified, no redo is generated for the objects in scope for the duration of the import. When importing large tables, this greatly reduces system load and the amount of redo generated, thereby speeding up the import. The feature applies to both tables and indexes, as the following example illustrates:
$ ./impdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log transform=disable_archive_logging:y
5.2. Exporting views as tables
This is another Data Pump enhancement. With the VIEWS_AS_TABLES option, you can export a view as a table. The following example shows how to export a view as a table during export (the view name here is a placeholder):
$ ./expdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log views_as_tables=my_view
6. Real-Time ADDM Analysis
Analyzing past and current database health with tools such as AWR, ASH, and ADDM is part of every DBA's life. While each of these tools can measure overall database health and performance at its own level, none of them is usable when the entire database is unresponsive or hung.
If you have configured Oracle Enterprise Manager 12c Cloud Control, you can diagnose serious performance problems even while the database is unresponsive or hung. It gives you a complete picture of what is happening in the database and may also suggest a remedy for the problem.
The following steps explain how to analyze the database state with Oracle Enterprise Manager Cloud Control:
1) On the database home page, select the Emergency Monitoring option from the Performance menu. This shows the top blocking sessions in the Hang Analysis table.
2) Select the Real-Time ADDM option from the Performance menu to run a real-time ADDM analysis.
3) After the performance data has been collected, click the Findings tab for an interactive summary of all findings.
7. Concurrent Statistics Gathering on Multiple Tables
In previous Oracle releases, whenever you ran a DBMS_STATS procedure to gather statistics at the table, index, schema, or database level, Oracle gathered statistics for one table at a time. If the table was large, increasing the degree of parallelism was the usual recommendation. In 12c R1, you can gather statistics on multiple tables, partitions, and sub-partitions concurrently. Before using this feature, you must make the following settings at the database level to enable it:
SQL> ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_MAIN';
SQL> ALTER SYSTEM SET job_queue_processes = 4;
SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL');
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');
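To confirm that concurrent statistics gathering is enabled, you can read the preference back with DBMS_STATS.GET_PREFS, the query counterpart of SET_GLOBAL_PREFS:

```sql
-- Returns the current value of the CONCURRENT preference (e.g. ALL)
SQL> SELECT DBMS_STATS.GET_PREFS('CONCURRENT') FROM dual;
```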
Oracle 12c Feature Enhancements: Maintenance, Upgrades, Recovery, and Data Pump