DBMS_STATS: Import and Export Schema-Level Statistics


In an Oracle database running the cost-based optimizer (CBO), statistics are the key input the CBO uses to generate the best execution plan. These statistics include column-level, table-level, index-level, and system-level statistics. All of them can be backed up, exported, imported, and also locked and unlocked. By exporting and importing statistics at the column, table, index, or system level, you can reproduce the production environment's behavior in a test environment for database performance tuning and SQL optimization. This article describes how to export and import schema-level statistics between different databases.

A detailed discussion of what optimizer statistics are and how they are used is beyond the scope of this article; refer to the Oracle documentation.


1. Uses of importing and exporting statistics
A. Production statistics can be imported into a test environment, so that execution plans generated there closely match those in production.
B. Import/export lets you freeze execution plans, that is, control and compare plans across different environments.
C. Statistics can be backed up before re-analyzing a schema, so they can be restored if performance degrades after the analyze.
D. System-level statistics can be copied to a small server to simulate how Oracle runs on a large server.
E. System-level statistics can also be carried over when migrating to a new server, keeping execution plans consistent until the new server is fully adopted.
F. Different workloads require different statistics, so the appropriate set can be loaded before each workload runs (such as OLTP during the day and batch jobs at night).
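Use case C above can be sketched with DBMS_STATS alone; the stat table name BACKUP_STATS and the statid below are illustrative assumptions, not names used elsewhere in this article:

```sql
-- Back up SCOTT's current statistics before re-analyzing
-- (BACKUP_STATS and the statid BEFORE_REANALYZE are illustrative)
exec dbms_stats.create_stat_table('SCOTT', 'BACKUP_STATS');
exec dbms_stats.export_schema_stats('SCOTT', 'BACKUP_STATS', 'BEFORE_REANALYZE');

-- Re-gather the schema statistics
exec dbms_stats.gather_schema_stats('SCOTT', cascade => true);

-- If performance degrades afterwards, restore the saved statistics
exec dbms_stats.import_schema_stats('SCOTT', 'BACKUP_STATS', 'BEFORE_REANALYZE');
```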


2. Export and Import Schema-level statistics
A. Collect the statistics (at the source schema or system level).
B. Create a table for storing the statistics (such as stats_table).
C. Use dbms_stats.export_schema_stats to export the schema statistics into the stats_table table.
D. Use Data Pump expdp to export the stats_table table. (The SQL*Plus COPY command can be used instead, avoiding the export/import.)
E. FTP or scp/cp the dump file to the server hosting the target schema.
F. Use Data Pump impdp to import the dump file into the target schema. Before importing the statistics, back up the target schema's existing statistics so they can be restored if needed.
G. Use dbms_stats.import_schema_stats to import the statistics into the required schema.
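Steps A through G can be condensed into the following sketch; the credentials, directory object, and file names are illustrative assumptions:

```sql
-- On the source database (scott/tiger, db_dump_dir, and st.dmp are illustrative)
exec dbms_stats.gather_schema_stats('SCOTT', cascade => true);          -- A. collect
exec dbms_stats.create_stat_table('SCOTT', 'STATS_TABLE');              -- B. staging table
exec dbms_stats.export_schema_stats('SCOTT', 'STATS_TABLE', 'SCOTT');   -- C. export to table
host expdp scott/tiger directory=db_dump_dir dumpfile=st.dmp tables=stats_table   -- D. dump

-- E. transfer st.dmp to the target server (ftp/scp/cp)

-- On the target database
host impdp scott/tiger directory=db_dump_dir dumpfile=st.dmp tables=stats_table   -- F. load table
exec dbms_stats.import_schema_stats('SCOTT', 'STATS_TABLE', 'SCOTT');   -- G. import stats
```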


3. Example of creating statistics
-- The following example gathers statistics at the schema level for SCOTT.

BEGIN
   DBMS_STATS.gather_schema_stats (ownname            => 'SCOTT',
                                   options            => 'GATHER AUTO',
                                   estimate_percent   => DBMS_STATS.auto_sample_size,
                                   method_opt         => 'for all columns size repeat',
                                   degree             => 8);
END;
/


4. Demonstration: exporting and importing statistics between different databases

-- Next we import SCOTT's statistics from database mmbo5 into SCOTT on database mmbo4.

a. Collect the statistics on the source

scott@mmbo5> delete from emp where deptno=20;
scott@mmbo5> commit;
scott@mmbo5> exec dbms_stats.gather_schema_stats('SCOTT',cascade=>true);
scott@mmbo5> select table_name,num_rows,last_analyzed from user_tables;

TABLE_NAME     NUM_ROWS LAST_ANALYZED
------------ ---------- -------------------
DEPT                  4 20130513 22:19:10
EMP                   9 20130513 22:19:10   ---> five rows were deleted from EMP, 9 remain
BONUS                 0 20130513 22:19:10
SALGRADE              5 20130513 22:19:10
TT                    4 20130513 22:19:10

b. Create a table to store the statistics. You can specify the tablespace in which the table is stored; by default it is created in the user's default tablespace.

scott@mmbo5> exec dbms_stats.create_stat_table('SCOTT','STATS_TABLE');
PL/SQL procedure successfully completed.

c. Export SCOTT's statistics into STATS_TABLE

scott@mmbo5> exec dbms_stats.export_schema_stats('SCOTT','STATS_TABLE','SCOTT');
PL/SQL procedure successfully completed.
scott@mmbo5> exit

d. Export the statistics table to a dump file

oracle@Dev-DB-04:~> expdp scott/tiger directory=db_dump_dir dumpfile=st.dmp logfile=st.log tables=stats_table
Export: Release ... - Production on Monday, 13 May, 2013 22:20:53
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
. . exported "SCOTT"."STATS_TABLE"              17.82 KB      74 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /u02/database/mmbo5/BNR/dump/st.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 22:21:11

e. FTP or scp the dump file to the server where the target schema is located. Because the target database is on the same server here, cp is enough:

oracle@Dev-DB-04:~> cp /u02/database/mmbo5/BNR/dump/st.dmp /u02/database/mmbo4/BNR/dump

f. Use Data Pump impdp to import the dump file into the target schema

oracle@Dev-DB-04:~> export ORACLE_SID=mmbo4
oracle@Dev-DB-04:~> impdp scott/tiger directory=db_dump_dir dumpfile=st.dmp logfile=imp_st.log tables=stats_table
Import: Release ... - Production on Monday, 13 May, 2013 22:22:57
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."STATS_TABLE"              17.82 KB      74 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Job "SCOTT"."SYS_IMPORT_TABLE_01" successfully completed at 22:23:03

g. Import the statistics into the target database schema. First check SCOTT's statistics before the import:

scott@mmbo4> select table_name,num_rows,last_analyzed from user_tables;

TABLE_NAME     NUM_ROWS LAST_ANALYZED
------------ ---------- -------------------
STATS_TABLE
DEPT                  4 20130513 22:16:05
EMP                  14 20130513 22:16:05
BONUS                 0 20130513 22:16:05
SALGRADE              5 20130513 22:16:05
T                 49991 20130513 22:16:05

6 rows selected.

-- Import the statistics
scott@mmbo4> exec dbms_stats.import_schema_stats('SCOTT','STATS_TABLE','SCOTT');
PL/SQL procedure successfully completed.

-- The import operation can be verified in the target database through the view dba_optstat_operations:
scott@mmbo4> alter session set nls_timestamp_tz_format='yyyy-mm-dd hh24:mi:ss';
scott@mmbo4> col start_time format a14
scott@mmbo4> col end_time format a14
scott@mmbo4> col operation format a30
scott@mmbo4> col target format a30
scott@mmbo4> set linesize 120
scott@mmbo4> select operation,target,start_time,end_time from dba_optstat_operations
             where operation like 'import%';

OPERATION                      TARGET                         START_TIME     END_TIME
------------------------------ ------------------------------ -------------- --------------
import_schema_stats            SCOTT                          2013-05-13 22: 2013-05-13 22:
                                                              24:23          24:23

-- Author: Robinson
-- Blog: http://blog.csdn.net/robinson_0612

-- The following query shows that last_analyzed under SCOTT has been updated to match the statistics on the source server:
scott@mmbo4> select table_name,num_rows,last_analyzed from user_tables;

TABLE_NAME     NUM_ROWS LAST_ANALYZED
------------ ---------- -------------------
STATS_TABLE
DEPT                  4 20130513 22:19:10
EMP                   9 20130513 22:19:10
BONUS                 0 20130513 22:19:10
SALGRADE              5 20130513 22:19:10
T                 49991 20130513 22:16:05

6 rows selected.

scott@mmbo4> select count(*) from emp;

  COUNT(*)
----------
        14

-- EMP in mmbo4 actually still holds 14 rows, yet num_rows shows 9: the statistics came from the source database.
-- Objects that exist in the source schema but not in the target schema (such as table TT on mmbo5) are not imported into mmbo4.
-- Objects that do not exist in the source schema but exist in the target schema (such as table T on mmbo4) keep their statistics unchanged.

h. Optionally drop the table that stored the statistics:

scott@mmbo4> exec dbms_stats.drop_stat_table('SCOTT','STATS_TABLE');
PL/SQL procedure successfully completed.

-- System-level statistics import and export is not demonstrated here; note that the corresponding
-- procedures in dbms_stats should be used:
-- dbms_stats.gather_system_stats
-- dbms_stats.import_system_stats
-- dbms_stats.export_system_stats
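As a sketch, system statistics can be moved with the same kind of user stat table; the 60-minute gathering interval and the table name STATS_TABLE below are illustrative assumptions:

```sql
-- Gather workload system statistics for 60 minutes (interval is illustrative)
exec dbms_stats.gather_system_stats(gathering_mode => 'INTERVAL', interval => 60);

-- Export the system statistics into a user-created stat table
exec dbms_stats.export_system_stats(stattab => 'STATS_TABLE');

-- After moving the table to the target database (expdp/impdp as above):
exec dbms_stats.import_system_stats(stattab => 'STATS_TABLE');
```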


