SQL Performance Analyzer (SPA) is now widely used in upgrade and migration scenarios. It can also be considered for other scenarios, such as parameter changes or I/O subsystem changes, but its main purpose is to detect SQL statements whose performance degrades after an upgrade, so that the upgrade does not leave the system unusable because of regressed SQL.
The main steps of an SPA exercise are as follows:
- Capture the SQL load on the production system and generate the SQL Tuning Set;
- Create a staging table, pack the SQL Tuning Set into it, export it, and transfer it to the test database;
- Import the staging table and unpack its data back into a SQL Tuning Set;
- Create an SPA task and generate the 10g trial and the 11g trial in the 11g database;
- Execute the comparison task and generate the SPA report;
- Analyze SQL statements with degraded performance;
Before using SPA, you must first read the note Using Real Application Testing Functionality in Earlier Releases (Document ID 560977.1), in particular Table 3: SQL Performance Analyzer Availability Information. This table tells us which patches must be installed for SPA on the source and target versions.
1. Capture SQL load on the production system and generate SQL Tuning Set
This step is not very complex; I have covered the collection process in an earlier article. There are several collection methods, mainly:
- Cursor cache
- AWR snapshots
- AWR baseline
- Another SQL set
- 10046 trace file (11g+)
We generally use cursor-cache collection and AWR history collection. Cursor-cache collection helps us capture as many SQL statements as possible; to make sure more statements are collected, we capture several times a day over a long period. In our production environment we capture 4 times per day. The AWR history helps us collect the top SQL statements; the project in our production environment collects one month of AWR data. Together, these two collections give an essentially complete list of the SQL in the system.
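For the AWR-based method, a sketch like the following can be used. The snapshot range (100, 200) and the schema filter are hypothetical; choose the snapshot IDs that bound your collection window:

```sql
-- Sketch: load SQL captured in AWR snapshots into the tuning set.
-- Snapshot IDs 100/200 are illustrative; look them up in dba_hist_snapshot.
DECLARE
  mycur dbms_sqltune.sqlset_cursor;
BEGIN
  OPEN mycur FOR
    SELECT VALUE(p)
      FROM TABLE(dbms_sqltune.select_workload_repository(
                   begin_snap     => 100,
                   end_snap       => 200,
                   basic_filter   => 'parsing_schema_name IN (''ORAADMIN'')',
                   attribute_list => 'ALL')) p;
  dbms_sqltune.load_sqlset(sqlset_name     => 'sql_test',
                           populate_cursor => mycur,
                           load_option     => 'MERGE');
  CLOSE mycur;
END;
/
```

With load_option => 'MERGE', repeated loads from successive snapshot ranges accumulate into the same SQL set, which matches the multi-capture approach described above.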
[Note] Literal SQL may appear during collection, which can make the SQLSET result set very large, because the underlying tables involve CLOB fields. If the result set is too large, conversion to the staging table is very slow, and a half-finished conversion can fail with ORA-01555 because UNDO is not large enough. To solve this problem, we recommend filtering during collection. For details, refer to the article I wrote on removing duplicates during SPA cursor collection.
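One way to filter at collection time is to push conditions into basic_filter so unwanted cursors never enter the SQL set. This is only a sketch; the module name below is an illustrative example, not from the original article:

```sql
-- Sketch: exclude ad-hoc tool sessions while selecting from the cursor cache.
-- Adjust the module filter to whatever generates literal-SQL noise in your system.
SELECT VALUE(p)
  FROM TABLE(dbms_sqltune.select_cursor_cache(
               basic_filter => 'parsing_schema_name IN (''ORAADMIN'')
                                AND module <> ''PL/SQL Developer''')) p;
```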
-- Create an SPA user and grant the required privileges
SQL> create user spa identified by spa default tablespace spa;
User created.
SQL> grant connect, resource to spa;
Grant succeeded.
SQL> grant administer sql tuning set to spa;
Grant succeeded.
SQL> grant execute on dbms_sqltune to spa;
Grant succeeded.
SQL> grant select any dictionary to spa;
Grant succeeded.

-- Create the SQL tuning set
SQL> exec dbms_sqltune.create_sqlset('sql_test');
PL/SQL procedure successfully completed.

SQL> select name, owner, created, statement_count from dba_sqlset;

NAME       OWNER      CREATED      STATEMENT_COUNT
---------- ---------- ------------ ---------------
SQL_TEST   SPA        18-APR-14                  0

-- Load SQL from the cursor cache into the tuning set
DECLARE
  mycur dbms_sqltune.sqlset_cursor;
BEGIN
  OPEN mycur FOR
    SELECT VALUE(p)
      FROM TABLE(dbms_sqltune.select_cursor_cache(
                   basic_filter   => 'parsing_schema_name IN (''ORAADMIN'')',
                   attribute_list => 'ALL')) p;
  dbms_sqltune.load_sqlset(sqlset_name     => 'sql_test',
                           populate_cursor => mycur,
                           load_option     => 'MERGE');
  CLOSE mycur;
END;
/
For more information about collection, see: How to Load Queries into a SQL Tuning Set (STS) (Document ID 1271343.1)
2. Create a staging table, pack the SQL Tuning Set into it, export it, and transfer it to the test database
This step is relatively simple, but note that if you have a large number of cursors, the conversion process is prone to ORA-01555 errors. We recommend setting undo_retention to a larger value.
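As a rough sketch (the 6-hour value below is only an assumption; size it to how long the pack step actually runs in your system):

```sql
-- Check the current setting, then raise undo_retention before packing.
show parameter undo_retention
alter system set undo_retention = 21600;  -- 6 hours; illustrative value only
```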
-- Create the staging table (do not create it under SYS)
BEGIN
  DBMS_SQLTUNE.create_stgtab_sqlset(table_name      => 'SQLSET_TAB',
                                    schema_name     => 'SPA',
                                    tablespace_name => 'SYSAUX');
END;
/

-- Pack the tuning set into the staging table
BEGIN
  DBMS_SQLTUNE.pack_stgtab_sqlset(sqlset_name          => 'sql_test',
                                  sqlset_owner         => 'SPA',
                                  staging_table_name   => 'SQLSET_TAB',
                                  staging_schema_owner => 'SPA');
END;
/
After converting to the staging table, we can deduplicate again. You can also delete unneeded cursors based on MODULE.
delete from spa.sqlset_tab a
 where rowid != (select max(rowid)
                   from spa.sqlset_tab b
                  where a.force_matching_signature = b.force_matching_signature
                    and a.force_matching_signature <> 0);

delete from spa.sqlset_tab where module = 'PL/SQL Developer';
3. Import the staging table and unpack its data into the SQL Tuning Set
In this step, we move the exported staging table to the test platform, import it, and convert it back into a SQL Tuning Set on 11g.
-- Import the staging table into the test system
export NLS_LANG=...   # set to match the source database; the original value was lost
imp spa/spa fromuser=spa touser=spa file=/home/oracle/spa/SQLSET_TAB.dmp feedback=100

-- Create the SQL tuning set
SQL> connect spa/spa
Connected.
SQL> exec DBMS_SQLTUNE.create_sqlset(sqlset_name => 'sql_test');
PL/SQL procedure successfully completed.

-- Unpack the staging table into the tuning set
BEGIN
  DBMS_SQLTUNE.unpack_stgtab_sqlset(sqlset_name          => 'sql_test',
                                    sqlset_owner         => 'SPA',
                                    replace              => TRUE,
                                    staging_table_name   => 'SQLSET_TAB',
                                    staging_schema_owner => 'SPA');
END;
/
PL/SQL procedure successfully completed.
If the name or owner of the SQL set differs between your source and target, use the remap_stgtab_sqlset procedure to remap them before unpacking.
exec dbms_sqltune.remap_stgtab_sqlset(old_sqlset_name =>'sql_test_aaa',old_sqlset_owner => 'aaa', new_sqlset_name => 'sql_test',new_sqlset_owner => 'SPA', staging_table_name => 'SQLSET_TAB',staging_schema_owner => 'SPA');
For more information about how to import and export a SQLSET, see: How to Move an SQL Tuning Set from One Database to Another (Document ID 751068.1)
4. Create an SPA task and generate the 10g and 11g trials in the 11g database
In this step, be sure to check whether the test database contains any database links; if so, drop them to avoid unintended operations against other databases. Also, generating the 11g trial can be slow, so it is best to run it from a script in the background.
-- Create the SPA task
var tname varchar2(30);
var sname varchar2(30);
exec :sname := 'sql_test';
exec :tname := 'spa_test';
exec :tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => :sname, task_name => :tname);

-- Generate the 10g trial from the statistics stored in the SQL set
BEGIN
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name      => 'spa_test',
                                   execution_type => 'CONVERT SQLSET',
                                   execution_name => 'convert_10g');
END;
/

-- Flush the shared pool and buffer cache
alter system flush shared_pool;
alter system flush buffer_cache;

-- Generate the 11g trial by test-executing the statements
BEGIN
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name      => 'spa_test',
                                   execution_type => 'TEST EXECUTE',
                                   execution_name => 'exec_11g');
END;
/
5. Execute the comparison task and generate the SPA report;
We can compare along three dimensions: elapsed_time, cpu_time, and buffer_gets.
-- Compare by elapsed_time
BEGIN
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name        => 'spa_test',
                                   execution_type   => 'COMPARE PERFORMANCE',
                                   execution_name   => 'compare_elapsed_time',
                                   execution_params => dbms_advisor.arglist('execution_name1', 'convert_10g',
                                                                            'execution_name2', 'exec_11g',
                                                                            'comparison_metric', 'elapsed_time'));
END;
/

-- Compare by cpu_time
BEGIN
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name        => 'spa_test',
                                   execution_type   => 'COMPARE PERFORMANCE',
                                   execution_name   => 'compare_cpu_time',
                                   execution_params => dbms_advisor.arglist('execution_name1', 'convert_10g',
                                                                            'execution_name2', 'exec_11g',
                                                                            'comparison_metric', 'cpu_time'));
END;
/

-- Compare by buffer_gets
BEGIN
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name        => 'spa_test',
                                   execution_type   => 'COMPARE PERFORMANCE',
                                   execution_name   => 'compare_buffer_gets_time',
                                   execution_params => dbms_advisor.arglist('execution_name1', 'convert_10g',
                                                                            'execution_name2', 'exec_11g',
                                                                            'comparison_metric', 'buffer_gets'));
END;
/

-- Generate the SPA reports
set trimspool on
set trim on
set pages 0
set long 999999999
set linesize 1000

spool spa_report_elapsed_time.html
SELECT dbms_sqlpa.report_analysis_task('spa_test', 'html', 'all', 'all',
                                       top_sql        => 300,
                                       execution_name => 'compare_elapsed_time')
  FROM dual;
spool off

spool spa_report_cpu_time.html
SELECT dbms_sqlpa.report_analysis_task('spa_test', 'html', 'all', 'all',
                                       top_sql        => 300,
                                       execution_name => 'compare_cpu_time')
  FROM dual;
spool off

spool spa_report_buffer_time.html
SELECT dbms_sqlpa.report_analysis_task('spa_test', 'html', 'all', 'all',
                                       top_sql        => 300,
                                       execution_name => 'compare_buffer_gets_time')
  FROM dual;
spool off

spool spa_report_errors.html
SELECT dbms_sqlpa.report_analysis_task('spa_test', 'html', 'errors', 'summary') FROM dual;
spool off

spool spa_report_unsupport.html
SELECT dbms_sqlpa.report_analysis_task('spa_test', 'html', 'unsupported', 'all') FROM dual;
spool off
6. Analyze SQL statements with degraded performance;
After the reports are generated, five in total, they need to be analyzed one by one. The elapsed_time, cpu_time, and buffer_gets reports show the SQL statements whose performance regressed. Some statements regress in cpu time, some in buffer gets, and some in all three dimensions; all of these need to be checked. For a regressed statement the execution plan may or may not have changed. Finding the reasons behind a plan change requires a deep understanding of the SQL optimizer, statistics, and related areas.
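To focus on just the regressed statements, the report can also be generated at the 'regressed' level. This is a sketch reusing the task and execution names from the steps above:

```sql
-- Sketch: report only the statements flagged as regressed by the
-- elapsed_time comparison.
spool spa_report_regressed.html
SELECT dbms_sqlpa.report_analysis_task('spa_test', 'html', 'regressed', 'all',
                                       top_sql        => 300,
                                       execution_name => 'compare_elapsed_time')
  FROM dual;
spool off
```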
The other two reports cover errors and unsupported statements. We should look through them, but typically the errors are things like invalid ROWIDs caused by data differences, which do not deserve much attention, because not every statement can be analyzed accurately, and some INSERT statements are unsupported. We only need to analyze the problems affecting the majority of statements.
Reference:
How to Load Queries into a SQL Tuning Set (STS) (Document ID 1271343.1)
How to Move an SQL Tuning Set from One Database to Another (Document ID 751068.1)
Oracle® Database Real Application Testing User's Guide 11g Release 2 (11.2)
Original article: 11g SPA (SQL Performance Analyze) for upgrade testing. Thanks to the author for sharing.