Oracle Performance Tuning (AWR)


I. AWR report

AWR generates its report by comparing the statistics collected at two snapshots, and the resulting report contains multiple sections, much like a Statspack report. However, an AWR report can be generated in either TXT or HTML format; HTML is easier to read, while TXT is more portable (it can be viewed even on machines without a browser).

Those who have used Statspack will remember that its reports are generated with the $oracle_home/rdbms/admin/spreport.sql script. For AWR the procedure is basically the same, but there are several scripts for generating reports, including:

    • awrrpt.sql: generates a statistics report for the specified snapshot interval (an AWR report for the whole database);
    • awrrpti.sql: generates a statistics report for a specified database instance and snapshot interval (an AWR report for one instance);
    • awrsqrpt.sql: generates a report for a specified SQL statement (identified by its SQL Id) over the specified snapshot interval (an AWR report for a single SQL statement);
    • awrsqrpi.sql: generates a report for a specified SQL statement on a specified database instance over the specified snapshot interval;
    • awrddrpt.sql: generates a comparison report for two different time periods;
    • awrddrpi.sql: generates a comparison report for two different time periods on a specified database instance;
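
Besides these interactive scripts, the same report can be produced programmatically through the DBMS_WORKLOAD_REPOSITORY package. A minimal sketch (the DBID, instance number, and snapshot IDs are placeholders taken from the examples later in this article):

-- Return the HTML AWR report as rows of text for snapshots 4123 through 4161
select output
  from table(dbms_workload_repository.awr_report_html(3437504306, 1, 4123, 4161));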

So how do you generate an AWR report?

1. Enable AWR

By default, Oracle enables database statistics collection (that is, AWR is enabled). Whether AWR is enabled is controlled by the initialization parameter STATISTICS_LEVEL. The SHOW PARAMETER command displays its current value:

SQL> show parameter statistics_level

The output is as follows:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
statistics_level                     string      TYPICAL

If the value of STATISTICS_LEVEL is TYPICAL or ALL, AWR is enabled; if the value is BASIC, AWR is disabled.

The behavior of AWR is controlled by the parameter STATISTICS_LEVEL, which has three possible values:

*basic: AWR statistics and derived metrics are turned off; only a small amount of database statistics is collected.

*typical: the default. Only a subset of statistics is collected, representing what is typically needed to monitor an Oracle database.

*all: all possible statistics are captured, along with some operating system information. This level should be used only in rare cases, for example when more detailed SQL diagnostic information is needed.
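
If AWR collection has been turned off (STATISTICS_LEVEL = BASIC), it can be re-enabled by setting the parameter back to its default. A minimal sketch (SCOPE=BOTH is an assumption; adjust it if you only want the running instance or only the spfile affected):

-- Re-enable AWR collection at the default level
alter system set statistics_level = typical scope = both;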

2. Run the script awrrpt.sql

SQL> @/oracle/product/10.2.0/db_1/rdbms/admin/awrrpt.sql

3. Select the type of report

Enter value for report_type: html

4. List the snapshots generated in the last two days (mainly to obtain the snapshot IDs)

Enter value for num_days: 2

                                                          Snap
Instance     DB Name         Snap Id  Snap Started        Level
------------ ------------ ---------- ------------------- -----
TELEMT       telemt             4123  2011 00:00              1
                                4124  April 2011 01:01        1
....
                                4160  2011 13:00              1
                                4161  2011 14:00              1

5. Specify the begin and end snapshot IDs

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 4123
Begin Snapshot Id specified: 4123

Enter value for end_snap: 4161
End   Snapshot Id specified: 4161

In this example the begin snapshot ID is 4123 and the end snapshot ID is 4161.

6. Specify the name of the report

Enter value for report_name: myreport.html

II. Day-to-day analysis of the AWR report

(1) SQL ordered by Elapsed Time

Records the top SQL by total elapsed time (note that this is the sum over all executions within the monitored interval, not the time of a single execution; Elapsed Time = CPU Time + Wait Time).

       Elapsed Time (s): total elapsed time of the SQL statement; the list is sorted by this field. Note that this is not the runtime of a single execution but the sum over all executions within the monitored interval, in seconds. Elapsed Time = CPU Time + Wait Time.
       CPU Time (s): total CPU time consumed by the SQL statement; always less than or equal to Elapsed Time, in seconds.
       Executions: total number of executions of the SQL statement within the monitored interval.
       Elap per Exec (s): average elapsed time per execution, in seconds.
       % Total DB Time: the statement's elapsed time as a percentage of total database time.
       SQL Id: the ID of the SQL statement; click it to jump to the SQL detail list further down, and use the browser's Back button to return.
       SQL Module: shows how the SQL connected to the database. If the connection comes from SQL*Plus or PL/SQL, it is usually someone debugging a program; SQL executed by the foreground application typically has an empty module.
       SQL Text: a short excerpt of the SQL; click the SQL Id for the full text.
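
The same ranking can also be pulled straight from the underlying AWR tables rather than from the formatted report. A minimal sketch against dba_hist_sqlstat (the snapshot IDs 4123 and 4161 are the ones used in the example above; the delta columns hold per-snapshot increments in microseconds):

-- Top 10 SQL by total elapsed time between snapshots 4123 and 4161
select *
  from (select sql_id,
               round(sum(elapsed_time_delta) / 1e6, 2) as elapsed_sec,
               round(sum(cpu_time_delta)     / 1e6, 2) as cpu_sec,
               sum(executions_delta)                   as executions
          from dba_hist_sqlstat
         where snap_id > 4123
           and snap_id <= 4161
         group by sql_id
         order by elapsed_sec desc)
 where rownum <= 10;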

(2) SQL ordered by CPU time

Records the top SQL by total CPU time (again, the sum of CPU time over all executions within the monitored interval, not the time of a single execution).

(3) SQL ordered by Gets

Records the top SQL by total buffer gets (logical I/O) (again, the sum over all executions within the monitored interval, not the total of a single execution).

(4) SQL ordered by Reads

Records the top SQL by total disk reads (physical I/O) (again, the sum of physical reads over all executions within the monitored interval, not the reads of a single execution).

(5) SQL ordered by executions

Records the top SQL sorted by number of executions; the figure shown is the number of executions within the monitored interval.

(6) SQL ordered by Parse Calls

Records the top SQL by number of parse calls (soft parses). Explaining soft parsing and hard parsing properly would require a longer discussion of how Oracle processes SQL statements, which is beyond the scope of this article.

(7) SQL ordered by sharable Memory

Records the top SQL by the amount of library cache memory occupied. Sharable Mem (b): size occupied in the library cache, in bytes.

(8) SQL ordered by Version Count

Records the top SQL by the number of open child cursors (versions).

(9) SQL ordered by Cluster Wait time

Records the top SQL by cluster wait time.

III. Snapshots (SNAPSHOT)

Every hour, the memory monitor process (MMON) automatically collects statistics and stores them in the workload repository; each such sample is a snapshot. To save space, the collected data is automatically purged after 7 days. Both the snapshot frequency and the retention period can be changed by the user.

-- View the snapshot frequency and retention period (default: sample every hour, retain the samples for 7 days)
select * from dba_hist_wr_control;
select dbid, snap_interval, snapint_num, retention from wrm$_wr_control;

-- Change the snapshot frequency and retention period (both values in minutes)
exec dbms_workload_repository.modify_snapshot_settings(interval => 60, retention => 7*24*60);


-- Manually create a snapshot
begin
  dbms_workload_repository.create_snapshot();
end;
/

-- Manually delete a range of snapshots
begin
  dbms_workload_repository.drop_snapshot_range(low_snap_id => 3965, high_snap_id => 3966, dbid => 3437504306);
end;
/
    
-- See how many snapshot samples there are
select count(1) from wrh$_active_session_history;
select count(1) from dba_hist_active_sess_history;

-- List the existing snapshots
select * from dba_hist_snapshot;
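
When choosing the begin and end snapshot IDs for a report, dba_hist_snapshot can also be filtered by time rather than scanning the whole list. A small sketch (the date literals below are placeholders for your own window):

-- Find the snapshot IDs that cover a given time window
select snap_id, begin_interval_time, end_interval_time
  from dba_hist_snapshot
 where begin_interval_time >= timestamp '2011-03-07 00:00:00'
   and end_interval_time   <= timestamp '2011-03-07 14:00:00'
 order by snap_id;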

IV. Where the sampled data is stored

The sampled data is stored in the SYSAUX tablespace, in tables named with the formats WRM$_* and WRH$_*. The former store metadata (such as the databases examined and the snapshots taken), while the latter hold the actual collected statistics.

select table_name from dba_tables where table_name like 'WRM$%';

TABLE_NAME
------------------------------
WRM$_WR_CONTROL
WRM$_SNAP_ERROR
WRM$_SNAPSHOT
WRM$_DATABASE_INSTANCE
WRM$_BASELINE

When the SYSAUX tablespace fills up, AWR automatically overwrites the oldest information and records a message in the alert log:

ORA-1688: unable to extend table SYS.WRH$_ACTIVE_SESSION_HISTORY partition WRH$_ACTIVE_3533490838_1522 in tablespace SYSAUX

select table_name from dba_tables where table_name like 'WRH$%';
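
To check how much of SYSAUX the AWR data is actually consuming, the occupants view can be queried. A small sketch ('SM/AWR' is the occupant name Oracle uses for the AWR repository):

-- Space used by AWR inside the SYSAUX tablespace, in MB
select occupant_name, round(space_usage_kbytes / 1024) as used_mb
  from v$sysaux_occupants
 where occupant_name = 'SM/AWR';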

V. Setting the Baseline

Creating a baseline

A baseline is a mechanism that lets you tag the snapshots taken at important points in time. A baseline is defined between a pair of snapshots, identified by their snapshot IDs, and each baseline refers to exactly one pair of snapshots. A typical performance-tuning exercise starts by collecting a baseline set of metrics, making the changes, and then capturing another baseline; the two sets can then be compared to check the effect of the changes. In AWR, the same kind of comparison can be performed on snapshots that have already been captured.

Suppose a job named apply_interest runs between 2:00 and 4:00, corresponding to snapshot IDs 4150 to 4151. We can define a baseline named apply_interest_1 over these snapshots:

SQL> exec dbms_workload_repository.create_baseline(4150, 4151, 'apply_interest_1');


This operation marks snapshots 4150 through 4151 as part of the baseline defined above. To view the existing baselines:

SQL> select * from dba_hist_baseline;

      DBID BASELINE_ID BASELINE_NAME        START_SNAP_ID START_SNAP_TIME             END_SNAP_ID END_SNAP_TIME
---------- ----------- -------------------- ------------- --------------------------- ----------- ---------------------------
3437504306           1 apply_interest_1              4150 07-MAR-11 03.00.47.627 AM          4151 07-MAR-11 04.00.12.567 AM

SQL> select * from wrm$_baseline;

      DBID BASELINE_ID BASELINE_NAME                  START_SNAP_ID END_SNAP_ID
---------- ----------- ------------------------------ ------------- -----------
3437504306           1 apply_interest_1                        4150        4151


After some tuning steps, we can create another baseline, say apply_interest_2 (covering 2 PM to 4 PM), and then compare metrics only for the snapshots that belong to these two baselines:

SQL> exec dbms_workload_repository.create_baseline(4162, 4163, 'apply_interest_2');

Separating snapshots into a few named sets like this makes it easier to study the effect of the tuning changes on performance.

Deleting a baseline

After the analysis, use drop_baseline() to delete the baseline; the associated snapshots can either be kept or cascade-deleted along with it. Note also that when the purge routine starts removing old snapshots, snapshots that belong to a baseline are not purged, so they remain available for further analysis.

To delete a baseline while keeping its snapshots:

SQL> exec dbms_workload_repository.drop_baseline(baseline_name => 'apply_interest_1', cascade => false);

SQL> select * from wrh$_active_session_history where snap_id in (4150, 4151);

   SNAP_ID       DBID INSTANCE_NUMBER  SAMPLE_ID SAMPLE_TIME                 SESSION_ID ...
      4150 3437504306               1   14900840 07-MAR-11 02.55.02.038 AM          162 ...
      4150 3437504306               1   14900200 07-MAR-11 02.44.21.942 AM          165 ...
....
      4151 3437504306               1   14901980 07-MAR-11 03.14.02.213 AM          165 ...
      4151 3437504306               1   14901790 07-MAR-11 03.10.52.183 AM          165 ...
      4151 3437504306               1   14901490 07-MAR-11 03.05.52.138 AM          167 ...

-- Cascade delete (drop the baseline together with its snapshots)
SQL> exec dbms_workload_repository.drop_baseline(baseline_name => 'apply_interest_2', cascade => true);

SQL> select * from wrh$_active_session_history where snap_id in (4162, 4163);

no rows selected

