I searched the internet for quite a while without finding a suitable tool for tracking Oracle's SQL over a period of time.
Our scenario is automated testing (RFT) + Fiddler to capture requests + back-end SQL tracking, then analyzing the slow requests and slow SQL from the results. An AWR report could do this too, but generating an AWR report every time is cumbersome; I wanted something simpler that points directly at the SQL that runs slowly or consumes CPU.
The tool does not capture SQL in real time; it captures the SQL executed during a given period. The principle is very simple, simpler than how AWR gets its SQL, and in my tests the results are consistent with the AWR data.
The end result: turn tracing on before the operation, turn it off after the operation finishes, and the tool reports the SQL executed in that window together with related metrics, including disk reads, CPU time, execution time, parse count, and execution count. The results can be saved to the database or exported to Excel.
View Introduction
View used: V$SQLAREA.
This view holds recently executed SQL. If you want to reproduce the AWR effect, you can additionally query the history views such as DBA_HIST_SQLSTAT.
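For example, a minimal AWR-style sketch (not this tool's query; it assumes the Diagnostics Pack license that the DBA_HIST_* views require, and the bind variable names are illustrative) could be assembled the same way as the core SQL below:

string awrSql =
    "SELECT s.sql_id, t.sql_text, s.executions_delta," +
    " ROUND(s.elapsed_time_delta / 1000000, 2) AS elapsed_sec," +
    " ROUND(s.cpu_time_delta / 1000000, 2) AS cpu_sec, s.disk_reads_delta" +
    " FROM dba_hist_sqlstat s" +
    " JOIN dba_hist_sqltext t ON t.sql_id = s.sql_id" +
    " WHERE s.snap_id BETWEEN :begin_snap AND :end_snap";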
Columns used here:
SQL_FULLTEXT: the complete SQL statement. It is a LOB column, so read it with DBMS_LOB.SUBSTR(sql_fulltext, 2000, 1) to get the first 2000 characters;
PARSING_SCHEMA_NAME: the user that executed the SQL;
MODULE: the application that executed the SQL; for IIS it is w3wp.exe, for PL/SQL Developer it is PL/SQL Developer;
PARSE_CALLS: number of parses, counting both soft and hard parses;
EXECUTIONS: number of executions;
BUFFER_GETS: number of buffer (memory) reads;
DISK_READS: number of disk reads (combined with buffer gets, this gives roughly the buffer cache hit ratio, 1 - disk_reads / buffer_gets);
CPU_TIME: CPU time, in microseconds;
ELAPSED_TIME: total execution time, in microseconds;
LAST_ACTIVE_TIME: the time the SQL was last active.
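To see these columns together, here is a quick sanity-check sketch (not the tool's final query) built the same way as the core SQL below:

string peekSql =
    "SELECT parsing_schema_name, module," +
    " DBMS_LOB.SUBSTR(sql_fulltext, 2000, 1) AS sql_text," +
    " parse_calls, executions, buffer_gets, disk_reads," +
    " cpu_time, elapsed_time, last_active_time" +
    " FROM v$sqlarea WHERE rownum <= 20";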
Since the data in V$SQLAREA is cumulative, the metrics from the previous snapshot must be subtracted from the current ones.
Implementation principle
Before the operation starts, back up V$SQLAREA into oldtable;
record the current system time into sysdate;
after the operation ends, back up V$SQLAREA into newtable (a snapshot sketch follows below).
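A minimal sketch of the start/stop steps, assuming Oracle.ManagedDataAccess.Client and an already-open connection; the Snapshot helper and the table names are illustrative, not the tool's actual API (it also requires SELECT privilege on V$SQLAREA and CREATE TABLE):

using Oracle.ManagedDataAccess.Client;

// Copies the current contents of V$SQLAREA into a private snapshot table.
static void Snapshot(OracleConnection conn, string tableName)
{
    using (var cmd = new OracleCommand(
        "CREATE TABLE " + tableName + " AS SELECT * FROM v$sqlarea", conn))
    {
        cmd.ExecuteNonQuery();
    }
}

// Start of trace: Snapshot(conn, "oldtable");
//                 string sysdate = DateTime.Now.ToString("yyyy/MM/dd HH:mm:ss");
// End of trace:   Snapshot(conn, "newtable");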
With the preparation done, the next step is to combine the data. Rows in newtable with last_active_time > sysdate are the SQL executed during the trace, and each metric is the newtable value minus the oldtable value (join the two tables on hash_value and address; sql_id should also work).
Note that newly appeared SQL exists only in newtable, so use a LEFT JOIN and NVL to convert the NULLs to 0. If you care about other metrics, add columns as needed.
To avoid interference from other activity, you can display only the SQL that matches your filters, such as parsing_schema_name = login user or module = 'w3wp.exe'.
Core SQL:
sql = "SELECT * from" (select N.parsing_schema_name as schema,n.module as module, Dbms_lob. SUBSTR (n.sql_fulltext,2000,1) as SQL_TEXT,N.PARSE_CALLS-NVL (o.parse_calls,0) as PARSE_CALLS,N.BUFFER_GETS-NVL ( o.buffer_gets,0) as BUFFER_GETS,N.DISK_READS-NVL (o.disk_reads,0) as DISK_READS,N.EXECUTIONS-NVL (o.executions,0) as Executions,round ((N.CPU_TIME-NVL (o.cpu_time,0))/1000000,2) as Cpu_time,round ((N.CPU_TIME-NVL (o.cpu_time,0))/(( N.EXECUTIONS-NVL (o.executions,0)) *1000000), 2) as Cpu_time_per_exe,round ((N.ELAPSED_TIME-NVL (o.elapsed_time,0))/( (N.EXECUTIONS-NVL (o.executions,0)) *1000000), 2) as Elapsed_time_per_exe "+
"From" +newtable+ "n" +
"Left JOIN" +oldtable+ "o o.hash_value = n.hash_value and o.address = n.address" +
"WHERE n.last_active_time > To_date ('" + sysdate + "', ' yyyy/mm/dd hh24:mi:ss ')" +
"and (N.EXECUTIONS-NVL (o.executions,0)) > 0" +
"ORDER by Elapsed_time_per_exe DESC" WHERE rownum<=20 ";
Final result (C# version):
[Screenshot: Oracle SQL statement tracking and performance analysis tool]