Oracle Programming Art Study Notes (7) - Oracle Files: Trace Files

Source: Internet
Author: User
Tags: dedicated server

 

If a system is running inefficiently, a good approach is to trace the user session and then use the tkprof tool, with its sorting options, to format the output and identify the problematic SQL statements.

For example, first use the top command at the OS level to find the PID of the process currently consuming the most CPU, say 9999;

Then find the corresponding sid and serial# in the database based on that PID:

select s.sid, s.serial# from v$session s, v$process p where s.paddr = p.addr and p.spid = '9999';

Next, enable tracing with exec dbms_monitor.session_trace_enable(sid, serial#);

Finally, format and inspect the trace output with tkprof.

Enable Trace file output

You can enable trace file output in any of the following ways (the ALTER SESSION privilege is required):

1) alter session/system set sql_trace = true

2) exec dbms_monitor.session_trace_enable/dbms_monitor.database_trace_enable

3) alter session set events '10046 trace name context forever, level 12'

 

Location of the Trace file

· If the connection uses a dedicated server process, the trace file is generated in the directory specified by the USER_DUMP_DEST parameter.

· If the connection uses a shared server, the trace file is generated in the directory specified by the BACKGROUND_DUMP_DEST parameter.

Before Oracle 11g, you can use the following statement to obtain the path of the trace file:

select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
from v$process a, v$session b, v$parameter c, v$instance d
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name = 'user_dump_dest';
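As a plain illustration of the concatenation above, here is a minimal Python sketch that builds the same file name; the directory, instance name, and spid values below are hypothetical examples, not taken from any real system:

```python
def trace_file_path(user_dump_dest, instance_name, spid):
    # Mirrors the SQL concatenation:
    # value || '/' || instance_name || '_ora_' || spid || '.trc'
    return f"{user_dump_dest}/{instance_name}_ora_{spid}.trc"

# Hypothetical example values for illustration only.
path = trace_file_path("/u01/app/oracle/admin/orcl/udump", "orcl", "9999")
print(path)  # /u01/app/oracle/admin/orcl/udump/orcl_ora_9999.trc
```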

Starting with Oracle 11g, you can query v$diag_info to obtain the trace file path:

select * from v$diag_info where name = 'Default Trace File';

 

You can use the TRACEFILE_IDENTIFIER parameter to add a unique identifier string to the trace file name. For example:

alter session set tracefile_identifier = 'my_trace_file';

In this way, the generated Trace file name will end with my_trace_file.trc.
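As a small illustration, here is a Python sketch of the resulting naming pattern; the instance name and spid are made-up values, and the exact pattern can vary between releases:

```python
def trace_file_name(instance, spid, identifier=None):
    # Default name:                    <instance>_ora_<spid>.trc
    # With tracefile_identifier set:   <instance>_ora_<spid>_<identifier>.trc
    suffix = f"_{identifier}" if identifier else ""
    return f"{instance}_ora_{spid}{suffix}.trc"

# Hypothetical example values for illustration only.
print(trace_file_name("orcl", "9999", "my_trace_file"))
# orcl_ora_9999_my_trace_file.trc
```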

 

Analyze the Trace file using tkprof

You can use the tkprof tool to analyze the trace file and produce a clearer, better organized report. tkprof is located under $ORACLE_HOME/bin.

 

1) Command Format

Command Format:

tkprof tracefile outputfile [explain=] [table=] [print=] [insert=] [sys=] [sort=]

Parameter description:

tracefile: the trace file to analyze

outputfile: the formatted output file

explain=user/password@connectstring

table=schema.tablename

These two parameters are used together. The explain parameter instructs tkprof to provide an execution plan for each SQL statement found in the trace file.

It does this by connecting to the database and running EXPLAIN PLAN for each SQL statement that appears in the trace file, writing the plans to outputfile.

The specified table is handed to the EXPLAIN PLAN statement as its working table.

print=n: only the first n SQL statements are listed; the default is unlimited. This option is meaningful only together with sort.

insert=filename: generates a SQL script that, when run, inserts the collected statistics into database tables.

sys=no: SQL statements run as the SYS user (for example, recursive data dictionary queries issued during the parse phase) are not written to the output file.

record=filename: records the non-recursive SQL statements found in the trace file into the specified file.

waits=yes|no: whether to summarize wait events. The default is yes.

aggregate=yes|no: whether to aggregate the execution information of identical SQL statements. The default is yes.

sort=option: sets the sort options; separate multiple options with commas. By default, statements appear in the order in which they were found in the trace file. Run tkprof without arguments to see the available options.
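The options above are plain name=value tokens on the command line. The following hypothetical Python helper simply assembles such an invocation string; the file names are placeholders, and this is only a sketch of the command shape, not part of tkprof itself:

```python
def tkprof_command(tracefile, outputfile, **options):
    # Shape: tkprof tracefile outputfile [sys=...] [sort=...] [print=...] ...
    parts = ["tkprof", tracefile, outputfile]
    for name, value in options.items():
        parts.append(f"{name}={value}")
    return " ".join(parts)

# Placeholder file names for illustration only.
cmd = tkprof_command("orcl_ora_9999.trc", "out.txt",
                     sys="no", sort="prsela,exeela,fchela")
print(cmd)
```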

 

For example:

tkprof <tracefile> <outputfile> sys=no sort=prsela,exeela,fchela

prsela: elapsed time spent parsing

exeela: elapsed time spent executing

fchela: elapsed time spent fetching
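When several sort options are given, tkprof orders statements by the sum of the chosen values, largest first. A rough Python sketch of that ordering, using invented per-statement timings purely for illustration:

```python
# Invented example data: elapsed seconds per phase for three statements.
statements = [
    {"sql": "select ...", "prsela": 0.01, "exeela": 0.20, "fchela": 1.50},
    {"sql": "update ...", "prsela": 0.00, "exeela": 3.71, "fchela": 0.00},
    {"sql": "insert ...", "prsela": 0.02, "exeela": 0.05, "fchela": 0.00},
]

# Order by the sum of the chosen sort keys, descending,
# mimicking sort=prsela,exeela,fchela.
ranked = sorted(statements,
                key=lambda s: s["prsela"] + s["exeela"] + s["fchela"],
                reverse=True)
print([s["sql"] for s in ranked])  # ['update ...', 'select ...', 'insert ...']
```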

 

2) output result format

The output begins with header information.

For each SQL statement, the following is then provided: the SQL text, execution statistics, parse information, the execution plan, and wait events.

The execution plan and wait events are optional; they appear only if the corresponding information is present in the trace file.

For example, the following output:

********************************************************************************

SQL ID: 0c07h414zr55p
Plan Hash: 1968341081
update emp set sal = 2451
where
 empno = 7782

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.01       0.00          0          0          0           0
Execute      2      0.00       3.71          0          3          7           2
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       3.72          0          3          7           2

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 86  (TONY)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  UPDATE  EMP (cr=1 pr=0 pw=0 time=0 us)
      1   INDEX UNIQUE SCAN EMP_PK (cr=1 pr=0 pw=0 time=0 us cost=0 size=26 card=1)(object id 73464)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  UPDATE STATEMENT   MODE: ALL_ROWS
      0   UPDATE OF 'EMP'
      1    INDEX   MODE: ANALYZED (UNIQUE SCAN) OF 'EMP_PK' (INDEX (UNIQUE))

********************************************************************************

 

The execution statistics include the following columns:

count: the number of database calls executed.

cpu: the CPU time, in seconds, spent processing the calls.

elapsed: the total time, in seconds, spent processing the database calls. If this value is higher than the CPU time, the wait events section that follows the execution statistics shows which resources or synchronization points were waited on.

disk: the number of blocks read physically. Note that this is the number of blocks, not the number of physical I/O operations; the latter is given in the wait events section. If this value is greater than the number of logical reads (disk > query + current), blocks were read from the temporary tablespace.

query: the number of blocks read logically from the cache in consistent mode. This type of logical read is typically used by queries.

current: the number of blocks read logically from the cache in current mode. Such reads are typically used by INSERT, DELETE, MERGE, and UPDATE statements.

rows: the number of rows processed. For a query, this is the number of rows fetched; for INSERT, DELETE, MERGE, and UPDATE statements, it is the number of rows affected.
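As a rough illustration, a call-statistics block like the one in the example can be parsed mechanically. This Python sketch sums the cpu and elapsed columns to show the gap that the wait events account for; the input text is copied from the example statistics:

```python
# Call-statistics lines as they appear in the tkprof example above.
stats_text = """\
Parse        2      0.01       0.00          0          0          0           0
Execute      2      0.00       3.71          0          3          7           2
Fetch        0      0.00       0.00          0          0          0           0
"""

columns = ("count", "cpu", "elapsed", "disk", "query", "current", "rows")
calls = {}
for line in stats_text.splitlines():
    name, *values = line.split()
    calls[name] = dict(zip(columns, map(float, values)))

# Elapsed time far exceeds CPU time here: the difference was spent waiting.
total_elapsed = sum(c["elapsed"] for c in calls.values())
total_cpu = sum(c["cpu"] for c in calls.values())
print(total_elapsed, total_cpu)
```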

 

The parse information starts with two lines, Misses in library cache during parse and Misses in library cache during execute, which give the number of hard parses that occurred during the parse and execute calls.

If no hard parse occurred during the execute calls, the line Misses in library cache during execute is absent.

Next come the optimizer mode and the user that parsed the SQL statement.

 

The execution plan is divided into two parts. The first part, known as Row Source Operation, is the execution plan written to the trace file when the cursor is closed while tracing is enabled. This means that if the application does not close cursors but reuses them, no new execution plans for those reused cursors are written to the trace file. The second part, called Execution Plan, is generated by tkprof when the explain parameter is specified. Since it is generated later, it does not necessarily match the first part; in case of inconsistency, the former is correct.

Both execution plans give, in the Rows column, the number of rows returned (not processed) by each operation in the plan.

For each row source operation, the following runtime statistics may also be provided:

cr is the number of blocks read logically in consistent mode.

pr is the number of blocks physically read from disk.

pw is the number of blocks physically written to disk.

time is the total elapsed time in microseconds. Note that the value is not always exact; sampling may be used to reduce the overhead of collecting it.

cost is the estimated cost of the operation. This value is available only as of Oracle 11g.

size is the estimated amount of data returned by the operation, in bytes. This value is available only as of Oracle 11g.

card is the estimated number of rows returned by the operation. This value is available only as of Oracle 11g.
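These annotations have a regular name=value shape, so they are easy to extract. A small Python sketch, using the index scan line from the example above as input:

```python
import re

# Row source line copied from the tkprof example above.
line = ("1  INDEX UNIQUE SCAN EMP_PK "
        "(cr=1 pr=0 pw=0 time=0 us cost=0 size=26 card=1)")

# Pull out every recognized name=value statistic as an integer.
stats = {k: int(v)
         for k, v in re.findall(r"(cr|pr|pw|time|cost|size|card)=(\d+)", line)}
print(stats)
```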

 

The end of the output file contains overall information about the trace file: first the trace file name, the version, and the value of the sort parameter used for the analysis; then the total number of sessions and the number of SQL statements.

 

 

From: NowOrNever
