1. In a project, run the following top SQL script in PL/SQL Developer:
-- Usage frequency
select sql_text, executions, sysdate
  from (select sql_text,
               executions,
               rank() over (order by executions desc) exec_rank
          from v$sql)
 where exec_rank <= 10;
Create a crontab job to run the preceding script regularly, and find the SQL statements executed most frequently during the current period. Take the first one (that is, the most frequently executed statement), copy it, and paste it into a new window. In this case the statement was:
select grp_addr
  from table_1
 where id = :B1
   and grp_id = (select max(grp_id)
                   from table_1
                  where id = :B1);
Press F5, and the execution plan is:
SELECT STATEMENT, GOAL = ALL_ROWS   cost = 4   cardinality = 1   bytes = 21
 TABLE ACCESS BY INDEX ROWID   object owner = ***   object name = ***   cost = 2   cardinality = 1   bytes = 21
  INDEX RANGE SCAN   object owner = ***   object name = ***   cost = 2   cardinality = 1
The cost is 4. Try to rewrite it with an analytic function:
select first_value(grp_addr) over (partition by id order by grp_id desc)
  from table_1
 where id = :B1;
Press F5, and the execution plan is:
SELECT STATEMENT, GOAL = ALL_ROWS   cost = 3   cardinality = 1   bytes = 21
 WINDOW SORT   cost = 3   cardinality = 1   bytes = 21
  TABLE ACCESS BY INDEX ROWID   object owner = ***   object name = ***   cost = 2   cardinality = 1   bytes = 21
It can be seen that the cost drops to 3, and the execution time drops by 0.011 seconds. Don't look down on this optimization: the statement runs millions of times a day, so the savings accumulate. Besides the most frequently executed SQL statements, we also need to optimize the statements with the worst performance; these can be found with a variant of the same top SQL script (a sketch follows) and dealt with one by one.
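A minimal sketch of such a variant, ordering v$sql by elapsed_time (disk_reads or buffer_gets can be substituted in the same way):

-- Top 10 SQL by elapsed time
select sql_text, executions, elapsed_time, sysdate
  from (select sql_text,
               executions,
               elapsed_time,
               rank() over (order by elapsed_time desc) elapsed_rank
          from v$sql)
 where elapsed_rank <= 10;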
2. Identify problems through wait events. Run the following statement to find the system's non-idle wait events:
select sid,
       p1,
       p1raw,
       p2,
       p2raw,
       p3,
       p3raw,
       wait_time,
       seconds_in_wait,
       state,
       event,
       sysdate
  from v$session_wait
 where event not in
       ('AQ Proxy Cleanup Wait', 'ASM background timer', 'DIAG idle wait',
        'EMON idle wait', 'KSV master wait', 'LNS ASYNC archive log',
        'LNS ASYNC dest activation', 'LNS ASYNC end of log',
        'LogMiner: client waiting for transaction',
        'LogMiner: slave waiting for activate message',
        'LogMiner: wakeup event for builder',
        'LogMiner: wakeup event for preparer',
        'LogMiner: wakeup event for reader', 'null event',
        'PX Deq Credit: need buffer', 'PX Deq Credit: send blkd',
        'PX Deq: Execute Reply', 'PX Deq: Execution Msg',
        'PX Deq: Par Recov Execute', 'PX Deq: Signal ACK',
        'PX Deq: Table Q Normal', 'PX Deq: Table Q Sample', 'PX Deque wait',
        'PX Idle Wait', 'Queue Monitor Shutdown Wait',
        'Queue Monitor Slave Wait', 'Queue Monitor Wait',
        'SQL*Net message from client', 'SQL*Net message to client',
        'SQL*Net more data from client',
        'Streams apply coord waiting for slave message',
        'Streams apply slave idle wait',
        'Streams apply slave waiting for coord message',
        'Streams capture process filter callback wait for rule',
        'Streams fetch slave waiting for txns',
        'Streams waiting for subscribers to catch up',
        'Streams AQ: RAC qmn coordinator idle wait',
        'Streams AQ: deallocate messages from Streams Pool',
        'Streams AQ: delete acknowledged messages',
        'Streams AQ: qmn coordinator idle wait',
        'Streams AQ: qmn slave idle wait',
        'Streams AQ: waiting for messages in the queue',
        'Streams AQ: waiting for time management or cleanup tasks',
        'Streams fetch slave: waiting for txns', 'class slave wait',
        'client message', 'dispatcher timer', 'gcs for action',
        'gcs remote message', 'ges remote message', 'i/o slave wait',
        'jobq slave wait', 'knlqdeq', 'lock manager wait for remote message',
        'master wait', 'null event', 'parallel query dequeue', 'pipe get',
        'pmon timer', 'queue messages', 'rdbms ipc message', 'slave wait',
        'smon timer', 'virtual circuit status', 'wait for activate message',
        'wait for unread message on broadcast channel',
        'wakeup event for builder', 'wakeup event for preparer',
        'wakeup event for reader', 'wakeup time manager');
A large number of 'db file sequential read' events are found, which indicates that some SQL may be optimizable on disk I/O. Schedule the above script with crontab and run it regularly to find the sessions that wait continuously (this is the tedious part, frankly), then run the following code to obtain the corresponding SQL:
select sql_text
  from v$sqltext_with_newlines st, v$session se
 where st.address = se.sql_address
   and st.hash_value = se.sql_hash_value
   and se.sid = :sid
 order by piece;
The SQL it found is:
select * from table_2 where acct_nbr = :B;
(It should be noted that sessions and wait events are both dynamic; the SID has to be caught while the operation on the other side is still running, otherwise it is gone.)
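As an aside, for 'db file sequential read' the P1 and P2 columns selected in the wait-event script are the file number and block number being read, so the segment involved can also be identified directly. A sketch, plugging in the recorded p1 and p2 values:

-- :file_nbr = P1 (file#), :block_nbr = P2 (block#) from v$session_wait
select owner, segment_name, segment_type
  from dba_extents
 where file_id = :file_nbr
   and :block_nbr between block_id and block_id + blocks - 1;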
The statement is simple, and the judgment is that an index is missing (in fact, ADDM and statspack lead to the same conclusion); creating the index speeds it up greatly. The customer pointed out that the table takes a certain volume of inserts that would be slowed down by the index. After monitoring for a while and asking the developers, it turned out that the daily insert volume into the table is small and there is no real-time requirement, so the recommendation was to imitate a copy-on-write style policy: during off-peak hours, drop the index, do the batch insert, then re-create the index (a sketch follows).
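A minimal sketch of that off-peak window, assuming the index is called idx_table_2_acct_nbr and the batch comes from a staging table stg_table_2 (both names are made up):

-- Off-peak maintenance window (hypothetical object names)
drop index idx_table_2_acct_nbr;

insert /*+ append */ into table_2
  select * from stg_table_2;
commit;

create index idx_table_2_acct_nbr on table_2 (acct_nbr) nologging;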
3. Use ADDM, which is available from 10g onward. This would have been much easier through the web OEM, but the customer refused to start the corresponding console (emctl start dbconsole), so it all had to be done from the command line.
Generate a DB snapshot. The parameter 'TYPICAL' means the snapshot is taken at the typical collection level; you can also pass 'ALL' to collect additional OS-level information:
begin
  dbms_workload_repository.create_snapshot('TYPICAL');
end;
/
Wait for a period of time (about 40 minutes; it must be longer than 30 minutes, otherwise the interval is too short), then run the code above again to generate a second snapshot.
Run: select * from dba_hist_snapshot a order by a.snap_id desc; to find the last two snapshots (that is, the two just generated), record their snap_id values, and run the following script:
declare
  task_name varchar2(30) := 'turning02';
  task_desc varchar2(30) := 'turning02';
  task_id   number;
begin
  dbms_advisor.create_task('ADDM', task_id, task_name, task_desc, null);
  dbms_advisor.set_task_parameter(task_name, 'START_SNAPSHOT', 5209);
  dbms_advisor.set_task_parameter(task_name, 'END_SNAPSHOT', 5212);
  dbms_advisor.set_task_parameter(task_name, 'INSTANCE', 1);
  dbms_advisor.set_task_parameter(task_name, 'DB_ID', ********);
  dbms_advisor.execute_task(task_name);
end;
/
******** is the globally unique DBID of your database; it can be found in v$database (select dbid from v$database). Then view the generated report:
select dbms_advisor.get_task_report('turning02', 'TEXT', 'ALL') from dual;
A lot of English will scroll across the screen; copy it into UltraEdit or a similar editor. The report is an analysis of what actually happened in the DB during that interval: which SQL statements are bad, which indexes are missing, whether there are physical hot blocks, and so on. Search for the keyword 'index' first (I had always used OEM, where exporting takes a single click; from the command line I did not know how to dump it into a file, embarrassingly) to see whether any index is recommended, then look for hints about problematic SQL statements. In short, ADDM is very convenient: it not only finds the problems, it also writes the fix scripts for you. You can copy the SQL statements out and check their execution time and plan.
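For the record, a sketch of dumping that report to a text file from SQL*Plus (the file name is arbitrary):

-- In SQL*Plus: write the ADDM report to a text file
set long 1000000 longchunksize 1000000 pagesize 0 linesize 200
spool addm_report.txt
select dbms_advisor.get_task_report('turning02', 'TEXT', 'ALL') from dual;
spool off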
To sum up the above: first keep the 10/90 principle in mind, that is, 90% of the performance problems come from 10% of the code, and 90% of DB performance problems come from SQL statements, so start with SQL. The three routes above all lead to Rome, and the code for finding the problems is similar. You can use the top SQL script, or use top in the operating system to spot an oracle process with high CPU usage, record its PID, and then run:
select p.pid, p.spid, s.sid, p.username, s.type,
       s.serial#, s.schemaname, s.osuser, s.machine,
       s.process, p.program, s.module, s.status,
       s.terminal, s.logon_time
  from v$process p, v$session s
 where p.addr = s.paddr
   and p.spid = '*****';
This shows which session is occupying the CPU; then enable an SQL trace on that session to locate the problematic SQL, which is straightforward (a sketch follows).
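A sketch of enabling the trace with dbms_monitor (10g onward); the values 123 and 4567 stand in for the SID and SERIAL# found with the query above, and the resulting trace file in udump can then be formatted with tkprof:

begin
  dbms_monitor.session_trace_enable(session_id => 123,
                                    serial_num => 4567,
                                    waits      => true,
                                    binds      => false);
end;
/
-- ...let the session run for a while, then stop the trace
begin
  dbms_monitor.session_trace_disable(session_id => 123, serial_num => 4567);
end;
/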
The principles of SQL optimization are: 1. make statements run as few times as possible; 2. make each execution consume as few system resources as possible. The first point requires understanding the business process very well; as in the example at the beginning, a large amount of system resources can be burned on 'dry runs'. The second point can be addressed by tuning the SQL statements themselves. The above is only a general approach, and none of the conclusions is reached overnight. Every script should be scheduled and run over a long period, with the results inserted into a table (or a log); I add sysdate to the script so the inserted rows can be browsed by time. The problematic SQL statements are not fixed all at once but one by one, and the several 'paths' cross-check one another: for example, the most problematic SQL reported by the wait events and by ADDM was basically the same, and after the index from step 2 was created, the problem statement disappeared from ADDM.
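As an illustration of that logging habit, here is a sketch of wrapping the step 1 top SQL script into such an insert (top_sql_log is a made-up table you would create yourself first):

-- create table top_sql_log (sample_time date, sql_text varchar2(1000), executions number);
insert into top_sql_log (sample_time, sql_text, executions)
  select sysdate, sql_text, executions
    from (select sql_text,
                 executions,
                 rank() over (order by executions desc) exec_rank
            from v$sql)
   where exec_rank <= 10;
commit;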
Database tuning plus SQL tuning can solve most of the problems, and the DB side should be touched as little as possible (after all, the customer's own DBA has face to save). The main difficulty lies in tables that change frequently, where creating an index is awkward; for those you have to consider adjusting the business logic or partitioning the table so that full-scan access costs as little as possible.
For example, a large table is often queried for data in a certain time range (for period-over-period comparison reports). Inserts into this table are frequent; an index already helps noticeably, but a range partition on the time column balances the two concerns even better. I also ran into a problem while partitioning: I wanted to put different partitions on different files, and different files on different I/O channels (counting one RAID group as one I/O channel), so that the physical I/O pressure is spread across as many I/O channels and disks as possible and the disks stay 'busy'. However, the customer's LVM layer hides the underlying physical layout, and it took some twists and turns to find out the physical architecture of the system before this could be resolved.
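A minimal sketch of the range-partition-by-time idea, with made-up table, column, and tablespace names (each tablespace would sit on a different I/O channel):

-- Hypothetical example: one partition per quarter, each in its own tablespace
create table rpt_detail (
  rpt_date   date,
  acct_nbr   number,
  amount     number
)
partition by range (rpt_date) (
  partition p2012q1 values less than (to_date('2012-04-01', 'yyyy-mm-dd')) tablespace ts_ch1,
  partition p2012q2 values less than (to_date('2012-07-01', 'yyyy-mm-dd')) tablespace ts_ch2,
  partition p2012q3 values less than (to_date('2012-10-01', 'yyyy-mm-dd')) tablespace ts_ch3,
  partition pmax    values less than (maxvalue)                            tablespace ts_ch4
);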
During the optimization work I also ran into an SQL statement that became more than 30% faster through 'predicate pushing'. In other words, these optimization opportunities have to be searched out carefully and solved one by one; the accumulated effect in the end is remarkable.
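For reference, 'predicate pushing' means letting a filter or join condition be applied inside a view or inline view instead of after the view is fully built. A sketch with made-up objects (table_3 and the inline view are hypothetical), using the PUSH_PRED hint:

-- PUSH_PRED asks the optimizer to push the join predicate into the view,
-- so the view is probed per driving row instead of being built in full.
select /*+ push_pred(v) */ t.id, v.grp_addr
  from table_1 t
  left join (select id, grp_addr from table_3) v
    on v.id = t.id
 where t.id = :B1;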