(I) Unreasonable full table scans on large tables
The V$SESSION_LONGOPS view records operations that have been running for more than 6 seconds, and in practice most of these long-running operations turn out to be full table scans!
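Such scans can be spotted directly in V$SESSION_LONGOPS; a minimal sketch (the view itself only records operations longer than about 6 seconds):

```sql
-- Find long-running table scans that are still in progress
SELECT sid, opname, target, sofar, totalwork, elapsed_seconds
FROM   v$session_longops
WHERE  opname LIKE 'Table Scan%'
AND    sofar < totalwork;   -- not yet finished
```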
(II) Poor statement sharing
This is common in OLTP systems. When the application does not use bind variables properly, a large number of nearly identical statements are generated, each of which must be hard-parsed (PARSE). This wastes shared pool memory and drives CPU utilization up.
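The difference can be sketched as follows; the `orders` table and the `:oid` bind variable are illustrative:

```sql
-- Literal values: each statement has distinct text, so each is hard-parsed
SELECT * FROM orders WHERE order_id = 1001;
SELECT * FROM orders WHERE order_id = 1002;

-- Bind variable (SQL*Plus syntax): one shared cursor serves all values
VARIABLE oid NUMBER
EXEC :oid := 1001
SELECT * FROM orders WHERE order_id = :oid;
```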
(III) Excessive sort operations
A useful principle: if the result does not need to be ordered, do not sort it.
Expensive multi-pass sorts in particular are usually tied to transaction design, missing indexes, and optimizer choices.
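Whether sorts are spilling to disk can be checked in V$SYSSTAT; these statistic names are standard:

```sql
-- 'sorts (disk)' should stay near zero relative to 'sorts (memory)'
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('sorts (memory)', 'sorts (disk)', 'sorts (rows)');
```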
(IV) A large number of recursive SQL statements
Recursive SQL is run internally as SYS, largely to manage storage space (for example, allocating extents).
It is common during large data loads.
As a DBA, allocate storage space proactively before processing large volumes of data.
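One way to pre-allocate space is to extend the segment ahead of time; the table name `big_load_table` and the size are illustrative:

```sql
-- Pre-extend the segment before a large load, so extent allocation
-- (recursive SQL run as SYS) does not happen during the load itself
ALTER TABLE big_load_table ALLOCATE EXTENT (SIZE 1G);
```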
(V) Optimizer and statistics
Sometimes code that runs fine in the test environment brings the production environment to a standstill.
This happens when statistics are not gathered in production in a timely manner: the Oracle optimizer, unaware of the latest data and application characteristics, mistakenly chooses a suboptimal execution path.
Therefore, gather statistics regularly so that the cost-based optimizer (CBO) can do its job well.
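Statistics are gathered with the DBMS_STATS package; the schema name below is illustrative:

```sql
-- Gather statistics for one schema, including its indexes (cascade => TRUE)
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP_OWNER', cascade => TRUE);
```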
(VI) Unreasonable parameter settings
System parameters, chiefly memory parameters and process parameters, must be reviewed and tuned sensibly.
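A sketch of inspecting and adjusting two such parameters; the values are illustrative, not recommendations:

```sql
-- Inspect current settings (SQL*Plus)
SHOW PARAMETER sga_target
SHOW PARAMETER processes

-- Adjust; these take effect after a restart when set in the SPFILE
ALTER SYSTEM SET sga_target = 4G  SCOPE = SPFILE;
ALTER SYSTEM SET processes  = 500 SCOPE = SPFILE;
```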
(VII) Unreasonable storage deployment
An unreasonable storage layout leads to inefficient I/O.
Solutions: ASM, RAID 10, and so on.
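Uneven I/O across datafiles can be confirmed from V$FILESTAT before redeploying storage:

```sql
-- Datafiles ranked by physical I/O; a few hot files suggest an unbalanced layout
SELECT df.name, fs.phyrds, fs.phywrts
FROM   v$filestat fs
JOIN   v$datafile df ON df.file# = fs.file#
ORDER  BY fs.phyrds + fs.phywrts DESC;
```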
(VIII) Frequent database connection operations
This is quite common in C/S (client/server) architectures, and almost extinct in B/S (browser/server) architectures, which typically use connection pools.
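Connection churn shows up in V$SYSSTAT; these statistic names are standard:

```sql
-- 'logons cumulative' growing much faster than 'logons current'
-- suggests frequent connect/disconnect cycles
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('logons cumulative', 'logons current');
```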
(IX) Unreasonable redo log design
If the redo log files are too small, checkpoints are triggered too frequently, keeping the buffer cache flushing and the I/O subsystem busy.
If there are too few redo log groups, the archiver may fail to keep up with the rate at which redo entries are generated, forcing the database to wait on log switches.
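Log switch frequency can be read from V$LOG_HISTORY, and a larger group added if switches come too often; the group number, member path, and size below are illustrative:

```sql
-- Log switches per hour; frequent switches usually mean the files are too small
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY hour;

-- Hypothetical fix: add a larger redo log group
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/redo04.log') SIZE 512M;
```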