Improving the Hit Rate and Related Optimization of Oracle Databases (1)


Oracle occupies a very important position in the database industry and has led much of its development. This article is about Oracle database debugging and optimization; it mainly covers hit-rate-related issues, including comparisons between different ways of measuring and improving them.

Calculating various hit rates in Oracle, and related tuning

1) Library cache hit rate:

Calculation formula: library cache hit ratio = sum(pinhits) / sum(pins)

SQL> select sum(pinhits)/sum(pins) from v$librarycache;

Generally this should be above 98%. Otherwise, consider increasing the shared pool, using bind variables, or adjusting parameters such as cursor_sharing.
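As a quick sketch of this formula, here is a Python stand-in for the v$librarycache rows; the namespaces and values are illustrative, not from a real instance:

```python
# Library cache hit ratio = sum(pinhits) / sum(pins).
# The rows below stand in for v$librarycache output (made-up values).
rows = [
    {"namespace": "SQL AREA", "pinhits": 980_000, "pins": 1_000_000},
    {"namespace": "TABLE/PROCEDURE", "pinhits": 49_500, "pins": 50_000},
]

hit_ratio = sum(r["pinhits"] for r in rows) / sum(r["pins"] for r in rows)
print(f"library cache hit ratio: {hit_ratio:.2%}")

# Guideline from the text: below 98% suggests a larger shared pool,
# bind variables, or a look at cursor_sharing.
if hit_ratio < 0.98:
    print("consider shared pool tuning")
```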

2) Shared pool memory usage:

SQL> select (1 - round(bytes/(&tsp_in_m*1024*1024), 2)) * 100 || '%' from v$sgastat where name = 'free memory' and pool = 'shared pool';

Where &tsp_in_m is your total shared pool size in MB.

Shared pool usage should stay stable between 75% and 90%: lower wastes memory, higher means memory is running short.

Query free shared pool memory:

SQL> select * from v$sgastat where name = 'free memory' and pool = 'shared pool';
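The usage formula from 2) in a few lines of Python; the pool size and free bytes are assumed values standing in for v$sgastat output:

```python
# Shared pool usage = (1 - free_bytes / total_pool_bytes) * 100.
tsp_in_m = 512                    # total shared pool size in MB (assumed)
free_bytes = 96 * 1024 * 1024     # 'free memory' row for the pool (assumed)

used_pct = (1 - free_bytes / (tsp_in_m * 1024 * 1024)) * 100
print(f"shared pool usage: {used_pct:.2f}%")
# Guideline from the text: a steady 75%-90% is healthy; lower wastes
# memory, higher means the pool is running short.
```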

3) db buffer cache hit rate:

Calculation formula: hit ratio = 1 - (physical reads / (db block gets + consistent gets))

SQL> select name, physical_reads, db_block_gets, consistent_gets, 1 - (physical_reads/(db_block_gets + consistent_gets)) "Hit Ratio" from v$buffer_pool_statistics where name = 'DEFAULT';

Generally this should be above 90%; otherwise consider increasing db_cache_size.

Another method for calculating the hit rate (from the official Oracle documentation, the Database Performance Tuning Guide):

Hit ratio = 1 - ((physical reads - physical reads direct - physical reads direct (lob)) / (db block gets + consistent gets - physical reads direct - physical reads direct (lob)))

The buffer cache hit rate is obtained by plugging in the values returned by the following query:

SQL> select name, value from v$sysstat where name in ('session logical reads', 'physical reads', 'physical reads direct', 'physical reads direct (lob)', 'db block gets', 'consistent gets');

4) Data buffer hit rate:

SQL> select value from v$sysstat where name = 'physical reads';
SQL> select value from v$sysstat where name = 'physical reads direct';
SQL> select value from v$sysstat where name = 'physical reads direct (lob)';
SQL> select value from v$sysstat where name = 'consistent gets';
SQL> select value from v$sysstat where name = 'db block gets';

The hit rate is calculated as follows:

Let x = physical reads direct + physical reads direct (lob)

Hit rate = 100 - (physical reads - x) / (consistent gets + db block gets - x) * 100

If the hit rate is below 90%, examine the application and consider whether to increase the data buffer.

5) Shared pool hit rate:

SQL> select sum(pinhits - reloads)/sum(pins) * 100 "Hit Ratio" from v$librarycache;

If the shared pool hit rate is below 95%, consider adjusting the application (usually bind variables are not being used) or increasing memory.



 

6) Ratio of sorts done in memory:

SQL> select * from v$sysstat where name = 'sorts (memory)';  -- number of in-memory sorts
SQL> select * from v$sysstat where name = 'sorts (disk)';    -- number of disk sorts
-- calculate the in-memory sort ratio
SQL> select round(&sort_in_memory/(&sort_in_memory + &sort_in_disk), 4) * 100 || '%' from dual;

The larger this ratio, the better. If it is too small, consider increasing the PGA.

7) PGA hit rate:

Calculation formula: BP x 100 / (BP + EBP)

BP: bytes processed

EBP: extra bytes read/written

SQL> select * from v$pgastat where name = 'cache hit percentage';

Or you can view it in the OEM graphical interface.

Oracle's recommended values can be obtained from an advice view:

SQL> select round(pga_target_for_estimate/1024/1024) target_mb, estd_pga_cache_hit_percentage cache_hit_perc, estd_overalloc_count from v$pga_target_advice;

The output of this query might look like the following:

TARGET_MB  CACHE_HIT_PERC  ESTD_OVERALLOC_COUNT
---------  --------------  --------------------
       63              23                   367
      125              24                    30
      250              30                     3
      375              39                     0
      500              58                     0
      600              59                     0
      700              60                     0
      800              60                     0
      900              60                     0

In this example, the PGA must be at least 375 MB (the smallest target with no estimated over-allocation).
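That conclusion can be derived mechanically: scan the advice rows for the smallest target whose estimated over-allocation count is zero. The tuples below approximate the sample output (illustrative values):

```python
# (target_mb, estd_pga_cache_hit_percentage, estd_overalloc_count),
# approximating the sample v$pga_target_advice output above.
advice = [
    (63, 23, 367), (125, 24, 30), (250, 30, 3),
    (375, 39, 0), (500, 58, 0), (600, 59, 0),
    (700, 60, 0), (800, 60, 0), (900, 60, 0),
]

# The minimum workable PGA target is the smallest size whose estimated
# over-allocation count is zero.
min_target_mb = min(t for t, _, overalloc in advice if overalloc == 0)
print(f"minimum PGA target: {min_target_mb} MB")
```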

I personally think the PGA hit rate should not be lower than 50%.

The following SQL counts how many times work areas were executed in each of the three modes: optimal, one-pass, and multi-pass:

SQL> select name profile, cnt, decode(total, 0, 0, round(cnt * 100 / total, 4)) percentage from (select name, value cnt, (sum(value) over ()) total from v$sysstat where name like 'workarea exec%');
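What that query reports, sketched with assumed statistic values (not from a real instance):

```python
# Share of work-area executions per mode, as the query above derives from
# the three 'workarea executions - *' statistics (illustrative counts).
stats = {
    "workarea executions - optimal": 9_800,
    "workarea executions - onepass": 190,
    "workarea executions - multipass": 10,
}

total = sum(stats.values())
for name, cnt in stats.items():
    print(f"{name}: {cnt * 100 / total:.2f}%")
# A healthy system runs almost everything in optimal mode; a growing
# multipass share points at an undersized PGA.
```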

8) Shared pool dictionary cache hit rate

Calculation formula: sum(gets - getmisses - usage - fixed) / sum(gets)

The hit rate should be greater than 0.85.

SQL> select sum(gets - getmisses - usage - fixed)/sum(gets) from v$rowcache;

9) Data cache hit rate

Calculation formula: 1 - (physical reads / (db block gets + consistent gets))

The hit rate should be greater than 0.90; the higher the better.

SQL> select name, value from v$sysstat where name in ('physical reads', 'db block gets', 'consistent gets');

10) Library cache hit rate in the shared pool

Calculation formula: sum(pins - reloads) / sum(pins)

The hit rate should be greater than 0.99.

SQL> select sum(pins - reloads)/sum(pins) from v$librarycache;

11) Check rollback segment contention

The sum(waits) value should be less than 1% of the sum(gets) value.

SQL> select sum(gets), sum(waits), sum(waits)/sum(gets) from v$rollstat;
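The 1% rule of thumb as a quick check; gets and waits are assumed totals standing in for v$rollstat:

```python
# Rollback segment contention: sum(waits) should be under 1% of sum(gets).
gets = 2_500_000   # assumed sum(gets) from v$rollstat
waits = 1_200      # assumed sum(waits) from v$rollstat

ratio = waits / gets
print(f"waits/gets = {ratio:.4%}")
if ratio >= 0.01:
    print("rollback segment contention: consider adding rollback segments")
```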

12) Check rollback segment shrink counts.

SQL> select name, shrinks from v$rollstat, v$rollname where v$rollstat.usn = v$rollname.usn;



 

Several common check statements

1. Find the SQL with the most sorts:

SQL> select hash_value, sql_text, sorts, executions from v$sqlarea order by sorts desc;

2. Find the SQL statements with the most disk reads:

SQL> select * from (select sql_text, disk_reads "total disk", executions "total exec", disk_reads/executions "disk/exec" from v$sql where executions > 0 and is_obsolete = 'N' order by 4 desc) where rownum < 11;

3. Find the SQL statements with the largest workload (in effect, also sorted by disk reads):

SQL> select substr(to_char(s.pct, '99.00'), 2) || '%' load, s.executions executes, p.sql_text from (select address, disk_reads, executions, pct, rank() over (order by disk_reads desc) ranking from (select address, disk_reads, executions, 100 * ratio_to_report(disk_reads) over () pct from sys.v_$sql where command_type != 47) where disk_reads > 50 * executions) s, sys.v_$sqltext p where s.ranking <= 5 and p.address = s.address order by 1, s.address, p.piece;

4. Use the following SQL to find inefficient SQL statements:

SQL> select executions, disk_reads, buffer_gets, round((buffer_gets - disk_reads)/buffer_gets, 2) hit_ratio, round(disk_reads/executions, 2) reads_per_run, sql_text from v$sqlarea where executions > 0 and buffer_gets > 0 and (buffer_gets - disk_reads)/buffer_gets < 0.8 order by 4 desc;

5. View the SQL statement a connection is currently running, by SID:

SQL> select /*+ push_subq */ command_type, sql_text, sharable_mem, persistent_mem, runtime_mem, sorts, version_count, loaded_versions, open_versions, users_opening, executions, users_executing, loads, first_load_time, invalidations, parse_calls, disk_reads, buffer_gets, rows_processed, sysdate start_time, sysdate finish_time, '>' || address sql_address, 'N' status from v$sqlarea where address = (select sql_address from v$session where sid = &sid);

*************** Analysis and solutions for a low Oracle buffer hit rate ***************

First, confirm the following query results:

1. Query the buffer hit rate (is it below 90%?):

select round(1 - sum(decode(name, 'physical reads', value, 0)) / (sum(decode(name, 'db block gets', value, 0)) + sum(decode(name, 'consistent gets', value, 0))), 4) * 100 || '%' "Hit Ratio" from v$sysstat;

2. Query buffer usage (are any buffers in the free state?):

select count(*), status from v$bh group by status;

3. Query related wait events (are there relevant wait events?):

select event, total_waits from v$system_event where event in ('free buffer waits');

4. Current cache size (is it already large?):

select value/1024/1024 cache_size from v$parameter where name = 'db_cache_size';

5. Top wait event analysis (is the db file scattered read proportion large?):

select event, total_waits, suml from (select event, total_waits, round(total_waits/sumt * 100, 2) || '%' suml from (select event, total_waits from v$system_event), (select sum(total_waits) sumt from v$system_event) order by total_waits desc) where rownum < 6 and event not like 'rdbms%' and event not like 'pmon%' and event not like 'SQL*Net%' and event not like 'smon%';

6. Recommended values from db_cache_advice (a feature since 9i; it helps size the cache more accurately):

select block_size, size_for_estimate, size_factor, estd_physical_reads from v$db_cache_advice;

Analysis:

A buffer hit rate below 90% is generally considered low.

If no buffers are in the free state, that does not necessarily mean the cache needs to grow; it also depends on the current cache size (whether it can be increased further, and whether the hardware and overhead costs allow it).

A free buffer wait means a server process cannot find a free buffer and is posting the database writer to flush dirty buffers to make some free; once DBWn writes the dirty blocks to disk, those buffers are freed for reuse. The main causes are:

1. DBWn cannot keep up with writing dirty buffers: the I/O system is slow. Spread files as evenly as possible across all devices.

2. The buffer cache is too small or too large.

3. Consider increasing db_writer_processes.

4. There may be a single large transaction, or a continuous stream of large transactions.

Observe whether this event persists and whether its value keeps increasing. If it keeps growing, consider increasing db_cache_size or optimizing the SQL.

The db file scattered read wait usually indicates waits related to full table scans. Blocks read by a full table scan are generally scattered across the buffer cache rather than stored contiguously, and the scans themselves often mean an index is missing or not being used. This wait occurs while a session waits for a multiblock I/O to complete, with the requested blocks distributed discretely through the buffer cache. It usually means there are too many full table scans, or that I/O capacity is insufficient or contended.

When this event dominates, it is usually a problem: it indicates a large volume of unindexed scans.

Db_cache_advice helps us adjust the size of db_cache_size, but this is just a reference and is not necessarily accurate.

Based on a comprehensive analysis of the six checks above, decide whether to increase db_cache_size or to place frequently used (small) tables in the KEEP pool.

Most of the time, however, the real problem is SQL optimization (for example, whether there are many full table scans).

Indexing is the most practical way to improve database performance and SQL statements without changing the program.

I have encountered a similar problem in production: despite a sizable cache, the hit rate was a relatively low 96%. By optimizing the SQL (adding indexes to avoid full table scans), the hit rate rose and the program's run time dropped from 2 hours to under 10 minutes.

This raises the question of how to locate high-consumption SQL and full table scans, which is not described in detail here; only the methods are listed. How to use these tools is introduced in the relevant chapters.

1. sql_trace traces the session and tkprof formats the output; look for SQL with heavy disk reads, heavy logical reads, or long run times, and optimize it. Such high-consumption SQL is generally accompanied by full table scans.

2. statspack analysis. Take snapshots during busy periods, check whether db file scattered read appears in the top events, and check whether the top SQL has problems.

One more note: if the hardware permits, increase db_cache_size as far as is reasonable to reduce disk reads, but bigger is not always better. Size it according to your own database's data volume, because a very large db_cache_size also increases database management overhead. That overhead may not noticeably affect performance, and hardware keeps getting cheaper, so analyze the specific situation. In my opinion it is best to make full use of what you have, avoid waste, and find the essence of the problem. Tuning is an art.



 

***********************Oracle Database buffer hit rate*****************

1. View the buffer hit rate of an Oracle database:

select a.value + b.value "logical_reads", c.value "phys_reads", round(100 * ((a.value + b.value) - c.value) / (a.value + b.value)) "buffer hit ratio" from v$sysstat a, v$sysstat b, v$sysstat c where a.statistic# = 40 and b.statistic# = 41 and c.statistic# = 42;

(Note: statistic# values vary between Oracle versions; filtering by statistic name, as in the queries below, is more portable.)

2. Database buffer hit rate:

SQL> select value from v$sysstat where name = 'physical reads';
VALUE: 3714179
SQL> select value from v$sysstat where name = 'physical reads direct';
VALUE: 0
SQL> select value from v$sysstat where name = 'physical reads direct (lob)';
VALUE: 0
SQL> select value from v$sysstat where name = 'consistent gets';
VALUE: 856309623
SQL> select value from v$sysstat where name = 'db block gets';
VALUE: 19847790

The hit rate is calculated as follows:

Let x = physical reads direct + physical reads direct (lob)

Hit rate = 100 - (physical reads - x) / (consistent gets + db block gets - x) * 100

If the hit rate is below 90%, examine the application and consider whether to increase the data buffer.
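Plugging the sysstat values queried above into this formula:

```python
# Statistic values from the v$sysstat queries above.
physical_reads = 3_714_179
physical_reads_direct = 0
physical_reads_direct_lob = 0
consistent_gets = 856_309_623
db_block_gets = 19_847_790

x = physical_reads_direct + physical_reads_direct_lob
hit_rate = 100 - (physical_reads - x) / (consistent_gets + db_block_gets - x) * 100
print(f"buffer cache hit rate: {hit_rate:.2f}%")
```

This works out to roughly 99.6% here, comfortably above the 90% guideline.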

Shared pool hit rate:

SQL> select sum(pinhits)/sum(pins) * 100 "Hit Ratio" from v$librarycache;

If the shared pool hit rate is below 95%, adjust the application (usually bind variables are not being used) or increase memory.

Sorting

SQL> select name, value from v$sysstat where name like '%sort%';

If the proportion sorts (disk) / (sorts (memory) + sorts (disk)) is too high, it usually means sort_area_size is relatively small; consider adjusting the corresponding parameters.

About log_buffer:

SQL> select name, value from v$sysstat where name in ('redo entries', 'redo buffer allocation retries');

If the ratio of redo buffer allocation retries / redo entries exceeds 1%, consider increasing log_buffer.
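The same 1% test for the redo log buffer, with assumed statistic values:

```python
# log_buffer check: 'redo buffer allocation retries' should stay below 1%
# of 'redo entries' (values assumed for illustration).
redo_entries = 5_000_000
redo_buffer_allocation_retries = 2_000

ratio = redo_buffer_allocation_retries / redo_entries
print(f"retries/entries = {ratio:.3%}")
if ratio > 0.01:
    print("consider increasing log_buffer")
```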

 

 

Source: http://www.codesky.net/article/201103/141622.html

 
