Library cache hit rate and wait events for Oracle performance optimization


3. Library cache size
 
We talked about what programmers can do to help share execution plans. Now let's look at what the DBA can do. First, the execution plan of every statement is kept in the library cache. When the optimizer parses a statement, it first searches the library cache, using the statement text as the key, to see whether an execution plan for that statement already exists. If one exists, it is fetched and handed to the server process for execution; this is a soft parse. If the same statement is not in the library cache, the optimizer generates an execution plan for it and saves the plan into the library cache; this is a hard parse.

The library cache is limited in size, and if you have many statements it is impossible to keep every statement's plan there. Suppose a user issues a new statement A. The optimizer does not find a matching statement in the library cache, so it hard-parses the statement and generates plan A. When the optimizer tries to save plan A into the library cache, it finds there is no free space, so it throws out the execution plan of some existing statement to make room for plan A. The plan that gets thrown out is called the victim, and the eviction is called statement "aging". When an aged statement is executed again, it must be hard-parsed all over again.

If your library cache is small, statements age out frequently, which silently increases the number of hard parses. So the library cache cannot be set too small. It should not be set too large either, because empty space wastes valuable memory. How big should the library cache be? There is no uniform standard; you can only judge from historical observation, and the numbers to watch are the soft and hard parse counts.
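
As a quick way to watch those counters between Statspack snapshots, the cumulative parse statistics can be read straight from V$SYSSTAT. A minimal sketch (the values are cumulative since instance startup, so take two samples some time apart and compare the difference):

-- Cumulative parse counters; sample twice and diff to get per-second rates
-- comparable to the Statspack Load Profile.
select name, value
  from v$sysstat
 where name in ('parse count (total)', 'parse count (hard)');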
 
We have already covered the soft and hard parse statistics in the dynamic performance views. Next, let's look at the soft and hard parse data in the STATSPACK report. The report has a Load Profile section, the so-called summary information, which includes the soft and hard parse figures:
 
Load Profile
~~~~~~~~~~~~                    Per Second   Per Transaction
Redo size:                       16,233.08        422,060.00   (redo generated, in bytes; statistic: redo size)
Logical reads:                    1,413.08         36,740.00   (logical reads, in database blocks; statistic: session logical reads)
Block changes:                       43.19          1,123.00   (block changes; statistic: db block changes)
Physical reads:                   1,198.92                     (physical reads, in blocks; statistic: physical reads)
Physical writes:                      0.00                     (physical writes, in blocks; statistic: physical writes)
User calls:                           0.96             25.00   (user calls; statistic: user calls)
Parses:                               0.65             17.00   (parses, soft and hard combined; statistic: parse count (total))
Hard parses:                          0.04              1.00   (hard parses; statistic: parse count (hard))
Sorts:                                2.38             62.00   (sorts; statistics: sorts (memory), sorts (disk))
Logons:                               0.00              0.00   (logons; statistic: logons cumulative)
Executes:                             1.88             49.00   (executions; statistic: execute count)
Transactions:                         0.04                     (transactions per second)
 
Hard parses is the number of hard parses per second. In a typical medium-sized OLTP application this value should stay below 100. If it goes above 100, there are too many hard parses and execution plans are not being shared. The usual causes are that bind variables are not used, or that the library cache is too small and statements age out too quickly. Treat 100 only as a reference point; to be accurate you should compare against your own historical data.
 
Parses minus Hard parses gives the number of soft parses, and this should not be excessive either. A medium-sized OLTP system usually soft-parses a few hundred times per second; a large OLTP application may soft-parse thousands of times per second. (If the value is too high, the application should move toward parse-once, execute-many and avoid re-parsing altogether.)
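
One common way to confirm the "no bind variables" diagnosis is to look for statements that are identical except for their literal values. A sketch, assuming 10g or later where V$SQLAREA exposes FORCE_MATCHING_SIGNATURE:

-- Statements that differ only by literals share one force_matching_signature;
-- many copies of a single signature usually means bind variables are missing.
select force_matching_signature,
       count(*)        as copies,
       sum(executions) as total_execs
  from v$sqlarea
 where force_matching_signature <> 0
 group by force_matching_signature
having count(*) > 10
 order by copies desc;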
 
Besides the summary information, the "Instance Efficiency" ratios also contain parsing data:
 
Instance Efficiency Percentages (Target 100%)
 
Buffer Nowait %:              100.00    Redo NoWait %:     100.00
Buffer Hit %:                  15.16    In-memory Sort %:  100.00
Library Hit %:                 99.01    Soft Parse %:       94.12
Execute to Parse %:            65.31    Latch Hit %:       100.00
Parse CPU to Parse Elapsd %:  100.00    % Non-Parse CPU:    99.41
 
The items related to the library cache and to soft/hard parsing are:
 
Library Hit %: the hit ratio in the library cache. A soft parse is a library cache hit. This ratio should stay above 90%; if it is lower, either the library cache is too small or bind variables are not being used.
 
Soft Parse %: the formula is 100 × (1 − parse count (hard) / parse count (total)), i.e. the proportion of soft parses among all parses. If the value falls below 95%, hard parsing is somewhat excessive; if it falls below 80%, there is a serious problem with execution plan sharing. The remedy is to enlarge the library cache or to use bind variables.
 
Execute to Parse %: a measure of how many executions occur per parse, computed as 100 × (1 − parse count (total) / execute count). It is not directly needed in this lesson, but it is worth understanding.
 
Finally, there are two parse-time ratios, which are less important for our purposes:
 
Parse CPU to Parse Elapsd %: 100 × parse time cpu / parse time elapsed, i.e. the CPU time spent parsing as a fraction of the wall-clock time spent parsing.
 
% Non-Parse CPU: the formula is 100 × (1 − parse time cpu / CPU used by this session), i.e. the percentage of the session's CPU time that is not spent parsing. If this value is low, parsing is consuming too much CPU.
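
If you want to reproduce these parse-related ratios outside of Statspack, here is a rough sketch against V$SYSSTAT; note that these values are cumulative since instance startup, whereas Statspack computes them from the delta between two snapshots:

-- Rough, cumulative-since-startup versions of Soft Parse %, Parse CPU to
-- Parse Elapsd % and % Non-Parse CPU.
select round(100 * (1 - hard.value / nullif(total.value, 0)), 2) as soft_parse_pct,
       round(100 * pcpu.value / nullif(pela.value, 0), 2)        as parse_cpu_to_elapsed_pct,
       round(100 * (1 - pcpu.value / nullif(scpu.value, 0)), 2)  as non_parse_cpu_pct
  from v$sysstat total, v$sysstat hard, v$sysstat pcpu, v$sysstat pela, v$sysstat scpu
 where total.name = 'parse count (total)'
   and hard.name  = 'parse count (hard)'
   and pcpu.name  = 'parse time cpu'
   and pela.name  = 'parse time elapsed'
   and scpu.name  = 'CPU used by this session';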
 
Finally, pay attention to the Shared Pool Statistics section of the report:
 
Shared Pool Statistics          Begin     End
                               -------  -------
             Memory Usage %:    52.76    54.75
    % SQL with executions>1:    65.90    77.17
  % Memory for SQL w/exec>1:    86.34    92.30
 
1) Memory Usage %: this item has nothing to do with parsing, but it is part of the shared pool information, so we cover it here. It is the percentage of the shared pool currently in use, and it should stay stable roughly between 75% and 90%. We should not let the shared pool sit largely idle, because a bigger shared pool also brings a bigger management burden; if you do not need such a large pool, shrink it. Conversely, we should not leave the shared pool with too little free memory, because that causes unexpected aging of the objects cached in it. From experience, in a typical OLTP application this value should stay between 75% and slightly below 90% (see the query sketch after this list).
 
2) % SQL with executions > 1: the formula is 100 × (1 − number of SQL statements executed only once / number of all SQL statements), i.e. the percentage of SQL statements in the shared pool that have been executed more than once. The larger the better: a larger value means execution plans are being reused more effectively.
 
3) % Memory for SQL w/exec > 1: the formula is 100 × (1 − memory occupied by SQL statements executed only once / memory occupied by all SQL statements), i.e. the proportion of statement memory taken up by statements that have been executed more than once.
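
The Begin/End figures above come from Statspack snapshots. As a rough live approximation of Memory Usage %, the shared pool's free space can be read from V$SGASTAT (an instantaneous view, not a snapshot delta):

-- Instantaneous shared pool usage, comparable in spirit to "Memory Usage %"
select round(100 * (1 - free.bytes / total.bytes), 2) as pct_used
  from (select bytes
          from v$sgastat
         where pool = 'shared pool'
           and name = 'free memory') free,
       (select sum(bytes) as bytes
          from v$sgastat
         where pool = 'shared pool') total;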
 
That covers the parsing information in the STATSPACK report. The proportion of soft to hard parses tells you whether your library cache is large enough. Note, however, that the DBA cannot size the library cache directly: it is an important component of the shared pool. We adjust the size of the shared pool, and that in turn changes how much space the library cache and the other parts of the shared pool can use.
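
If the shared pool does need resizing and the instance is not using automatic SGA management, a manual adjustment looks like the sketch below; the 512M figure is purely illustrative, and with automatic memory management (SGA_TARGET / MEMORY_TARGET) you would raise the target instead and treat SHARED_POOL_SIZE only as a minimum:

-- Illustrative only: choose the value from your own workload history.
alter system set shared_pool_size = 512M scope = both;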
 
 
4. Library cache hit rate and the V$LIBRARYCACHE view
 
In the Statspack report above we already met the library cache hit ratio. Now let's look at it through a view: the library cache can be examined in V$LIBRARYCACHE, which is more detailed than the Statspack report. Before introducing the view, we need a few concepts about the library cache.
 
(1) Library cache handles and library cache memory chunks.
 
Every object cached in the library cache is split into several pieces of memory, much like a whole chicken is cut into legs, wings, and feet. Oracle does this for more flexible memory management, because finding one large contiguous piece of free memory is slower than finding several small ones; we will not go deeper into that here. Back to the topic: suppose a library cache object (say, the execution plan of a SQL statement) has its memory cut into four small chunks, stored at different, non-contiguous locations in the library cache. To tie the four chunks together, Oracle allocates one more small piece of memory for the object, which contains the addresses of the other four chunks as well as some basic information about the cached object, such as its name and type. This piece of memory is called the library cache object handle.
 
When a library cache object is accessed, for example when a soft parse has to read an execution plan from the library cache, Oracle first locates the handle and reads the information in it; this is called a library cache Get. If the object's handle is not in the library cache, Oracle must allocate memory and build a handle there, which is a handle miss (Get Miss). Conversely, if the handle is found in the library cache, that is a handle hit (Get Hit). A Get Miss occurs during hard parsing; a soft parse is a Get Hit.
 
After the addresses of the other memory chunks are read from the handle, each access to one of those chunks is called a library cache Pin. If the corresponding chunk is no longer in memory, that is a Pin Miss; otherwise it is a Pin Hit.
 
When objects in the library cache change, other objects may become invalid (Invalidation). For example, if you modify the structure of a table, the execution plans of SQL statements that query that table become invalid. In fact, most DDL against a table invalidates the execution plans of related SQL statements, and even Grant and Revoke operations, which appear to have nothing to do with execution plans, cause invalidations. If you have a table TAB1 and you issue something like "grant some privilege on TAB1 to some user" or "revoke some privilege on TAB1 from some user", every execution plan that references TAB1 becomes invalid. When an object is invalidated, all of its memory chunks except the handle are released, and the object must be reloaded before it can be used again. In the example above, after DDL against TAB1, every statement that uses TAB1 must be re-parsed the next time it runs.
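
A minimal sketch of how this can be observed; TAB1 and SCOTT are hypothetical names, and the INVALIDATIONS column of V$LIBRARYCACHE is cumulative, so compare the values before and after:

-- Hypothetical demonstration: a grant on TAB1 invalidates cursors that reference it.
select namespace, invalidations, reloads
  from v$librarycache
 where namespace = 'SQL AREA';

grant select on tab1 to scott;

-- Re-run the statements that query TAB1, then check again:
-- INVALIDATIONS (and subsequently RELOADS) will have grown.
select namespace, invalidations, reloads
  from v$librarycache
 where namespace = 'SQL AREA';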
 
Therefore, be careful with DDL. Unless a DDL statement really must run immediately, it is better to wait until the database is not busy. As a DBA, keep this point firmly in mind: unless it has to be executed right away, run DDL when the database is quiet.
 
Also, invalidation is not the same as the aging of a library cache object. Aging means the handle and all of the object's memory chunks are released. With invalidation, only the object handle is kept in the library cache, while all the other chunks are released. When an invalidated object is referenced again, Oracle loads it back into the library cache; this operation is called a Reload. In general, Reload is unrelated to Get, because a Reload happens only after invalidation, and invalidation does not touch the handle. Now let's look at what these values mean.
 
We have now covered Get and Pin with their hits and misses, and what Invalidation and Reload are. In an OLTP system the Get hit ratio should be above 90%, and the ratio of Reloads to Pins should be below 1%. If these limits are exceeded, the library cache is not being used well. There are two main causes: bind variables are not used, or the library cache is too small. A Reload-to-Pin ratio that is too high can also be caused by running DDL during busy periods.
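
A sketch of how to pull both ratios, per namespace, straight from V$LIBRARYCACHE:

-- Get hit ratio and reload-to-pin ratio per namespace (cumulative since startup)
select namespace,
       gets,
       round(gethitratio * 100, 2)               as get_hit_pct,
       pins,
       reloads,
       round(100 * reloads / nullif(pins, 0), 2) as reload_to_pin_pct,
       invalidations
  from v$librarycache
 order by gets desc;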
 
In addition, the first column of V$LIBRARYCACHE is NAMESPACE. It is roughly the type of object stored in the library cache. What is a namespace? It is the scope within which an object's name must be unique. For example, table names and column names are not in the same scope, so a table and a column may share a name; likewise user names are in a different scope from table and column names, so a user may share a name with a table or a column. I can have a table named AA and a user named AA: the two AAs are in different scopes, that is, in different namespaces. It is like a phone number 12345678 in Beijing and a phone number 12345678 in Shanghai: the numbers are identical, but they cause no conflict because they belong to different areas. This scope can loosely be thought of as a type.
 
Here are all the rows of V$LIBRARYCACHE (showing only some of the columns):
 
SQL> select * from v$librarycache;
 
NAMESPACE             GETS   GETHITS  GETHITRATIO    PINS  PINHITS
--------------- ---------- --------- ------------ ------- --------
SQL AREA             21811      4149   .190225116  120258   105272
TABLE/PROCEDURE      25152     16372   .650922392   60649    46008
BODY                  4360      4098   .939908257    5931     5537
TRIGGER                320       251      .784375    1655     1576
INDEX                  453       128   .282560706    2065     1531
CLUSTER                755       728   .964238411    2339     2296
OBJECT                   0         0            1       0        0
PIPE                     0         0            1       0        0
JAVA SOURCE              0         0            1       0        0
JAVA RESOURCE            0         0            1       0        0
JAVA DATA                0         0            1       0        0
 
We can see that there are 11 namespaces in the library cache, so we can roughly say there are 11 kinds of information in it. A very important indicator is the library cache hit ratio, which is the overall GETHITRATIO across all objects in the library cache. It can be computed as follows:
 
select sum(gethits) / sum(gets) from v$librarycache;
 
This ratio should be above 90%. Another important factor is the ratio of Reload to Pin, which should be less than 1%.
 
If these targets are not met, the diagnosis is simple: either SQL statements are not being shared, or the shared pool is too small.
 
For the library cache hit rate you can also look at the Library Hit % item in the STATSPACK report, which we discussed above.
 
 
5. Library cache lock and library cache pin wait events
 
Let me add something that is not covered in the SG (the official course material). Much of this optimization course does not follow the SG's order, and we also add a lot of practical content, so be sure to take notes.
 
Wait events are everywhere in Oracle, and recording them costs some performance. But think about it: without these wait events and statistics, could you easily find where a problem is? Oracle spends the CPU to record them whether or not you use them, so a good DBA must be familiar with the various wait events and statistics. Next we introduce two wait events related to the library cache: library cache lock and library cache pin.
 
We mentioned above that a library cache object is cut into several memory chunks, and that a separate handle records the address of each chunk along with other information. To modify the information in a handle, a process must take an exclusive lock on it; if another process needs to read or modify that handle at the same time, it has to wait, and Oracle records the waiting time under the library cache lock event. Reads and writes of an object's memory chunks also cannot happen at the same time: if someone is writing, your read has to wait, and waits on the object's memory chunks are recorded under the library cache pin event. If these two wait events are frequent, it again suggests that the library cache is too small or that execution plans are not being shared; running DDL while the database is busy will also produce these waits.
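
A quick sketch for spotting sessions currently stuck on these two events; in 10g and later V$SESSION exposes the current wait directly, on older releases join V$SESSION_WAIT instead:

-- Who is currently waiting on library cache lock / pin, and for how long
select sid, serial#, username, event, seconds_in_wait, sql_id
  from v$session
 where event in ('library cache lock', 'library cache pin');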
 
 
6. The library cache object view
 
There is a view that shows information about the objects cached in the library cache: V$DB_OBJECT_CACHE. Its main columns are:
 
OWNER: object OWNER
 
NAME: Object NAME
 
DB_LINK: Database Link name
 
NAMESPACE: NAMESPACE. For an SQL CURSOR, its namespace is CURSOR. The namespaces and types have similar meanings. In addition to CURSOR, there are also tables, indexes, and so on.
 
TYPE: TYPE
 
SHARABLE_MEM: Memory occupied
 
LOADS: Number of times objects are loaded
 
EXECUTIONS: Number of times the object is executed
 
LOCKS: number of sessions currently holding a lock on the object
 
PINS: number of sessions currently pinning the object
 
KEPT: whether to use DBMS_SHARED_POOL.KEEP to keep the object in the Shared Pool
 
CHILD_LATCH: corresponding sub-Latch
 
INVALIDATIONS: number of times the object has been invalidated
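
Two small sketches of how this view and the KEPT column are typically used; the object name in the second statement is hypothetical, and DBMS_SHARED_POOL may need to be installed from dbmspool.sql first:

-- The 20 largest objects currently held in the library cache
select *
  from (select owner, name, namespace, type, sharable_mem, loads, executions, kept
          from v$db_object_cache
         order by sharable_mem desc)
 where rownum <= 20;

-- Pin a heavily used package in the shared pool so it is never aged out
-- ('P' stands for package/procedure/function).
exec dbms_shared_pool.keep('APP_OWNER.PKG_COMMON', 'P');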
 
 
7. Differences between OLTP and OLAP
 
The library cache tuning described above suits OLTP, but not most OLAP systems. What is OLTP? OLTP stands for online transaction processing: the applications behind websites and forums, bank ATMs, and bank teller front-ends are all OLTP applications. OLTP focuses on capturing data, not on analyzing it. OLAP, online analytical processing, refers to applications that analyze existing data to find patterns or predict trends: data warehouses, DSS (decision support systems), and data mining are OLAP applications. You may have heard the story about beer and diapers, in which analysis of supermarket sales data found that beer and diapers were often bought together; the program that discovered that association is an OLAP application, while the application running at the Walmart checkout is an OLTP application. OLTP collects the data, and OLAP analyzes it.
 
An OLTP system may have a very large number of concurrent sessions, but they all run short transactions or queries, and most of the statements are similar. Sharing the execution plans of those similar statements is therefore very important for OLTP.
 
An OLAP system, by contrast, may have only a few concurrent sessions, and queries dominate, because OLAP's main task is analyzing data. OLAP queries often run for a long time and are rarely repeated, so sharing execution plans brings little benefit. For OLAP, therefore, the shared pool can be kept relatively small.
 
We will say more about the differences between OLTP and OLAP later, including which Oracle features are designed for each kind of system.
 
 
 
This article is from the "ye shaochen" blog
