ORACLE Database Performance Optimization: Memory and Disk

1. Memory Structure Optimization Overview

1.1 Buffers

The memory areas that affect database performance include the SGA, which is shared, and the PGA, which is private to each server process. The SGA in turn contains the shared pool, large pool, Java pool, database buffer cache, streams pool, and redo log buffer.

1.2 Automatic Memory Management

Oracle generally uses automatic memory management to manage system memory. In this mode, you first set the initialization parameters MEMORY_TARGET (target memory size) and MEMORY_MAX_TARGET (maximum memory size); Oracle then automatically manages the total memory of the instance, growing and shrinking the SGA and PGA components as needed.
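As a sketch of how these parameters are set (the sizes shown are illustrative, not recommendations):

```sql
-- MEMORY_MAX_TARGET is static: set it in the spfile and restart the instance.
ALTER SYSTEM SET MEMORY_MAX_TARGET = 4G SCOPE = SPFILE;

-- After the restart, MEMORY_TARGET can be changed dynamically up to that limit.
ALTER SYSTEM SET MEMORY_TARGET = 3G;

-- Setting the SGA and PGA targets to 0 lets Oracle size both automatically.
ALTER SYSTEM SET SGA_TARGET = 0;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0;
```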

1.3 Automatic Shared Memory Management (SGA)

If the initialization parameter SGA_TARGET is 0 when the database instance starts, automatic shared memory management is disabled and the size of each buffer is determined by the corresponding parameters in the initialization parameter file. If necessary, you can manually set the initialization parameters DB_CACHE_SIZE, SHARED_POOL_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, and STREAMS_POOL_SIZE to control the individual components of the shared memory area.

Some components are always sized manually by the administrator and are not affected by automatic memory management: the redo log buffer, the KEEP and RECYCLE buffer pools, the buffer caches for non-standard block sizes, the fixed SGA, and other internally allocated memory areas. These are controlled by the initialization parameters DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE, DB_nK_CACHE_SIZE, and LOG_BUFFER. Their memory is taken out of SGA_TARGET first; the remainder of SGA_TARGET is then distributed among the automatically tuned buffers.
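A sketch of this division of labor, with illustrative sizes: SGA_TARGET enables automatic shared memory management, while the manually tuned components are set explicitly and are carved out of the same target.

```sql
-- Enable automatic shared memory management
-- (requires STATISTICS_LEVEL of TYPICAL or ALL).
ALTER SYSTEM SET SGA_TARGET = 2G;

-- Components outside automatic tuning are sized by hand; their memory
-- is taken from SGA_TARGET before the rest is auto-distributed.
ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 64M;
ALTER SYSTEM SET DB_RECYCLE_CACHE_SIZE = 32M;
ALTER SYSTEM SET DB_16K_CACHE_SIZE = 32M;  -- cache for a non-standard block size
```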

1.4 Dynamically Changing Buffer Sizes

If the system uses neither automatic memory management nor automatic shared memory management, you can use the ALTER SYSTEM statement to change the initialization parameters DB_CACHE_SIZE, JAVA_POOL_SIZE, LARGE_POOL_SIZE, and SHARED_POOL_SIZE dynamically. (LOG_BUFFER is a static parameter: it can only be changed in the parameter file, followed by an instance restart.)

2. Data Buffer Adjustment

You can check whether the data buffer cache is reasonably sized in the following two ways.

View the hit ratio of the database buffer cache with the following query:

Select 1 - (physical.value - direct.value - lobs.value) / logical.value
       "Buffer Cache Hit Ratio"
  from v$sysstat physical, v$sysstat direct, v$sysstat lobs, v$sysstat logical
 where physical.name = 'physical reads'
   and direct.name = 'physical reads direct'
   and lobs.name = 'physical reads direct (lob)'
   and logical.name = 'session logical reads';

Running result:

SQL> SELECT 1 - (physical.value - direct.value - lobs.value) / logical.value
  2         "Buffer Cache Hit Ratio"
  3    FROM v$sysstat physical, v$sysstat direct, v$sysstat lobs, v$sysstat logical
  4   WHERE physical.name = 'physical reads'
  5     AND direct.name = 'physical reads direct'
  6     AND lobs.name = 'physical reads direct (lob)'
  7     AND logical.name = 'session logical reads';

Buffer Cache Hit Ratio
----------------------
     0.958117758479709

SQL>

In practice, a value above 0.9 indicates the buffer cache is adequately sized. Here the hit ratio is about 0.958, which is quite good: the DEFAULT buffer pool in the data buffer cache is sufficient for this application.

To see the logical and physical read statistics of each buffer pool in the data buffer cache, query the dynamic performance view v$buffer_pool_statistics. For example:

SQL> SELECT t1.name, t1.physical_reads, t1.db_block_gets, t1.consistent_gets,
  2         1 - (t1.physical_reads / (t1.db_block_gets + t1.consistent_gets)) "Hit Ratio"
  3    FROM v$buffer_pool_statistics t1;

NAME       PHYSICAL_READS DB_BLOCK_GETS CONSISTENT_GETS  Hit Ratio
---------- -------------- ------------- --------------- ----------
DEFAULT       44480354387     571096618     33595740186 -0.3018575

SQL>

Adjusting the data buffer size

If the data buffer hit ratio is below 90%, consider enlarging the data buffer cache. Before increasing it, query the advisory view v$db_cache_advice (select * from v$db_cache_advice) to determine how much a larger cache would reduce the number of physical I/Os, and then modify the initialization parameter DB_CACHE_SIZE dynamically. DB_CACHE_SIZE controls the buffer cache for the standard block size; to resize a non-standard buffer cache, modify the initialization parameter DB_nK_CACHE_SIZE instead (n is 2, 4, 8, 16, or 32, and n cannot be the standard block size).
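A sketch of how v$db_cache_advice can be read before resizing (the column names are from the standard view; the target size chosen is illustrative):

```sql
-- Estimated physical reads at each candidate cache size for the
-- standard-block-size DEFAULT buffer pool.
SELECT size_for_estimate,
       buffers_for_estimate,
       estd_physical_read_factor,
       estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND block_size = (SELECT value FROM v$parameter
                      WHERE name = 'db_block_size')
 ORDER BY size_for_estimate;

-- If a larger cache clearly pays off, resize it online:
ALTER SYSTEM SET DB_CACHE_SIZE = 1G;
```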

3. Shared Pool Adjustment

3.1 Library Cache Hit Rate

You can query the dynamic performance view v$librarycache (select * from v$librarycache;), which holds library cache activity statistics accumulated since the database was last started. Each row reflects the statistics of one type of entry in the library cache, identified by the NAMESPACE column. For example:

SQL> SELECT t1.namespace, t1.pins, t1.pinhits, t1.reloads, t1.invalidations
  2    FROM v$librarycache t1 ORDER BY t1.namespace;

NAMESPACE             PINS        PINHITS     RELOADS  INVALIDATIONS
------------------ ------------ ------------ --------- -------------
APP CONTEXT                   2            1         0             0
BODY                   28468616     28458242      1818            46
CLUSTER                  114902       114198         0             0
DBINSTANCE                    0            0         0             0
DBLINK                        0            0         0             0
DIRECTORY                   371           81         0             0
EDITION                10577860     10577856         0             0
INDEX                    403451       315690     11194             0
JAVA DATA                  3161         1876         0             0
JAVA RESOURCE              3413         2124         0             0
JAVA SOURCE                3413         2121         0             0
OBJECT ID                     0            0         0             0
PIPE                       2940         2936         0             0
QUEUE                    755296       754776       218             0
RULE                        732          365        82             0
RULESET                   18358        17693        26             0
SCHEMA                        0            0         0             0
SQL AREA             1252576540   1223849814   1430167        926624
SUBPARTITION              50182        49948         6             0
TABLE/PROCEDURE       162038767    161228629    429539           263
TEMPORARY INDEX           25602            0      7514             0
TEMPORARY TABLE           70046            0     55020             0
TRIGGER                12753767     12747803      1146             0
XDB CONFIG                    1            0         0             0
XML SCHEMA                   27            6         4             0

25 rows selected

SQL>

PINS: the number of times objects in the namespace were requested.

PINHITS: the number of those requests satisfied by an object already present in the library cache.

RELOADS: the number of times an object had to be reloaded from disk.

INVALIDATIONS: the number of times objects in the namespace were marked invalid because a dependent object was modified.

The library cache hit ratio is calculated as sum(pinhits) / sum(pins). For example:

SQL> SELECT SUM(t1.pinhits) / SUM(t1.pins) "Lib Ratio" FROM v$librarycache t1;

 Lib Ratio
----------
0.97973692

SQL>

You can also query the dynamic performance view v$sgastat to find the amount of free memory in the shared pool. For example:

SQL> SELECT t1.* FROM v$sgastat t1
  2   WHERE t1.name = 'free memory' AND t1.pool = 'shared pool';

These shared pool statistics are also summarized in an AWR report.

3.2 data dictionary cache hit rate

Generally, if there is sufficient library cache space in the shared pool, the data dictionary cache will be sufficient as well. Dictionary cache misses are unavoidable in some situations: when a database instance has just started, the data dictionary cache contains no data, so any SQL statement causes misses. As more dictionary data is cached, misses become rarer; eventually a stable hit rate is reached and the data dictionary cache miss rate is very low.

You can query the dynamic performance view v$rowcache (select * from v$rowcache;) for statistics on the different types of data dictionary entries, reflecting data dictionary usage since the database was last started.

SQL> SELECT t1.parameter, SUM(t1.gets), SUM(t1.getmisses),
  2         100 * SUM(t1.gets - t1.getmisses) / SUM(t1.gets) pct_succ_gets,
  3         SUM(t1.modifications) updates
  4    FROM v$rowcache t1 WHERE t1.gets != 0 GROUP BY t1.parameter;

PARAMETER              SUM(T1.GETS) SUM(T1.GETMISSES) PCT_SUCC_GETS    UPDATES
---------------------- ------------ ----------------- ------------- ----------
dc_constraints                39950             16050 59.8247809762      39942
qmc_app_cache_entries             1                 1             0          0
sch_lj_oids                   33696              1452 95.6908831908          0
dc_tablespaces            994784326                74 99.9999925612          2
dc_awr_control               271427                 3 99.9988947304       6322
dc_object_grants           41203543              6097 99.9852027288          0
dc_histogram_data         649621988           1065618 99.8359633725
dc_rollback_segments       25496538               235 99.9990783062
dc_sequences                 156506              1536 99.0185679782     156506
sch_lj_objs                    4018               796 80.1891488302          0
dc_segments               130394426            139507 99.8930115310      77938
dc_objects                225599535            310991 99.8621490952      96415
dc_histogram_defs         217299674           1852549 99.1474681181     646908
dc_table_scns                  4801              4801             0          0
dc_users                 1300331635             10744 99.9991737492        789
qmtmrcin_cache_entries            1                 1             0          0
outstanding_alerts           413246              2893 99.2999327277       6092
dc_files                      36345                49 99.8651809052          6
dc_global_oids             28777369              6959 99.9758178032        235
dc_profiles                 5282207                 1 99.9999810685          0
global database name       12620366                16 99.9998732207          0
qmtmrctn_cache_entries           13                13             0          0
qmtmrciq_cache_entries            1                 1             0          0
qmtmrctq_cache_entries          586               585 0.17064846416          0
qmrc_cache_entries                7                 7             0          0

25 rows selected

SQL>

PARAMETER: identifies a specific type of data dictionary entry; most values are prefixed with dc_.

GETS: the total number of requests for entries of that type.

GETMISSES: the number of requests that missed the cache and therefore required disk I/O.

MODIFICATIONS: the number of times data of that type was updated in the data dictionary cache.

3.3 adjust the size of the Shared Pool

In general, the library cache hit rate should be above 99% and the data dictionary cache hit rate above 90%; otherwise, the size of the shared pool needs to be increased. The library cache and the data dictionary cache are not sized separately: Oracle distributes shared pool memory between them internally according to its own algorithms, so if the library cache hit rate is high, the data dictionary cache hit rate will generally be high as well.
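When the hit rates stay below those thresholds, the shared pool can be grown online (the size is illustrative; under automatic shared memory management this value acts as a minimum rather than a fixed size):

```sql
-- Check the current shared pool size, then raise it.
SELECT component, current_size / 1024 / 1024 AS size_mb
  FROM v$sga_dynamic_components
 WHERE component = 'shared pool';

ALTER SYSTEM SET SHARED_POOL_SIZE = 512M;
```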

3.4 Redo Log Buffer Adjustment

The redo log buffer stores data modification information: redo records are first written to the redo log buffer, and under certain conditions the LGWR process writes the buffer's contents to the redo log files. A sufficiently large redo log buffer ensures there is room for new redo records and lets LGWR write to the log files efficiently. If the buffer is full and has no space for a new redo record, the session generating it must wait; this is a redo log buffer write failure. Too many such failures indicate the redo log buffer is too small, which hurts database performance.

You can check the write failure rate of the redo log buffer in the following way.

Select t1.name "request", t2.name, t1.value / t2.value "Fail Ratio"
  from v$sysstat t1, v$sysstat t2
 where t1.name = 'redo log space requests' and t2.name = 'redo entries';

The result is as follows:

SQL> SELECT t1.name "request", t2.name, t1.value / t2.value "Fail Ratio"
  2    FROM v$sysstat t1, v$sysstat t2
  3   WHERE t1.name = 'redo log space requests' AND t2.name = 'redo entries';

request                  NAME          Fail Ratio
------------------------ ------------- ----------
redo log space requests  redo entries  0.00355035

SQL>

Generally, the write failure rate of the log buffer should be close to 0. A failure rate above 1% indicates the log buffer is too small and LOG_BUFFER should be increased. Here the failure rate is about 0.0036 (0.36%), well below the 1% threshold, so the log buffer is adequately sized.
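If the failure rate did exceed 1%, LOG_BUFFER could be raised; it is a static parameter, so the change goes to the spfile and takes effect after a restart (the size shown is illustrative):

```sql
-- LOG_BUFFER cannot be changed online.
ALTER SYSTEM SET LOG_BUFFER = 16M SCOPE = SPFILE;
-- Then restart the instance: SHUTDOWN IMMEDIATE; STARTUP;
```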

4. PGA Adjustment

The PGA is the memory area that stores the private data and control information of a server process; each server process can access only its own PGA. SQL work areas, such as the sort area, are allocated from the PGA.

As a starting point, set the PGA_AGGREGATE_TARGET parameter to about 20% of the SGA size, then run a representative workload, collect statistics on PGA usage in the database, and adjust accordingly.
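A sketch of that starting-point rule, assuming a 2 GB SGA (so a target of roughly 400 MB):

```sql
-- Starting point: PGA_AGGREGATE_TARGET at ~20% of the SGA.
-- It can be changed online.
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 400M;

-- Then watch the workload: over-allocation and the PGA cache hit
-- percentage are reported in v$pgastat.
SELECT name, value
  FROM v$pgastat
 WHERE name IN ('total PGA allocated',
                'over allocation count',
                'cache hit percentage');
```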

In Oracle databases, sorting can take place either in the sort area of the PGA or in temporary segments of the temporary tablespace. Because temporary segments require disk I/O, sorting in them is much slower, so as much sorting as possible should be done in the in-memory sort area.

From the number of in-memory sorts and the number of disk sorts, you can calculate the sort area hit ratio:

Select memory1.name "memory sort", disk1.name "disk sort",
       (1 - disk1.value / memory1.value) * 100 "Hit Ratio"
  from v$sysstat memory1, v$sysstat disk1
 where memory1.name = 'sorts (memory)' and disk1.name = 'sorts (disk)';

The result is as follows:

SQL> SELECT memory1.name "memory sort", disk1.name "disk sort",
  2         (1 - disk1.value / memory1.value) * 100 "Hit Ratio"
  3    FROM v$sysstat memory1, v$sysstat disk1
  4   WHERE memory1.name = 'sorts (memory)' AND disk1.name = 'sorts (disk)';

memory sort     disk sort      Hit Ratio
--------------- ------------- ----------
sorts (memory)  sorts (disk)  99.9999420

SQL>

Under normal circumstances, the sort area hit ratio should be above 95%; that is, the vast majority of sorts should be performed in memory. Otherwise, increase the size of the sort area by raising SORT_AREA_SIZE (or PGA_AGGREGATE_TARGET, when automatic PGA management is in use). Here the hit ratio is 99.99%, so no adjustment is needed.

5. Database Fragmentation

5.1 Tablespace Fragmentation

View the size of each tablespace (MB):

Select tablespace_name, Sum(bytes)/1024/1024 From dba_segments Group By tablespace_name;

View the free space in each tablespace (MB, assuming an 8 KB block size):

Select tablespace_name, sum(blocks * 8192)/1024/1024 free_space_m From dba_free_space group by tablespace_name;

Tablespace fragmentation is caused by the creation, extension, and deletion of segments. Two approaches can be used to deal with it.

1. Use the ALTER TABLESPACE ... COALESCE command to merge adjacent free extents:

SQL> alter tablespace PLCRM coalesce;

Tablespace altered.

SQL>

2. Export the data with the export utility, delete the table data with TRUNCATE, and then reload the data with import.

5.2 Table Free Space

Physical space allocated to each table (MB):

Select segment_name, Sum(bytes)/1024/1024 From user_extents Group By segment_name;

Space actually used (estimate):

select num_rows * avg_row_len from user_tables where table_name = 'EMP';

Reclaim the table's unused space above the high-water mark:

alter table tablename deallocate unused;

5.3 index fragmentation

(1) Recreate the index (drop it and create it again).

(2) Rebuild it online: alter index skate.idx_test rebuild nologging parallel 4 online;

(3) If the leaf blocks are half empty or nearly empty, run ALTER INDEX ... COALESCE to merge them.

6. Disk I/O Optimization and Adjustment

Distribute files and objects according to the following rules:

(1) Spread the database's physical files, including data files, log files, and control files, across different disks as much as possible. This avoids disk contention between them and also balances disk load.

(2) Distribute the data files of a single tablespace evenly across different disks to balance load between disks.

(3) Place redo log files on different disks as much as possible to reduce contention between log files.

(4) Put tables and their indexes in different tablespaces and store table data and index data on different disks, to reduce contention between data files and index files.

(5) Create separate tablespaces for different applications and place their data files on different disks, to reduce disk contention between applications.

(6) Do not place application objects in the SYSTEM tablespace, to avoid disk contention between the data dictionary and application data.

(7) Create undo tablespaces for non-system rollback segments so that disk contention does not delay transaction completion.

(8) Use locally managed tablespaces with automatic segment space management wherever possible, to avoid fragmentation and reduce row chaining and row migration.

(9) Based on table characteristics and data volume, use partitioned tables, partitioned indexes, index-organized tables, clusters, and similar structures to distribute data sensibly across data files and improve system I/O performance.
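Rule (8) above corresponds to a locally managed, auto-allocated tablespace with automatic segment space management; a sketch (the tablespace name and file path are hypothetical):

```sql
-- Locally managed extents with system-determined sizes, plus automatic
-- segment space management, minimize fragmentation and row migration.
CREATE TABLESPACE app_data
  DATAFILE '/u02/oradata/orcl/app_data01.dbf' SIZE 500M AUTOEXTEND ON
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;
```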
