How to adjust the performance of Oracle Application Systems

Principles and methods for adjusting the performance of Oracle Application Systems

Abstract: This article discusses the features and principles of the Oracle server and the client/server architecture, and describes principles and methods for improving and tuning the performance of Oracle application systems.

Keywords: Oracle; client/server; System Global Area (SGA); network I/O; rollback segment.

Oracle databases are widely used in many fields, especially in client/server mode. However, application developers often find that the performance of the entire system drops significantly as the data volume grows. Solving this problem involves three areas: the database server, network I/O, and the application program. Only by tuning all three can the system make full use of Oracle's capabilities and achieve good overall performance.

1. Adjust the performance of the Database Server

The Oracle database server is the core of the entire system. Its performance directly affects the performance of the entire system. To adjust the performance of the Oracle database server, we mainly consider the following aspects:

1.1 adjust the operating system to fit the Oracle Database Server

The Oracle Database Server depends largely on the operating system of the running server. If the operating system cannot provide the best performance, the Oracle database server cannot achieve its expected performance no matter how it is adjusted.

1.1.1 plan system resources for the Oracle Database Server

Based on the resources available on the existing computer, the principle for allocating resources to the Oracle server is to give it as many of the machine's resources as possible. In a client/server configuration in particular, the server should be dedicated as far as possible to running the Oracle service.

1.1.2 adjust the memory configuration in the Computer System

Most operating systems use virtual memory to simulate more memory than the computer physically has; this virtual memory is actually disk space on the hard disk. When physical memory cannot meet the demands of the applications, the operating system swaps pages between memory and this disk space, which causes a large number of disk I/O operations and reduces the performance of the entire server. To avoid excessive use of virtual memory, increase the computer's physical memory.

1.1.3 set the operating system process priority for the Oracle Database Server

Do not adjust the operating system priority of Oracle processes. In an Oracle database system, all background and foreground database server processes perform equally important tasks and require the same priority. Therefore, leave all database server processes at the default priority assigned during installation.

1.2 Adjust Memory Allocation

The Oracle database server maintains three basic memory caches for three different types of data: the library cache, the dictionary cache, and the database buffer cache. The library cache and the dictionary cache make up the shared pool; the shared pool plus the buffer cache make up the System Global Area (SGA). The SGA is a shared memory region used for fast access to database data. If the SGA itself has to be paged in and out frequently, it cannot serve that purpose, so the SGA should reside in physical memory rather than virtual memory. Memory tuning mainly means adjusting the size of the structures that make up the SGA to improve system performance. Because the memory requirements of the Oracle database server are closely tied to the application, the memory structures should be adjusted before disk I/O is tuned.
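
As a rough check, and assuming the usual dynamic performance views are accessible, the current division of the SGA can be examined with a query such as the following (values are reported in bytes):

-- Show how the SGA is currently divided: fixed size, variable size
-- (including the shared pool), database buffers, and redo buffers.
SELECT name, value FROM v$sga;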

1.2.1 Library cache adjustment

The library cache contains shared and private SQL and PL/SQL areas. Its hit rate is evaluated by comparing pins with reloads. To tune the library cache, you first need to understand its activity. The activity statistics are kept in the dynamic performance view V$LIBRARYCACHE, which can be queried to determine how to adjust it:

SELECT SUM(pins), SUM(reloads) FROM v$librarycache;

The pins column gives the total number of times SQL statements, PL/SQL blocks, and object definitions were accessed; the reloads column gives the number of times a library cache miss forced an SQL statement or PL/SQL block to be implicitly reparsed or an object definition to be reloaded. If SUM(reloads)/SUM(pins) is close to 0 (generally below 1%), the library cache hit rate is adequate. If the ratio is larger, adjust the initialization parameter SHARED_POOL_SIZE to allocate more memory to the shared pool.
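
The ratio itself can be computed in one step with a convenience query over the same view:

-- Library cache reload ratio: values near 0 (well under 1%) suggest
-- the shared pool is large enough.
SELECT SUM(reloads) / SUM(pins) AS reload_ratio FROM v$librarycache;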

1.2.2 Data Dictionary Buffer Adjustment

The data dictionary cache contains information about the structure of the database, its users, and its schema objects. Its hit rate has a significant impact on system performance. Usage of the data dictionary cache is recorded in the dynamic performance view V$ROWCACHE; query it to understand its activity and decide how to adjust it:

SELECT SUM(gets), SUM(getmisses) FROM v$rowcache;

The gets column is the number of requests for the corresponding item; the getmisses column is the number of requests that missed the cache. For a frequently accessed data dictionary cache, SUM(getmisses)/SUM(gets) should be less than 10% to 15%. If the percentage is higher, consider enlarging the data dictionary cache, that is, adjust the initialization parameter SHARED_POOL_SIZE to allocate more memory to the shared pool.
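
Again, the miss ratio can be computed directly:

-- Data dictionary cache miss ratio: should stay below roughly 10%-15%.
SELECT SUM(getmisses) / SUM(gets) AS miss_ratio FROM v$rowcache;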

1.2.3 Buffer cache adjustment

All data accessed by user processes goes through the buffer cache, so its hit rate is crucial to performance. Buffer cache usage is recorded in the dynamic performance view V$SYSSTAT; query it to understand its activity and decide how to adjust it:

SELECT name, value FROM v$sysstat
 WHERE name IN ('db block gets', 'consistent gets', 'physical reads');

The values of db block gets and consistent gets together give the total number of requests satisfied from the buffer cache; physical reads is the number of times data had to be read from disk. The likelihood that a request is satisfied from the buffer cache is called the buffer cache hit ratio, given by:

Hit ratio = 1 - (physical reads / (db block gets + consistent gets))
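
For convenience, the same ratio can be computed in a single statement against V$SYSSTAT, for example:

-- Buffer cache hit ratio computed from the three V$SYSSTAT statistics.
SELECT 1 - (phy.value / (db.value + con.value)) AS hit_ratio
  FROM v$sysstat phy, v$sysstat db, v$sysstat con
 WHERE phy.name = 'physical reads'
   AND db.name  = 'db block gets'
   AND con.name = 'consistent gets';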

If the hit ratio is below 60% to 70%, increase the value of the DB_BLOCK_BUFFERS parameter. DB_BLOCK_BUFFERS sets the number of data blocks allocated to the buffer cache, and therefore controls the amount of memory the cache uses. The total size of the buffer cache in bytes = DB_BLOCK_BUFFERS * DB_BLOCK_SIZE, where DB_BLOCK_SIZE is the data block size in bytes; its value can be obtained from the V$PARAMETER view:

SELECT name, value FROM v$parameter WHERE name = 'db_block_size';

After modifying these initialization parameters, the database must be shut down and restarted for the new settings to take effect.
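
As a minimal sketch only (the values below are illustrative, not recommendations), the parameters discussed above are set in the initialization parameter file, typically initSID.ora, and are read at instance startup:

# Shared pool and buffer cache sizing (example values only)
shared_pool_size = 52428800     # 50 MB for the shared pool
db_block_buffers = 8192         # number of blocks in the buffer cache
# db_block_size is fixed when the database is created and cannot be changed here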

1.3 adjust disk I/O

The speed of disk I/O has an important impact on the entire system, and solving disk I/O problems can significantly improve performance. The main factors affecting disk I/O performance are disk contention, excessive I/O, and the allocation and management of data block space.

When creating new files for the Oracle database server, whether data files for tablespaces or redo log files for recording transactions, consider carefully the disk resources available on the server. If the server has multiple disks, distribute the files across them to reduce contention between data files and redo log files; this effectively improves server performance. Different application systems with their own data sets should be given their own tablespaces, and the data files of those tablespaces should be placed on different disks wherever possible; physically separating the tablespaces in this way eliminates the possibility of two application systems competing for the same disk. Data files and redo log files should also be stored on different disks, so that disk access for transaction processing does not interfere with disk access for writing the redo log. If additional disks are available, placing the two members of a redo log group on different disk drives eliminates contention between log files. Finally, table data and index data for an application should be stored in different tablespaces, and those tablespaces should be placed on different disks as far as possible, which eliminates disk contention between table data and index data.
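
The sketch below illustrates this kind of layout; the tablespace names, file paths, and sizes are hypothetical and assume four separate physical disks:

-- Data and index tablespaces on different disks (paths are hypothetical)
CREATE TABLESPACE app_data
  DATAFILE '/disk1/oradata/app_data01.dbf' SIZE 500M;

CREATE TABLESPACE app_index
  DATAFILE '/disk2/oradata/app_index01.dbf' SIZE 300M;

-- Members of one redo log group placed on two further drives
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/disk3/oradata/redo04a.log', '/disk4/oradata/redo04b.log') SIZE 10M;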

1.4 adjust the rollback segment of the Database Server

A rollback segment is a storage area in which the database saves the original values of rows that a transaction updates or deletes. If the changes made by a transaction must be rolled back, the database reads the pre-change data from the rollback segment and restores the affected rows to their original state. Rollback segments control the database's ability to process transactions and therefore play a key role in its performance. No matter how well the rest of the database is designed, poorly designed rollback segments will seriously degrade system performance. The principles for creating and adjusting rollback segments are as follows.

1.4.1 separate rollback segments

Separating rollback segments means creating one or more dedicated tablespaces for them, so that rollback segments are kept apart from the data dictionary, user data, and indexes. Because writes to rollback segments occur in parallel with writes to data and indexes, separating them reduces I/O contention. If rollback segments are mixed in with data, a tablespace cannot be taken offline or dropped until every rollback segment in it has been taken offline; and once the tablespace becomes unavailable, all rollback segments in it become unusable, wasting the disk space allocated to them. Independent rollback segments therefore make the database easier to manage. In addition, rollback segments shrink frequently, which tends to fragment the free space of their tablespace; separating them reduces fragmentation in the rest of the database.

1.4.2 create rollback segments of different sizes

Online transaction processing typically modifies small amounts of data frequently, and for this it is advantageous to create many small rollback segments. The undo entries of a single transaction can be written to only one rollback segment, and that segment must be large enough to hold the complete transaction, so larger transactions require larger rollback segments, and long-running batch jobs that generate large amounts of undo require an extra-large rollback segment. Based on this, the Oracle database server can be given three groups of rollback segments for these three classes of transactions: small, large, and extra-large, placed in three different tablespaces. Within each group the segments are the same size, chosen to meet the largest transaction expected in that class.
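
A hedged sketch of such a setup follows; the segment and tablespace names and the storage values are hypothetical:

-- Small rollback segments for short online transactions
CREATE ROLLBACK SEGMENT rbs_small
  TABLESPACE rbs_small_ts
  STORAGE (INITIAL 64K NEXT 64K MINEXTENTS 20 OPTIMAL 2M);

-- Large rollback segments for long-running or batch transactions
CREATE ROLLBACK SEGMENT rbs_large
  TABLESPACE rbs_large_ts
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20 OPTIMAL 30M);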

1.4.3 create a proper number of rollback segments

In general, the number of rollback segments should be related to the number of concurrent transactions. The following table gives reference values for the number of rollback segments based on the number of concurrent transactions:

Concurrent transactions (n)    Recommended rollback segments
n < 16                         4
16 ≤ n < 32                    8
n ≥ 32                         n/4

2. Adjust the network I/O in Client/Server mode

In the client/server environment, application processing is distributed between client applications and the database server, and network I/O between the client and the server is the bottleneck for the performance of the entire system. The less network I/O a client application generates, the better the application and the whole system perform. The most important principle for reducing network I/O is to concentrate application logic on the database server.

2.1 Use Oracle integrity constraints

When creating tables for an application, impose appropriate integrity constraints on data with special requirements, so that the database itself, rather than the application, guarantees that the data meets those conditions. Constraint checking on the database server is implemented by a mechanism below the SQL statement level; it is independent of the client and runs entirely on the server, so no SQL statements need to travel between client and server, which effectively reduces the network I/O load.
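
For illustration (the table, column, and constraint names are hypothetical, and a dept table is assumed to exist), declaring the rules when the table is created lets the server enforce them without any extra SQL from the client:

-- Constraints checked entirely on the server, not in the application
CREATE TABLE emp (
  empno  NUMBER(4)     CONSTRAINT pk_emp PRIMARY KEY,
  ename  VARCHAR2(20)  NOT NULL,
  sal    NUMBER(7,2)   CONSTRAINT ck_sal CHECK (sal > 0),
  deptno NUMBER(2)     CONSTRAINT fk_emp_dept REFERENCES dept (deptno)
);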

2.2 use database Triggers

Integrity constraints can only enforce relatively simple data rules and cannot express complex transaction processing rules. In such cases it is best not to implement complex control logic in the application; instead, use database triggers to enforce the complex rules. Triggers let the database itself, rather than the application, keep the data consistent with complex business rules; they are easy to create and manage, and they avoid a large amount of network I/O.

For example, suppose rows in table A whose lsbs column is set to 'T' (marking them as history records) are to be moved into table B.

Implementation in the application:

UPDATE a SET lsbs = 'T';
INSERT INTO b SELECT * FROM a WHERE lsbs = 'T';
DELETE FROM a WHERE lsbs = 'T';

Using a database trigger, the application only issues the UPDATE and the transfer happens on the server:

CREATE OR REPLACE TRIGGER delete1
AFTER UPDATE OF lsbs ON a
BEGIN
  INSERT INTO b SELECT * FROM a WHERE lsbs = 'T';
  DELETE FROM a WHERE lsbs = 'T';
END delete1;

In the application approach, the data requested by every SQL command must travel across the network between the client and the server. With a database trigger, the SQL runs entirely on the server and no data needs to be transmitted over the network. When the volume of data is large and many users operate at the same time, implementing this control in the application inevitably increases the network I/O load and degrades the performance of the whole system, whereas a database trigger avoids the problem completely.

2.3 use stored procedures, stored functions, and packages

Oracle stored procedures and stored functions are named collections of PL/SQL statements that perform a particular task and are stored on the server. A package is a way of organizing related procedures and functions into a single database program unit. Compared with coding the same logic in the application, the SQL of a procedure or function is stored on the server side. With stored procedures and functions, an application does not have to send the many SQL statements that an operation requires across the network; it simply calls the procedure or function, so only the call and its results travel over the network, which saves a great deal of network I/O.

For example, suppose base tables A and B are both defined as (name CHAR(20), detail CHAR(10)) and table A contains one million records. The application must retrieve from table A the records whose detail column satisfies a given condition and insert them into table B.

DECLARE
  CURSOR cursor1 IS SELECT * FROM a;
  res1 CHAR(4) := 'abcd';
BEGIN
  FOR poin IN cursor1 LOOP
    FOR con1 IN 1 .. 7 LOOP
      IF SUBSTR(poin.detail, con1, 4) = res1 THEN
        INSERT INTO b VALUES (poin.name, poin.detail);
      END IF;
    END LOOP;
  END LOOP;
  COMMIT;
END;

When a button trigger written directly in PL/SQL in a Developer/2000 form is rewritten as an Oracle stored procedure and the form simply calls that procedure, the latter shows a marked performance improvement.
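
As a sketch only, the anonymous block above could be wrapped in a stored procedure such as the following (the name copy_matching_rows is hypothetical); the form or application then issues a single call instead of shipping the whole block and its row traffic across the network:

CREATE OR REPLACE PROCEDURE copy_matching_rows IS
  res1 CHAR(4) := 'abcd';
BEGIN
  -- All row processing happens on the server
  FOR poin IN (SELECT * FROM a) LOOP
    FOR con1 IN 1 .. 7 LOOP
      IF SUBSTR(poin.detail, con1, 4) = res1 THEN
        INSERT INTO b VALUES (poin.name, poin.detail);
      END IF;
    END LOOP;
  END LOOP;
  COMMIT;
END copy_matching_rows;
/

-- From SQL*Plus, the only statement that crosses the network is the call:
EXECUTE copy_matching_rows;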

When choosing among these three methods, consider integrity constraints first. Between database triggers and stored procedures: if every program that accesses the database must automatically enforce a rule or check, use a database trigger; if the rule or check applies only to a few programs, create a procedure for those programs to call.

3. Application Adjustment

3.1 SQL statement Optimization

The execution speed of an SQL statement is affected by many factors, the main ones being the driving table, the order in which operations are executed, and the use of indexes. These factors can be influenced indirectly in a number of ways to obtain the best execution speed. Here we mainly discuss the principles to follow when joining multiple tables:

(1) Columns used in join clauses should be indexed, and the WHERE clause should be written so that indexes can be used rather than suppressed (see the sketch after this list).

(2) The join should be driven by the table that returns the smaller number of rows.

(3) If tables A and B are joined and table A is much larger than table B, it is recommended to drive the join from the large table A.

(4) If the WHERE clause contains a highly selective condition such as no = 20, place the most selective part at the end of the expression.

(5) If only one of the two tables has an index, the table without the index is usually used as the driving table. For example, if the no column of table A is indexed and the no column of table B is not, table B should be the driving table and table A the driven table.

(6) If the join columns and the other selection columns in the WHERE clause all have indexes, the effectiveness and selectivity of each index depend on the actual data in the tables. To choose an optimal path, consider which columns in the clause can use an index, which indexes are unique, and how many rows the query must examine.
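
A small illustration of principle (1) follows; it assumes an index exists on column no of table A:

-- The index on NO can be used: the indexed column is compared directly.
SELECT * FROM a WHERE no = 20;

-- The index on NO is suppressed: a function is applied to the indexed column.
SELECT * FROM a WHERE TO_CHAR(no) = '20';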

3.2 Create and use views and Indexes

A view can restrict or hide part of the columns or rows of a base table, and it can present a complex query involving multiple tables as a single object, making application development simpler and faster. Indexes can improve query performance, reduce disk I/O, optimize access to data tables, and speed up SQL execution. However, creating an index does not improve performance in every case. The following principles should be observed when deciding whether to create an index: the table is frequently queried on the indexed columns; the table is not frequently updated, inserted into, or deleted from; and queries typically return no more than about 2% to 4% of the rows in the table.
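
For illustration (the object names are hypothetical and reuse the emp/dept tables sketched earlier), a view hiding a two-table join and an index on a column that is queried often but rarely modified might look like this:

-- A view that hides the join from the application
CREATE OR REPLACE VIEW emp_dept_v AS
  SELECT e.empno, e.ename, d.dname
    FROM emp e, dept d
   WHERE e.deptno = d.deptno;

-- An index on a frequently queried, rarely updated column
CREATE INDEX idx_emp_ename ON emp (ename);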

3.3 Use the Oracle array Interface

When a client application inserts rows or fetches rows from the server, it should not send one network packet per row but should process rows in arrays: buffer the rows to be inserted or retrieved in an array and transmit them over the network in a small number of packets. For example, if a SELECT statement returns 2,000 rows averaging 40 bytes each, the network packet size is 4 KB, and the array size parameter (arraysize) is set to 20, the server sends about 100 packets to the client. Simply setting arraysize to 2,000 lets the same operation be completed in about 20 packets, which reduces network traffic and improves the performance of all applications.
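
How the array size is set depends on the tool or interface; in SQL*Plus, for instance, it can be raised from its default of 15 like this:

-- Each fetch now carries up to 100 rows instead of 15
SET ARRAYSIZE 100
SELECT name, detail FROM a;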

4. Summary

When developing applications, we tuned our systems according to the methods and principles above and obtained satisfactory results. It should be noted, however, that the client, the network, and the server must be tuned together to produce the best performance, so each system should also be analyzed and adjusted according to its own specific circumstances.
