Oracle Database Optimization Solutions and Practices


Here we introduce Oracle database optimization solutions and practices. Different environments call for different tuning procedures and will show different results, so please adapt these suggestions to your own situation.

Contents:
I. Preface
II. Overview of Oracle database optimization
  1. Optimization of memory and other parameter configurations
  2. Optimization to reduce physical reads/writes
  3. Optimization of SQL statements for batch repeated operations and large-table operations
III. Oracle database optimization scheme
  1. Memory and other Oracle system parameter configuration
  2. Use of indexes
  3. Table partitioning
  4. Procedure optimization
  5. Other transformations
  6. Maintenance job plan
IV. Comparison before and after Oracle database optimization
  1. Execution performance of batch repeated SQL statements
  2. Execution performance of single, uncommon operations
V. Reference
  1. Common optimization tools
  2. References

I. Preface

Since the project went live, the Oracle database used in it has been running for some time. The volume of data stored online and processed by the business is gradually increasing, a gap has opened between requirements and actual running performance, and some optimization adjustments are needed. Based on my practical maintenance experience, this article proposes some optimization methods for the project's data processing, for your reference. It applies to Oracle 9i.

II. Overview of Oracle database optimization

Oracle database optimization differs by application: different applications call for optimization methods with different emphases. According to the application characteristics of our actual project,
we are mainly concerned with the execution time of each transaction. Given the characteristics of the Oracle database, the optimization work divides into initial optimization settings and subsequent fine-tuning. For the initial settings, we can only estimate how the business will run from the hardware conditions and give experience-based values; in general, such settings are not far from meeting the optimization needs. After the initial settings are in place and the business has run for a while, performance data from the actual environment can be collected, the Oracle performance indicators and the transaction operations of interest can be evaluated, and fine-tuning can then be performed.

Oracle optimization is neither a one-off task nor a permanent one. It requires regular maintenance and regular observation so that performance bottlenecks are addressed promptly. Oracle always has a performance bottleneck — a database that is never used or operated on is always the fastest — and after the current bottleneck is resolved another will appear, so before optimizing we need to set an optimization goal. Our goal is to meet our application's performance requirements. Oracle optimization covers a wide range: host performance, memory usage, network transmission, and SQL statement execution. For our network management system, transaction execution speed shows up mainly in two places: 1) execution performance of batch repeated SQL statements (mainly the data merging and summarization done by procedures, and the batch collection and loading of data); 2) execution performance of single, infrequently used statements (mainly irregular GUI operations).
Based on these two features, the optimization methods can be summarized into three important directions: 1) optimization of memory and other parameter configurations — memory optimization is the fastest way to gain performance; 2) optimization to reduce physical reads/writes — a logical I/O in memory takes far less time than a physical I/O; 3) optimization of SQL statements for batch repeated operations and large-table operations — reduce the number of SQL executions and the number of large-table operations. The following focuses on these three most beneficial directions.

1. Optimization of memory and other parameter configurations

For most applications this is certainly the most direct and fastest source of optimization benefit. Allocating a reasonable size to each Oracle memory area lets the database be used effectively, and by observing the hit rates and execution status of database activity in memory we can quickly identify the database's main bottlenecks. Consider the steps a SQL statement goes through from submission to execution:
1) Oracle converts the SQL characters into their ASCII-equivalent numeric codes.
2) The ASCII codes are fed to a hash algorithm to generate a hash value.
3) The server process checks whether this hash value already exists in the shared pool. If it does:
4) the cached version in the shared pool is used for execution. If it does not:
4) the semantic correctness of the statement is checked;
5) object parsing is performed (the names and structures of referenced objects are checked against the data dictionary);
6) the data dictionary is consulted to collect statistics on all objects the statement references;
7) the execution plan is prepared and one plan is selected from those available
(including decisions on the use of stored outlines and materialized views);
8) the data dictionary is checked to verify security privileges on the referenced objects;
9) compiled code (P-code) is generated;
10) the statement executes.

Through reasonable memory allocation and parameter settings, we mainly aim to: 1) reduce the chance of execution reaching step 5, saving SQL parsing time — everything from step 5 onward is a resource-intensive operation; 2) configure memory so that all the work a SQL statement does can be completed in memory — as is well known, reading data from memory is much faster than reading from physical disk, and an in-memory sort is much faster than one spilled to disk; 3) based on the database's memory activity, reduce the response time of each memory area, use each area fully, and reduce contention for memory latches.

2. Optimization to reduce physical reads/writes

No matter how the Oracle database is configured, our network management system processes new data every hour, so physical reads and writes will happen; this cannot be avoided. The general methods to reduce them are: 1) increase the size of the data buffer cache so that the data the database operates on can, as far as possible, be found in memory without physical I/O; 2) avoid unnecessary full table scans by using indexes; 3) physically partition large tables — Oracle's partition recognition works well and narrows the range of data scanned. These three methods are the most obvious ways to improve the database's overall physical I/O performance; they can quickly reduce physical I/O, and the most direct effect is that transaction execution time can drop by an order of magnitude.
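As a rough way to watch the balance of logical versus physical I/O described above, the standard v$sysstat view can be queried. The following is only a sketch of the classic buffer cache hit ratio calculation (statistic names as documented for Oracle 9i), not a complete tuning recipe:

```sql
-- Sketch: buffer cache hit ratio from v$sysstat.
-- hit ratio = 1 - physical reads / (db block gets + consistent gets)
SELECT 1 - phy.value / (cur.value + con.value) AS buffer_cache_hit_ratio
  FROM v$sysstat phy, v$sysstat cur, v$sysstat con
 WHERE phy.name = 'physical reads'
   AND cur.name = 'db block gets'
   AND con.name = 'consistent gets';
```

A ratio that stays well below roughly 0.9 under an OLTP-style load is one sign that the data buffer may be undersized or that avoidable full table scans are occurring.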
Other ways to reduce physical reads/writes — such as materialized views and clusters, or methods that distribute I/O, for example keeping Oracle log files and data files on separate physical disks and separating the physical I/O of hotspot data files — have not shown obvious results in our network management system, and to avoid increasing the complexity of database maintenance they are not recommended here.

3. Optimization of SQL statements for batch repeated operations and large-table operations

Batch repeated SQL usually appears in the insert statements that load data in each period, and in the periodic select, delete, and insert operations that merge and summarize data. We need to pay attention to the following: 1) Reduce unnecessary SQL statements and the number of SQL executions. Every SQL execution consumes system resources and takes time; eliminating unnecessary statements and executions naturally shortens business execution time. This requires redesigning the data processing code according to the business process, and applies mainly to the data merging and summarization performed by procedures. 2) Since each SQL statement is executed many times, try to ensure that its hash value already exists in the shared pool — that is, use dynamic SQL in a way that avoids hard parsing. Oracle parameter settings, dynamic SQL usage, and bind variables can all reduce the number of times SQL statements are parsed. 3) Reduce operations on large tables, and ensure that the same operation is performed only once per transaction on a large table; this matters most where procedures merge and summarize data and where collected data is loaded.

III. Oracle database optimization scheme

1. Memory and other Oracle system parameter configuration
Of the Oracle parameters, static parameters take effect only after the database is restarted, while dynamic parameters require no restart. Oracle 9i can use the spfile feature and modify parameters with: alter system set <parameter name> = <value> scope = both|spfile; you can also edit the pfile directly. The following are suggested initial settings for the parameters that are the focus of Oracle database administration.

Maximum memory (SGA total):
  static parameter sga_max_size = physical memory size minus 1.5 GB
Shared pool:
  dynamic parameter shared_pool_size = 600~800 M
  static parameter shared_pool_reserved_size = 300 M
  dynamic parameter open_cursors = 400~600
  static parameter cursor_space_for_time = TRUE
  static parameter session_cached_cursors = 60~100
  dynamic parameter cursor_sharing = SIMILAR
Data buffer:
  dynamic parameter db_cache_advice = READY
  dynamic parameters db_cache_size and db_keep_cache_size (everything in sga_max_size not allocated to the non-data buffers is given to the data buffer)
Other SGA memory:
  dynamic parameter large_pool_size = 50 M
  static parameter java_pool_size = 100 M
  static parameter log_buffer = 3 M
Other memory:
  dynamic parameter sort_area_size = 3 M
  dynamic parameter pga_aggregate_target = 800 M
  dynamic parameter workarea_size_policy = AUTO
Disk I/O:
  static parameter sql_trace = FALSE
  dynamic parameter timed_statistics = TRUE
  dynamic parameter db_file_multiblock_read_count = 16
  static parameter dbwr_io_slaves = 0
  static parameter db_writer_processes = 3
Undo:
  static parameter undo_management = AUTO
  dynamic parameter undo_retention = 7200

2. Use of indexes

As an initial rule we defined: tables with more than 1000 rows of data require indexes (without distinguishing what proportion of the table's data a transaction touches), and an index should contain no more than four columns.
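As an illustration of these guidelines, a composite index kept within the four-column limit might be created as follows. This is only a sketch; the table, column, and tablespace names are invented for illustration and are not from the project:

```sql
-- Sketch: a composite index within the four-column guideline.
-- Table, column, and tablespace names are hypothetical.
CREATE INDEX idx_perf_cell_time
    ON performance_cell (start_time, cell_id)
    TABLESPACE idx_ts;
```

Placing the most frequently filtered or most selective column first generally lets more queries benefit from the index.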
To check whether an index is used by a SQL statement, obtain its execution plan. In the SQL*Plus tool, run:
set autotrace on
set autotrace traceonly explain
set timing on
Alternatively, use SQL trace: generate a trace file in user_dump_dest and format it with tkprof. Enable and disable the trace with:
alter session set events '10046 trace name context forever, level 12';
alter session set events '10046 trace name context off';
The following query identifies the server process (spid) of the current session so the trace file can be located:
SELECT p.spid, s.username
  FROM v$session s, v$process p
 WHERE s.audsid = USERENV('sessionid') AND s.paddr = p.addr;

3. Table partitioning

In the network management database, the prominent big tables are the cell performance table and the alarm table. The performance table uses range partitions, with start_time as the range partition key. The alarm table uses a composite range-hash partitioning scheme: the range partitions use starttime as the partition key, with hash subpartitions on ALARMNUMBER. Local partition indexes are created.

4. Procedure optimization

1) Cancel the city-level procedures and keep only the upper-level procedure calls, leaving the parameter input method unchanged. 2) Ensure that each big table is queried only once and deleted from only once. 3) Ensure that every single SQL statement has been optimized. 4) Reduce the number of SQL executions.

5. Other transformations

Modify table storage parameters and pre-allocate extents. Modify tablespace storage parameters (use a large block size, for example 32 KB, for the tablespace holding collected data; adjust pctfree, pctused, and pctincrease). Avoid unique indexes and not-null constraints. Create appropriate indexes. Optimize the SQL statements of each module, for example with fixed index hints. Confirm that the delete statement for each kind of historical data has been optimized. Use temporary tables where appropriate.

6. Maintenance job plan
Analyze tables on a schedule (including the analysis method, the partition-table analysis method, and the index analysis method). Reclaim and maintain space (including determining the HWM, reclaiming excess blocks allocated to tables, and merging data block fragments). Maintain indexes (including regular index rebuilds and index usage monitoring). Check the deletion of historical data (whether the retained data meets requirements and whether the deletion method is correct — for example, deleting and committing in batches). Analyze full-database performance and report and resolve problems (for example, use statspack for performance trend analysis, check problematic SQL or transactions, and determine the system's current top-5 wait events). Assign table data to the keep, default, and recycle pools (for example, pin some commonly used configuration tables in memory). Check database parameters (to prevent them from being modified; regularly compare system configuration parameters). Analyze log files (regularly check the log files Oracle generates, and regularly back them up and delete them). Maintain disk space (regularly monitor the space used by Oracle objects).

IV. Comparison before and after Oracle database optimization

1. Execution performance of batch repeated SQL statements (this varies with the number of network elements). The computing performance of the procedures that merge and summarize data is collected periodically through statspack. A statement like the following (reconstructed here from a partially garbled source) can examine the execution of the procedure of interest:

SELECT TO_CHAR(sn.snap_time, 'yyyy-mm-dd hh24:mi:ss') AS snap_time,
       s.disk_reads, s.buffer_gets, s.rows_processed, s.executions,
       s.elapsed_time / 1000000 AS elapsed_seconds
  FROM (SELECT hash_value, sql_text, address, MAX(snap_id) AS last_snap_id
          FROM STATS$SQL_SUMMARY
         WHERE sql_text LIKE '%&sqltext_key%'
         GROUP BY hash_value, sql_text, address) t,
       (SELECT address, hash_value, snap_id, sql_text, disk_reads,
               executions, buffer_gets, rows_processed, elapsed_time
          FROM STATS$SQL_SUMMARY) s,
       STATS$SNAPSHOT sn
 WHERE s.hash_value = t.hash_value
   AND s.address = t.address
   AND s.snap_id = t.last_snap_id
   AND sn.snap_id = s.snap_id;

For example, running this SQL as the perfstat user and entering "to_comp", we can observe the execution time of the to_comp stored procedure. We found that its execution time fell from thousands of seconds before optimization and finally stabilized at a few dozen seconds afterward. (Note: to_comp merges and summarizes the procedure data of all network elements in one pass.) Measuring the procedure used for cell analysis data the same way, its execution also took several thousand seconds before optimization and stabilized at a few dozen seconds afterward. Batch data collection and loading uses bcp, and was reduced from about 15 minutes to about 4 minutes.

2. Execution performance of single, infrequently used statements: performance data queries and alarm data queries on the GUI now respond very quickly, and long waits for a response have almost disappeared.

V. Reference

Commonly used optimization tools: statspack, SQL*Plus, TOAD.

Author: chen11-1
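As an appendix, the range-hash partitioning described in section III.3 might be declared roughly as below. This is a sketch only: the table, column values, and partition names are hypothetical and are not the project's actual DDL.

```sql
-- Sketch: range partitioning by time with hash subpartitions on the
-- alarm number, as in section III.3. All names are hypothetical.
CREATE TABLE alarm_log (
    alarmnumber  NUMBER,
    starttime    DATE,
    alarmtext    VARCHAR2(400)
)
PARTITION BY RANGE (starttime)
SUBPARTITION BY HASH (alarmnumber) SUBPARTITIONS 4
(
    PARTITION p_m01 VALUES LESS THAN (TO_DATE('2003-02-01', 'yyyy-mm-dd')),
    PARTITION p_m02 VALUES LESS THAN (TO_DATE('2003-03-01', 'yyyy-mm-dd')),
    PARTITION p_max VALUES LESS THAN (MAXVALUE)
);

-- A local partition index, so each partition carries its own index segment.
CREATE INDEX idx_alarm_start ON alarm_log (starttime) LOCAL;
```

With a local index, dropping or truncating an old time partition (for historical-data deletion) maintains only that partition's index segment rather than rebuilding a global index.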
