How to optimize Oracle databases in the post-9i era

Author: Piner (for reprinting, please indicate the source of this article: Ixdba.com)

Permanent link: http://www.ixdba.com/html/y2008/m12/297-oracle-9i-after-performance.html

If you ask an Oracle DBA what is the hardest part of Oracle to learn, the answer will most likely be optimization. Indeed, Oracle optimization is a deep subject: it requires not only a thorough understanding of the Oracle architecture and its operating mechanisms, but also solid knowledge of the hardware, the OS, and the rest of the surrounding environment. Advanced tuners also bring rich programming and application experience, so they can optimize Oracle together with the application.

To make Oracle tuning easier, Oracle has shipped Statspack since 8.1.7, and it became a widely used optimization tool among DBAs. Throughout my career, Statspack has been my main helper; its intuitive statistics have been a great convenience. However, as databases grow and environments become more complex, Statspack begins to show its shortcomings. For example, in an environment with dozens of databases, reading a Statspack report for every single database wears the DBA out.
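
For reference, this is roughly how Statspack is typically installed and used from SQL*Plus; the exact paths and the PERFSTAT setup details may differ in your environment, so treat this only as a sketch:

    -- Install Statspack (run as SYSDBA; creates the PERFSTAT schema)
    SQL> @?/rdbms/admin/spcreate.sql

    -- Take a snapshot manually while connected as PERFSTAT
    SQL> EXEC statspack.snap;

    -- Generate a report between two snapshots
    SQL> @?/rdbms/admin/spreport.sql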

If Statspack was the most powerful Oracle optimization tool of the 8i-9i era, then in the post-9i era (10g and 11g), as systems grow and environments become more complex, how should we tune? What should the new monitoring tools look like? They should have the following features:

Centralized, graphical management, monitoring, and optimization

Automatic, real-time alerting and handling

Comparison and analysis of historical data

Driven by these new demands, various graphical tools for centralized management and monitoring emerged, such as Veritas i3 and Quest Central. Many companies (including our team) even developed their own centralized graphical management and monitoring interfaces to work alongside Statspack. Oracle, of course, also saw the looming problems and the future market, and Oracle Enterprise Manager (OEM) is the product of that.

If the 9i OEM was still of little practical value, the OEM that ships from 10g onwards is much better. My earlier blog post, "Oracle 11g EM - return to the graphical interface", also describes the powerful centralized management features of the current OEM plus Grid Control, which greatly lower the management threshold and the management cost of Oracle databases.

In fact, the OEM that ships with Oracle 10g not only provides convenient centralized management; its monitoring and optimization capabilities should not be underestimated either. Through OEM's graphical interface we can easily see many database performance indicators, diagnose database problems and get recommendations, and even tune SQL statements that have performance problems. For DBAs, therefore, OEM becomes a very important performance diagnostic tool from 10g onwards: a DBA no longer needs particularly advanced skills, and with a few mouse clicks can complete an optimization task that would have been difficult before 10g. With Grid Control, DBAs can easily manage and optimize a large number of Oracle databases.

Here I am not going to introduce OEM's powerful centralized management features; instead, through a small case I will show how to track down a historical problem and optimize SQL with OEM in the post-9i era. Before formally analyzing the case, let's become familiar with the following terms:

AWR: The Automatic Workload Repository. Starting with Oracle 10g, AWR replaces the earlier Statspack. Compared with Statspack, AWR requires no configuration or installation: it is integrated and installed by default, and no scripts are needed to collect or purge information on a schedule. By default, Oracle collects AWR information every hour and retains the AWR data of the last week.
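
As an illustration (the values below are only examples), the snapshot interval and retention can be adjusted with the DBMS_WORKLOAD_REPOSITORY package, and an AWR report is generated with the awrrpt.sql script, much like spreport.sql before it:

    -- Snapshot every 30 minutes and keep 14 days of history
    -- (both parameters are expressed in minutes; 20160 = 14*24*60)
    SQL> EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 20160, interval => 30);

    -- Generate an AWR report between two snapshots
    SQL> @?/rdbms/admin/awrrpt.sql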

ADDM: The Automatic Database Diagnostic Monitor. AWR periodically collects detailed performance-related metrics from the database; after each snapshot, ADDM is invoked to analyze the data between snapshots in depth and then recommend any actions that are needed.
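
For reference, an ADDM report for a pair of snapshots can be produced from SQL*Plus, and the findings of past runs can also be browsed through the advisor views; the task-name filter below assumes the usual automatic naming and may need adjusting:

    -- Interactive ADDM report: prompts for the begin and end snapshot IDs
    SQL> @?/rdbms/admin/addmrpt.sql

    -- Or browse the findings of past ADDM runs directly
    SELECT task_name, type, message
      FROM dba_advisor_findings
     WHERE task_name LIKE 'ADDM%';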

ASH: Active Session History, a complement to ADDM. ASH samples the key performance data of currently active sessions and stores it in ASH buffers in the shared pool, so that detailed history about recent session activity is retained. With ASH you can easily analyze recent SQL and session information. The data in this case is based on ASH.
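
A minimal sketch of the kind of query that sits behind OEM's Top Activity view (the 30-minute window is just an example) ranks recent SQL by the number of ASH samples:

    -- Top SQL by sampled activity over the last 30 minutes
    SELECT sql_id, COUNT(*) AS samples
      FROM v$active_session_history
     WHERE sample_time > SYSTIMESTAMP - INTERVAL '30' MINUTE
       AND sql_id IS NOT NULL
     GROUP BY sql_id
     ORDER BY samples DESC;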

DB time: Another new concept introduced in 10g. It represents all the time a request spends inside the database, including CPU time, I/O time, and non-idle wait time. DB time is not equal to response time; it reflects the total time consumed in the database, and whether the cause is CPU, I/O, or other wait events, there may be room for optimization.

Average active sessions: The average number of active sessions. This is a somewhat tricky concept; it expresses the cumulative activity of sessions over the current statistics period. For a single process it is a percentage (% active), i.e. the ratio of DB time to elapsed time; for multiple processes it is the sum over those processes, so the larger the value, the more active sessions there are, or the busier those sessions are.
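
To make the relationship between the last two terms concrete, here is a small sketch: average active sessions is roughly the DB time consumed in a period divided by the elapsed time of that period, and both figures can be seen directly in the dynamic performance views (the metric and statistic names below are those used from 10g onwards):

    -- Total DB time accumulated since instance startup (microseconds)
    SELECT stat_name, value
      FROM v$sys_time_model
     WHERE stat_name = 'DB time';

    -- Average active sessions as Oracle reports it for the current interval
    SELECT metric_name, value
      FROM v$sysmetric
     WHERE metric_name = 'Average Active Sessions';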

Now let's begin the analysis. Suppose that I, the DBA, receive a report that a 10g Oracle database was slow for just a few minutes but has now returned to normal, and I need to find the cause. Of course, I could produce a report with Statspack or AWR, but this time I choose to log on to OEM. After logging in, I select the Performance tab, and the following figure is displayed:

We can see that around one point in time there is an abnormal spike compared with other periods: the average number of active sessions reaches about 20, while normally it stays below 5. Meanwhile, the host analysis (shown in the figure above) and the DB I/O and transaction figures show no significant change, which suggests that the sessions themselves have suddenly become busy. Click Top Activity (at the bottom of the page) to go to the Top Activity page.

On this page, I can drag the shaded box to the position I want to analyze, i.e. the time of interest, and the page shows in real time all the top SQL statements and top sessions within that time range. I can click any SQL ID or session ID to see more detail. Here I click the number-one SQL ID to see which SQL is consuming the most resources:

There are also several tabs here; different tabs show different information about the SQL, such as statistics, activity information, and the execution plan. On the Activity tab I can see the specific activity of this SQL. Through the Plan tab I can see the actual execution plan used at run time and judge whether it is wrong. You can even run the SQL Tuning Advisor to get concrete optimization suggestions, such as adding a suitable index.
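
What OEM does behind that button can also be done by hand with the DBMS_SQLTUNE package; the sql_id and task name below are only placeholders for illustration:

    -- Create and run a tuning task for a suspect statement (sql_id is hypothetical)
    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                  sql_id    => 'abcd1234efgh5',
                  task_name => 'tune_top_sql_demo');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'tune_top_sql_demo');
    END;
    /

    -- Display the advisor's recommendations
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_top_sql_demo') FROM dual;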

The small case above shows, through Oracle's web-based graphical management tool OEM, the powerful optimization features of the post-9i era of Oracle. In fact, as mentioned above, OEM can do far more than this. It is fair to say that the post-9i OEM is good news for many DBAs.

However, becoming truly proficient at optimization is not something any tool, however complete, can solve for you. In the end, a tool is only a tool; at best it makes management easier, saves cost, and gives us a little more time to rest. A DBA's skills cannot be expected to improve through tools alone; DBAs still have to build them up themselves.

Finally, powerful monitoring only lets you discover and solve problems in a timely manner. Why do problems occur, and how can they be nipped in the bud? That is what every DBA needs to think about:

How should we reasonably and effectively plan and design our databases, so as to avoid unnecessary design mistakes and unexpected bugs?

What processes should we use to review developers' SQL and DBAs' day-to-day operations, so as to avoid all kinds of operational mistakes?

What documents and standards should we use to regulate day-to-day DDL and DML operations, so that every operation has rules to follow?
