Causes of and Improvements for Excessive Redo Generation

What follows is my analysis of why a database could not be logged on to.
The root cause was that a large volume of redo was generated in a short period, leading to frequent log switches; the archived logs then consumed a large amount of space, and logons started to fail. At one level we can work on retaining recent archives for as long as possible, but we can also change perspective and look at which operations generate all that redo, and whether redo generation itself can be reduced.
On the face of it that idea sounds a bit naive: redo must record information as completely as possible, since it is the basis of recovery. So rather than jumping to conclusions, let's analyze first and decide afterwards.
Checking the database's redo switch frequency showed that switches had been extremely frequent over the last few days, which is a very heavy load for an OLTP system; I had previously seen log switching this frequent only in certain data migration scenarios.
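To quantify the switch rate, the switch history in v$log_history can be grouped by hour. A minimal sketch (the three-day window is just an example):

-- Count redo log switches per hour over the last three days.
select to_char(first_time, 'YYYY-MM-DD HH24') as hour,
       count(*)                               as switches
  from v$log_history
 where first_time > sysdate - 3
 group by to_char(first_time, 'YYYY-MM-DD HH24')
 order by 1;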

The strange thing, however, is that the database time (DB time) does not look very high, which seems contradictory: judging by DB time alone, the rate of data change in the database should not be high.
BEGIN_SNAP  END_SNAP  SNAPDATE     DURATION_MINS  DBTIME
----------  --------  -----------  -------------  ------
     82560     82561  05 Sep 2015             30      26
     82561     82562  05 Sep 2015             30      26
     82562     82563  05 Sep 2015             29      29
     82563     82564  05 Sep 2015             30      27
     82564     82565  05 Sep 2015             30      23
     82565     82566  05 Sep 2015             30      23
     82566     82567  05 Sep 2015             30      20
     82567     82568  05 Sep 2015                     22
     82568     82569  05 Sep 2015             30      20
     82569     82570  05 Sep 2015             30      25
     82570     82571  05 Sep 2015             30      23
     82571     82572  05 Sep 2015             30      27
     82572     82573  05 Sep 2015             30      40
     82573     82574  05 Sep 2015             30      26
     82574     82575  05 Sep 2015             30      28
     82575     82576  05 Sep 2015             30      34
     82576     82577  05 Sep 2015             29      40
     82577     82578  05 Sep 2015             30      37
     82578     82579  05 Sep 2015             30      40
     82579     82580  05 Sep 2015             30      38
     82580     82581  05 Sep 2015             30      41
     82581     82582  05 Sep 2015             30      40
     82582     82583  05 Sep 2015             30      37
     82583     82584  05 Sep 2015             30      39
     82584     82585  05 Sep 2015             30      41
     82585     82586  05 Sep 2015             30      34
     82586     82587  05 Sep 2015             30      53
     82587     82588  05 Sep 2015             30      82
     82588     82589  05 Sep 2015             30      74
     82589     82590  05 Sep 2015             30      45
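A listing like the one above can be derived from AWR history. Here is a sketch, assuming a single-instance database; DB time in dba_hist_sys_time_model is cumulative and stored in microseconds, so the per-interval value is the delta between consecutive snapshots:

select s.snap_id - 1                                as begin_snap,
       s.snap_id                                    as end_snap,
       to_char(s.end_interval_time, 'DD Mon YYYY')  as snapdate,
       round((cast(s.end_interval_time as date)
            - cast(s.begin_interval_time as date)) * 24 * 60) as duration_mins,
       -- delta of cumulative DB time, microseconds -> minutes
       round((t.value - lag(t.value) over (order by s.snap_id)) / 60000000) as dbtime
  from dba_hist_snapshot s
  join dba_hist_sys_time_model t
    on t.snap_id = s.snap_id
   and t.dbid = s.dbid
   and t.instance_number = s.instance_number
 where t.stat_name = 'DB time'
 order by s.snap_id;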

At this point we capture an AWR report.
In the report, the bottleneck is clearly on DB CPU and I/O.

Top 5 Timed Foreground Events

Event                      Waits  Time (s)  Avg wait (ms)  % DB time  Wait Class
DB CPU                               2,184                     68.89
db file parallel read      6,096       413             68      13.02  User I/O
log file sync             65,199       363              6      11.47  Commit
db file sequential read   46,038       172              4       5.43  User I/O
direct path read         415,685        46              0       1.47  User I/O

Looking at the time model, DB CPU and SQL execution account for the bulk of the database time.
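For reference, the time-model breakdown can be pulled directly from the instance; a minimal sketch:

-- Top time-model components, converted from microseconds to seconds.
select stat_name, round(value / 1000000) as seconds
  from v$sys_time_model
 where stat_name in ('DB time', 'DB CPU', 'sql execute elapsed time')
 order by value desc;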
Seeing this I was still somewhat puzzled; things did not quite add up. The natural place to focus is the SQL, yet the top 1 SQL statement itself looked a bit odd.

Elapsed Time (s)  Executions  Elapsed Time per Exec (s)  % Total  % CPU  % IO  SQL Id  SQL Module        SQL Text
          931.73      14,409                       0.06    29.39  99.77  0.00          JDBC Thin Client  update sync_id set ma...

This statement executes very frequently and is very simple, yet it consumes a great deal of CPU, which made me suspect a full table scan.
The statement is as follows:
update sync_id set max_id = :1 where sync_id_type = :2
A quick look at the execution plan confirmed that a full table scan is indeed used. The instinctive reaction to such a finding is to make the statement use an index, creating one if none exists; a sketch of that follows.
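This is roughly how the plan can be confirmed and an index added; the &sql_id placeholder and the index name ix_sync_id_type are hypothetical:

-- Find the cursor for the statement in the shared pool.
select sql_id, executions, buffer_gets
  from v$sql
 where sql_text like 'update sync_id set max_id%';

-- Display the cached execution plan (substitute the sql_id found above).
select * from table(dbms_xplan.display_cursor('&sql_id', null));

-- If sync_id_type is unindexed, an index would remove the full scan
-- (hypothetical index name).
create index ix_sync_id_type on sync_id (sync_id_type);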
However, the SQL ordered by Executions section shows the problem is not that simple. The second statement there is exactly the top 1 SQL just mentioned, and its metrics are quite unusual: roughly 5,436 rows processed per execution, over 14,000 executions, and nearly 80 million rows processed in total.

Executions  Rows Processed  Rows per Exec  Elapsed Time (s)  % CPU  % IO  SQL Id  SQL Module        SQL Text
    14,684          14,684           1.00              3.39   94.7    .7          JDBC Thin Client  update sus_log set failed_c...
    14,409      78,329,332       5,436.14            931.73   99.8     0          JDBC Thin Client  update sync_id set ma...
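Those numbers already point at the real source of the redo: if a logically single-row update touches more than 5,000 rows per execution, then sync_id_type matches thousands of rows, and every execution writes redo for all of them. A quick way to verify this (table and column names taken from the statement above):

-- How many rows share each sync_id_type value?
select sync_id_type, count(*) as rows_per_type
  from sync_id
 group by sync_id_type
 order by rows_per_type desc;

If the table is meant to hold one row per sync_id_type, then the duplicate rows themselves are the problem, and cleaning them up (or tightening the update's predicate) would cut both the CPU time and the redo volume far more than an index alone.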
