1. To understand commit, first understand when LGWR writes redo from the redo log buffer to the redo log files:
- When the redo log buffer is one-third full
- At commit
- When a log switch occurs
- When about 1 MB of redo has accumulated in the buffer
- Before DBWn writes dirty blocks to disk, the corresponding redo must be written first (write-ahead logging)
- Every 3 seconds
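You can observe this background write activity through the system statistics. The following is a sketch using standard `V$SYSSTAT` statistic names; the exact set of statistics available varies by Oracle version:

```sql
-- Redo generated into the buffer vs. redo actually flushed by LGWR.
-- If LGWR writes continuously, 'redo writes' climbs steadily between commits.
select name, value
  from v$sysstat
 where name in ('redo size', 'redo writes', 'redo blocks written');
```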
In other words, redo is written to the log files more or less continuously. Therefore, no matter how many redo records a transaction generates, the work done at commit is roughly the same, because most of the redo has already been written to the log files.
2. What happens during a commit?
- A commit SCN is generated and written to the log file
- LGWR writes the remaining redo to the log file
- All locks are released
- Block cleanout: the transaction information stored in the block headers is cleared
So the work done at each commit is actually quite small. The biggest task is writing the redo to the log files, but most of that redo has already been written. This does not mean, however, that you should commit after every block modification: frequent commits increase contention for the log files (a shared structure) and for the latches that protect access to shared structures. Commit frequency should instead be decided by the size of the logical transaction.
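To measure how much redo your own session has generated (the number AUTOTRACE reports below), you can join `V$MYSTAT` to `V$STATNAME`. This is a sketch; both are standard dynamic performance views, but querying them requires the appropriate SELECT privilege:

```sql
-- Redo generated by the current session so far.
-- Run before and after a statement and subtract to get its redo cost.
select sn.name, ms.value
  from v$mystat  ms
  join v$statname sn on sn.statistic# = ms.statistic#
 where sn.name = 'redo size';
```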
3. The following example generates redo of different sizes to show that the time spent in commit stays almost the same.
S1: Create a large table.
- Scott@WISON> create table t as select * from all_objects;
- Table created.
- Scott@WISON>
S2: Create a table t_10, insert 10 rows into it, and enable autotrace.
- Scott@WISON> create table t_10 as select * from t where 1 = 0;
- Table created.
- Scott@WISON> set autotrace on statistics;
- Scott@WISON> set timing on
- Scott@WISON> insert into t_10 select * from t where rownum <= 10;
- 10 rows created.
- Elapsed: 00:00:00.06 (insert time)
- Statistics
- ----------------------------------------------------------
- 451 recursive calls
- 56 db block gets
- 337 consistent gets
- 1 physical reads
- 7284 redo size
- 919 bytes sent via SQL*Net to client
- 1018 bytes received via SQL*Net from client
- 4 SQL*Net round trips to/from client
- 2 sorts (memory)
- 0 sorts (disk)
- 10 rows processed
- Scott@WISON> commit;
- Commit complete.
- Elapsed: 00:00:00.00
S3: Insert 1000 rows.
- Scott@WISON> insert into t_10 select * from t where rownum <= 1000;
- 1000 rows created.
- Elapsed: 00:00:00.02
- Statistics
- ----------------------------------------------------------
- 39 recursive calls
- 143 db block gets
- 180 consistent gets
- 7 physical reads
- 104880 redo size
- 919 bytes sent via SQL*Net to client
- 1020 bytes received via SQL*Net from client
- 4 SQL*Net round trips to/from client
- 1 sorts (memory)
- 0 sorts (disk)
- 1000 rows processed
- Scott@WISON> commit;
- Commit complete.
- Elapsed: 00:00:00.12
- Scott@WISON>
S4: Insert 10000 rows.
- Scott@WISON> insert into t_10 select * from t where rownum <= 10000;
- 10000 rows created.
- Elapsed: 00:00:00.10
- Statistics
- ----------------------------------------------------------
- 487 recursive calls
- 1808 db block gets
- 596 consistent gets
- 121 physical reads
- 1102708 redo size
- 919 bytes sent via SQL*Net to client
- 1021 bytes received via SQL*Net from client
- 4 SQL*Net round trips to/from client
- 1 sorts (memory)
- 0 sorts (disk)
- 10000 rows processed
- Scott@WISON> commit;
- Commit complete.
- Elapsed: 00:00:00.01
- Scott@WISON>
Whether we insert 10 rows (about 7 KB of redo), 1000 rows (about 100 KB), or 10000 rows (over 1 MB), the commit itself completes almost instantly in every case.
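The advice about sizing commits to the logical transaction can be illustrated with a sketch against the same t and t_10 tables. The row-by-row commit forces a `log file sync` wait on every iteration, whereas the set-based version pays that cost once (this is illustrative PL/SQL, not taken from the original test run):

```sql
-- Anti-pattern: committing inside the loop triggers an LGWR flush
-- and a 'log file sync' wait for every single row.
begin
  for r in (select * from t where rownum <= 1000) loop
    insert into t_10 values r;
    commit;  -- one log file sync per row
  end loop;
end;
/

-- Better: one commit per logical transaction.
insert into t_10 select * from t where rownum <= 1000;
commit;
```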