Oracle Redo log parallel mechanism (1)

Oracle database logs are called redo logs. All data changes are recorded in the redo log, which can be used to recover a damaged database, making it an essential piece of the recovery mechanism. A redo entry contains all the information about the database changes made by the corresponding operation, and all redo entries are eventually written to the redo log files.

The redo log buffer is a region of memory allocated in the SGA to avoid the performance bottleneck that direct redo file I/O would cause. A redo entry is first generated in the server process's private memory (PGA) and then copied into the log buffer by that process; when certain conditions are met, the LGWR process writes it to the redo log file. Because the log buffer is shared memory, it is protected by the redo allocation latch to avoid conflicts: each server process must acquire this latch before it can allocate space in the redo buffer. Therefore, in high-concurrency OLTP systems with frequent data changes, waits on the redo allocation latch are commonly observed. The whole redo buffer write process is as follows:

Generate the redo entry in the PGA -> the server process acquires a redo copy latch (there are multiple: CPU_COUNT * 2) -> the server process acquires the redo allocation latch (there is only one) -> allocate space in the log buffer -> release the redo allocation latch -> copy the redo entry into the log buffer -> release the redo copy latch.
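To see how much contention these latches cause in a given system, the latch statistics can be queried. A minimal sketch (latch names as in the flow above, columns from the standard v$latch view):

-- Check gets/misses/sleeps for the latches involved in redo buffer allocation
HELLODBA.COM>select name, gets, misses, sleeps, immediate_gets, immediate_misses
  2  from v$latch
  3  where name in ('redo allocation', 'redo copy');

A high ratio of misses or sleeps to gets on the redo allocation latch is the symptom that the strand mechanisms described below are meant to relieve.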

Shared strand

To reduce redo allocation latch waits, Oracle 9.2 introduced a parallel mechanism for the log buffer. The basic idea is to split the log buffer into several smaller buffers, called strands (to distinguish them from the private strands that appear later, these are called shared strands). Each shared strand is protected by its own redo allocation latch. With multiple shared strands, redo buffer allocation, which used to be serialized, becomes a parallel process, thereby reducing redo allocation latch waits.

The initial number of shared strands is controlled by the parameter log_parallelism. In 10g this becomes the hidden parameter _log_parallelism, and the parameter _log_parallelism_max is added to control the maximum number of shared strands; _log_parallelism_dynamic controls whether the number of shared strands is allowed to vary dynamically between _log_parallelism and _log_parallelism_max.

 
 
HELLODBA.COM>select  nam.ksppinm, val.KSPPSTVL, nam.ksppdesc
  2  from    sys.x$ksppi nam,
  3          sys.x$ksppsv val
  4  where nam.indx = val.indx
  5  --AND   nam.ksppinm LIKE '_%'
  6  AND   upper(nam.ksppinm) LIKE '%LOG_PARALLE%';

KSPPINM                    KSPPSTVL   KSPPDESC
-------------------------- ---------- ------------------------------------------
_log_parallelism           1          Number of log buffer strands
_log_parallelism_max       2          Maximum number of log buffer strands
_log_parallelism_dynamic   TRUE       Enable dynamic strands

The size of each shared strand = log_buffer / (number of shared strands). Strand information can be found in the fixed table x$kcrfstrand (which, from 10g onward, contains both the shared strands and the private strands introduced later).

 
 
HELLODBA.COM>select indx,strand_size_kcrfa from x$kcrfstrand where last_buf_kcrfa != '00';

      INDX STRAND_SIZE_KCRFA
---------- -----------------
         0           3514368
         1           3514368

HELLODBA.COM>show parameter log_buffer

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_buffer                           integer     7028736
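This output matches the formula above: log_buffer is 7028736 bytes and there are two shared strands, so each strand is 7028736 / 2 = 3514368 bytes, which is exactly the STRAND_SIZE_KCRFA reported.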

As for the number of shared strands, the default maximum is 2 for systems with up to 16 CPUs. When redo allocation latch waits appear in the system, you can consider adding one strand for every additional 16 CPUs, but the maximum cannot exceed 8, and _log_parallelism_max cannot be greater than cpu_count.

Note: In 11g the parameter _log_parallelism is removed, and the number of shared strands is controlled by _log_parallelism_max, _log_parallelism_dynamic, and cpu_count.
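As a quick sanity check, the number of active shared strands reported by x$kcrfstrand can be compared with cpu_count. A minimal sketch, reusing the last_buf_kcrfa filter shown above to select shared strands:

HELLODBA.COM>show parameter cpu_count

-- Count the shared strands currently present in the log buffer
HELLODBA.COM>select count(*) shared_strands
  2  from x$kcrfstrand
  3  where last_buf_kcrfa != '00';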

Private strand

To further reduce redo buffer contention, 10g introduced a new strand mechanism: the private strand. Private strands are not carved out of the log buffer; instead, they are allocated from the shared pool.

 
 
HELLODBA.COM>select * from V$sgastat where name like '%strand%';

POOL         NAME                            BYTES
------------ -------------------------- ----------
shared pool  private strands               2684928

HELLODBA.COM>select indx,strand_size_kcrfa from x$kcrfstrand where last_buf_kcrfa = '00';

      INDX STRAND_SIZE_KCRFA
---------- -----------------
         2             66560
         3             66560
         4             66560
         5             66560
         6             66560
         7             66560
         8             66560
...

The introduction of the private strand significantly changed Oracle's redo/undo mechanism. Each private strand is protected by its own redo allocation latch, and each private strand serves only one active transaction, as a "private" strand. A transaction that obtains a private strand no longer generates its redo in the PGA but directly in the private strand; when the private strand is flushed or the transaction commits, the private strand is written to the log file in one batch. If a new transaction cannot acquire the redo allocation latch of any private strand, it falls back to the old redo buffer mechanism: it requests space in a shared strand and writes its redo entries there. Whether a transaction uses a private strand can be identified by the 13th bit of the ktcxbflg column of x$ktcxb:

 
 
HELLODBA.COM>select decode(bitand(ktcxbflg, 4096),0,1,0) used_private_strand, count(*)
  2  from x$ktcxb
  3  where bitand(ksspaflg, 1) != 0
  4  and bitand(ktcxbflg, 2) != 0
  5  group by bitand(ktcxbflg, 4096);

USED_PRIVATE_STRAND   COUNT(*)
------------------- ----------
                  1         10
                  0          1

A transaction that uses a private strand does not need to acquire a redo copy latch or the redo allocation latch of a shared strand; its redo is written to disk in batches only at flush or commit time. This reduces the number of acquisitions and releases of the redo copy latch and redo allocation latch, as well as the latch waits, thereby lowering CPU load. The process is as follows:

Transaction starts -> acquire the redo allocation latch of a private strand (if that fails, acquire the redo allocation latch of a shared strand) -> generate redo entries in the private strand -> flush/commit -> acquire a redo copy latch -> the server process writes the redo entries to the log file in a batch -> release the redo copy latch -> release the private strand's redo allocation latch.
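Private strands work hand in hand with in-memory undo (IMU). As a rough way to see how often transactions take this path versus falling back to the shared strands, the IMU-related statistics in v$sysstat can be checked; a minimal sketch, noting that the exact statistic names such as 'IMU commits' and 'IMU Flushes' are assumptions based on common 10g/11g releases and may vary slightly:

-- Assumed statistic names; verify against your release's v$statname
HELLODBA.COM>select name, value
  2  from v$sysstat
  3  where name in ('IMU commits', 'IMU Flushes', 'IMU undo allocation size');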

