How to handle "Private strand flush not complete" messages in the alert log

Source: Internet
Author: User

A customer database in Nanjing reported the following messages in its alert log:

Fri Oct 17 19:59:51 2014
Thread 1 cannot allocate new log, sequence 4722
Private strand flush not complete
  Current log #1 seq #4721 mem #0: /oradata/sgomp5/redo01.log
Thread 1 advanced to log sequence 4722 (LGWR switch)
  Current log #2 seq #4722 mem #0: /oradata/sgomp5/redo02.log
I found an article on this issue in the MOS community:
Historically, every user session wrote its changes to the public redo log buffer, and changes in the redo log buffer were flushed to the redo logs on disk by LGWR. As the number of users increased, so did the contention for the redo allocation and redo copy latches on the public redo buffer. Starting with 10g, Oracle therefore introduced the concepts of private redo (x$kcrfstrand) and in-memory undo (x$ktifp). Each session writes to its own private redo area; a (small) batch of changes is then written to the public redo buffer, and finally from the public redo log buffer to the redo log files on disk. This mechanism reduces the gets/sleeps on the redo copy and redo allocation latches on the public redo buffer and hence makes the architecture more scalable.
It is also worth noting that Oracle falls back to the old redo mechanism if a transaction is too big (with lots of changes) and the changes made by that transaction cannot fit into the private redo buffers.
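To see whether the public redo latches are actually under contention on your own system, the documented statistics in V$LATCH are a reasonable first check. This is a diagnostic sketch; what counts as "high" misses/sleeps relative to gets depends on the workload:

```sql
-- Contention on these latches is what private strands were added to relieve.
-- A high ratio of misses/sleeps to gets suggests redo latch contention.
SELECT name, gets, misses, sleeps
FROM   v$latch
WHERE  name IN ('redo allocation', 'redo copy', 'redo writing');
```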
When the database switches logs, all private strands must be flushed to the current log before the switch can continue. The message indicates that, at the time of the attempted switch, not all redo information had yet been written to the log. It is somewhat similar to "checkpoint not complete", except that it only involves the redo being written into the log: the log cannot be switched until all of that redo has been written.

Private strands were introduced in 10gR2 to reduce pressure on the redo allocation latch. They are a mechanism that allows a process to write redo into the redo log buffer more efficiently by using multiple allocation latches, and they are related to the LOG_PARALLELISM parameter in 9i. The strand concept is designed to keep the instance's redo generation rate optimal, and to allow the number of strands to be adjusted dynamically to compensate when redo contention occurs. The number of strands initially allocated depends on the number of CPUs; there are at least two strands, one of which is used for active redo generation.
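Because each strand is protected by its own child of the redo allocation latch, counting those children gives a rough view of how many strands the instance has allocated. This is a sketch; the exact mapping of child latches to public versus private strands varies by version:

```sql
-- Roughly one 'redo allocation' child latch per strand.
SELECT COUNT(*) AS redo_allocation_children
FROM   v$latch_children
WHERE  name = 'redo allocation';
```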


For large OLTP systems the redo volume is very large, so additional strands are activated when the current processes encounter redo contention. A shared strand always coexists with multiple private strands. Oracle 10g's redo (and undo) mechanism changed significantly to reduce contention: redo is no longer recorded in real time, but is first recorded in a private area and flushed to the redo log buffer on commit. With this new mechanism, once a user process obtains a private strand, redo is no longer buffered in the PGA, so the redo copy latch is no longer required for those changes.


If a new transaction cannot acquire the redo allocation latch of a private strand, it follows the old redo buffer mechanism and writes to a shared strand. Under this mechanism, LGWR must write out the contents of both shared and private strands when flushing redo to the logfile. When a redo flush occurs, the redo allocation latches of all public strands must be obtained, the redo copy latches of all public strands must be checked, and all private strands containing active transactions must be held.
In practice, you can ignore this message unless there is a significant time gap between the "cannot allocate new log" line and the "advanced to log sequence" line.
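Besides comparing timestamps in the alert log, it helps to know how often the instance is switching logs at all; if switches are frequent, the message is worth acting on. A diagnostic sketch using the documented V$LOG_HISTORY view:

```sql
-- Log switches per hour over the last 24 hours; many switches close
-- together suggest the message is more than a transient hiccup.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;
```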

If you want the alert log to avoid "Private strand flush not complete" events, you can increase the value of DB_WRITER_PROCESSES: DBWn posts LGWR to write redo to the logfile, and with multiple DBWn processes working together, the redo in the log buffer can be written to the redo logfile faster.


Run the following command to modify the parameter:
SQL> alter system set db_writer_processes = 4 scope = spfile;
-- This is a static parameter; it takes effect after the database is restarted.

Note that the number of DBWR processes should not exceed the number of logical CPUs. In addition, when Oracle finds that a single DB_WRITER_PROCESS cannot keep up with the work, it will increase the number automatically, provided a maximum allowed value has been set in the initialization parameter.
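Before changing anything, it is worth checking the current writer-related settings against the CPU count. A sketch using the documented V$PARAMETER view:

```sql
-- Current DBWR configuration, with cpu_count for comparison.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('db_writer_processes', 'dbwr_io_slaves', 'cpu_count');
```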
Descriptions of DB_WRITER_PROCESSES and DBWR_IO_SLAVES parameters:
DB_WRITER_PROCESSES replaces the Oracle7 parameter DB_WRITERS and specifies the initial number of database writer processes for an instance. If you use DBWR_IO_SLAVES, only one database writer process will be used, regardless of the setting for DB_WRITER_PROCESSES.
In other words, DB_WRITER_PROCESSES (DB_WRITERS in Oracle7) specifies the number of DBWR processes for the database instance. When DBWR_IO_SLAVES is also set to a non-zero value (its default is 0), only one DBWn process is used and the DB_WRITER_PROCESSES setting is ignored.
DBWR_IO_SLAVES: If it is not practical to use multiple DBWR processes, then Oracle provides a facility whereby the I/O load can be distributed over multiple slave processes. The DBWR process is the only process that scans the buffer cache LRU list for blocks to be written out; however, the I/O for those blocks is performed by the I/O slaves. The number of I/O slaves is determined by the parameter DBWR_IO_SLAVES.
That is, when a single DBWR process is used, Oracle provides multiple I/O slave processes to simulate asynchronous I/O and complete the work that would otherwise be done by DBWR (writing the dirty blocks on the LRU list to the data files); the number of slaves is specified by DBWR_IO_SLAVES.
DBWR_IO_SLAVES is intended for scenarios where you cannot use multiple DB_WRITER_PROCESSES (for example, where you have a single CPU). I/O slaves are also useful when asynchronous I/O is not available, because multiple I/O slaves simulate non-blocking, asynchronous requests by freeing DBWR to continue identifying blocks in the cache to be written. Asynchronous I/O at the operating system level, if you have it, is generally preferred.
The DBWR_IO_SLAVES parameter is usually used in single-CPU scenarios, where setting multiple DBWR processes does not help. Whether or not the operating system supports asynchronous I/O, multiple I/O slaves are effective and can share DBWR's workload; if OS-level asynchronous I/O is available, however, using it is generally preferred.
DBWR I/O slaves are allocated immediately following database open, when the first I/O request is made. The DBWR process continues to perform all of the DBWR-related work apart from performing I/O; the I/O slaves simply perform the I/O on behalf of DBWR, and the writing of each batch is parallelized between the I/O slaves.
In other words, the I/O slaves are allocated at the first I/O request after the database opens; DBWR continues its own tasks while the I/O work is handed off to the slaves, which perform it in parallel.
Choosing between multiple DBWR processes and I/O slaves: configuring multiple DBWR processes benefits performance when a single DBWR process is unable to keep up with the required workload. However, before configuring multiple DBWR processes, check whether asynchronous I/O is available and configured on the system. If the system supports asynchronous I/O but it is not currently used, then enable asynchronous I/O to see if this alleviates the problem. If the system does not support asynchronous I/O, or if asynchronous I/O is already configured and there is still a DBWR bottleneck, then configure multiple DBWR processes.
In short, multiple DBWR processes are effective when a single DBWR process cannot handle a large write workload; but first check whether the OS supports asynchronous I/O, enable it if available, and only configure multiple DBWR processes if the bottleneck remains or asynchronous I/O is not supported.
Using multiple DBWRs parallelizes the gathering and writing of buffers; therefore, multiple DBWn processes should deliver more throughput than one DBWR process with the same number of I/O slaves. For this reason, the use of I/O slaves has been deprecated in favor of multiple DBWR processes; I/O slaves should only be used if multiple DBWR processes cannot be configured.
Enabling multiple DBWR processes means more dirty buffers can be gathered and written to the data files in parallel, and the throughput of multiple DBWR processes is higher than that of one DBWR plus a comparable number of I/O slaves. Therefore, when multiple DBWR processes are enabled, DBWR_IO_SLAVES should not be configured as well; if it was previously non-zero, set it back to 0.
Summary:

DBWR_IO_SLAVES is mainly used to simulate an asynchronous environment; it can improve I/O write speed on operating systems that do not support asynchronous I/O.
Multiple DBWR processes can gather dirty blocks from the buffer cache in parallel and write them to disk in parallel, whereas with a single DBWR plus multiple I/O slaves only one DBWR scans the buffer cache, and only the writes are parallelized across the slaves. If the system supports asynchronous I/O (disk_asynch_io = true), there is usually no need to set multiple DBWRs or I/O slaves.

If you have multiple CPUs, we recommend using DB_WRITER_PROCESSES; in that case you do not need to simulate asynchronous I/O, but note that the number of processes should not exceed the number of CPUs. With only one CPU, however, we recommend using DBWR_IO_SLAVES to simulate asynchronous I/O and improve database performance.
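Putting the summary into practice, the two mutually exclusive configurations might look like this. This is a sketch; both parameters are static, so the changes take effect only after a restart, and the specific values shown (4 writers, 4 slaves) are illustrative:

```sql
-- Multi-CPU host with working asynchronous I/O: multiple writers, no slaves.
ALTER SYSTEM SET db_writer_processes = 4 SCOPE = SPFILE;
ALTER SYSTEM SET dbwr_io_slaves      = 0 SCOPE = SPFILE;

-- Single-CPU host (or no OS asynchronous I/O): one writer plus I/O slaves.
-- ALTER SYSTEM SET db_writer_processes = 1 SCOPE = SPFILE;
-- ALTER SYSTEM SET dbwr_io_slaves      = 4 SCOPE = SPFILE;
```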




