I am just beginning to learn Oracle and am recording my own study notes here; if there are errors, I hope you will point them out.
First, here is a relatively important figure of the Oracle architecture:

[Figure: Oracle architecture diagram]

When we enter a simple statement, for example the first time we run update table_name t set t.a=30;, the SQL is first handled in the shared pool: semantic parsing is done against the row cache in the dictionary cache (data dictionary buffer), and after parsing a reasonably good execution plan is generated and stored in the library cache. The operation is also recorded in the redo log buffer, and the updated data becomes a dirty block in the database buffer cache. When we issue commit, LGWR is triggered and the redo log buffer is written to the online redo log files. However, a commit does not trigger DBWn, so the corresponding modified data is not written to the data files at that moment. If the database goes down at this point, the changes held only in memory are lost; SMON then replays the operations that were recorded in the redo log files, and together with the data files and control files the contents of the database buffer cache are restored. Below is a slightly more detailed introduction to the functions of several background processes and their corresponding trigger mechanisms.
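As a small sketch of this flow (table_name and column a are just placeholders), the statement and a look at the memory areas involved might be, from SQL*Plus; V$SGAINFO is one place to see the shared pool, buffer cache, and redo log buffer sizes (the exact entry names may vary between versions):

SQL> update table_name t set t.a = 30;   -- parsed in the shared pool, change buffered in memory
SQL> commit;                             -- triggers LGWR to flush the redo log buffer
SQL> select name, bytes from v$sgainfo where name in ('Shared Pool Size', 'Buffer Cache Size', 'Redo Buffers');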
LGWR log writer process (Log Writer)
The log writer process is responsible for writing the log entries in the redo log buffer to the online redo log files on disk.
When a DML or DDL statement is run, the server process first records the changes of the transaction in the redo log buffer before writing to the data cache. The redo log buffer is then written to the online redo log files to avoid accidental data loss: if the operating system crashes, the contents of the redo log buffer in memory are lost, but the online redo log files on disk survive. This task is performed by LGWR.
The redo log buffer is a circular structure; as LGWR writes the redo records from the redo log buffer to the online redo log files, the corresponding buffer area is emptied, ensuring that Oracle always has a free redo log buffer to write into.
LGWR begins to work when any of the following occurs:
- Before the DBWn process writes dirty buffers to the data files (the write-ahead protocol).
- The redo records reach one-third of the buffer.
- The log buffer holds more than 1 MB of redo records.
- Every 3 seconds (the redo log buffer is used in a circular fashion, so this frees enough space for new records).
- A transaction commits (a commit is executed); committing a transaction means the changes must be recorded in the log files rather than left only in memory, where they could be lost.
According to the official documentation, LGWR starts working when:
- A user commits a transaction (see "Committing Transactions" on page 10-10).
- An online redo log switch occurs.
- Three seconds have passed since LGWR last wrote.
- The redo log buffer is one-third full or contains 1 MB of buffered data.
- DBWn must write modified buffers to disk.
Oracle always records data changes in the redo log buffer before modifying the data cache. Similarly, before the background process DBWn writes the dirty buffers to the data files, the redo log buffer is first written to the redo log by the background process LGWR. Compared with the data cache, the redo log buffer is much smaller, but its write frequency is much higher, and Oracle must ensure that the redo log buffer always has enough space to accommodate new transactions; therefore LGWR works automatically every 3 seconds or whenever the redo log buffer is one-third full.
In addition, Oracle uses a fast commit mechanism: when a commit is issued, the "dirty buffer" data is not written to the data files; instead, the contents of the log buffer are written to the redo log files to guarantee database integrity. Even if the system encounters an unexpected situation (such as a power failure or a system crash), because committed transactions are already recorded in the online redo log files stored on disk, the system automatically performs instance recovery when the database is restarted and writes the transactions' modified data to the data files, avoiding data loss.
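A rough way to observe these mechanisms from SQL*Plus (statistic names may differ slightly between versions) could be:

SQL> show parameter log_buffer            -- size of the redo log buffer
SQL> select name, value from v$sysstat where name in ('redo writes', 'user commits');
SQL> alter system switch logfile;         -- forces a log switch, so LGWR writes immediately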
DBWn database writer process (Database Writer)
The database writer process is responsible for writing the contents of the database buffer cache (dirty buffers) to the data files.
Although one database writer process (DBW0) is adequate for most systems, the database administrator can configure additional processes (DBW0 through DBW9, up to ten processes) to improve write performance by setting the initialization parameter DB_WRITER_PROCESSES. This helps if your system modifies data heavily; however, these additional DBWn processes are of little use on a single-processor system.
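A rough sketch of checking and changing this parameter (DB_WRITER_PROCESSES is a static parameter, so the change only takes effect after the instance is restarted):

SQL> show parameter db_writer_processes
SQL> alter system set db_writer_processes = 4 scope = spfile;  -- takes effect after restart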
When a block in the database buffer cache is modified, it is marked as a dirty buffer and added, in SCN (system change number, which can be thought of as a kind of "timestamp") order, to the LRUW (LRU writer) list. This order is consistent with the order of the entries in the redo log buffer.
The DBWn process begins to work when any of the following conditions occurs:
- The system issues a checkpoint (the data must be synchronized; see the checkpoint process CKPT below).
- The number of dirty buffers reaches the specified threshold.
- A server process cannot find a free buffer after scanning a certain number of blocks.
- A write timeout elapses.
- A tablespace is taken offline or made read-only.
- A DROP TABLE or TRUNCATE TABLE operation is performed.
- An ALTER TABLESPACE ... BEGIN BACKUP, ALTER SYSTEM FLUSH BUFFER_CACHE, or ALTER SYSTEM CHECKPOINT command is executed (the data must be synchronized, on the same principle as a checkpoint).
When a client executes a SELECT, INSERT, UPDATE, or DELETE statement, it needs to access the database buffer cache. If this is the first access, the data must be read from the data files into the cache, so Oracle must ensure that the cache always has enough free buffers to hold the new data. When the DBWn process writes dirty buffers to the data files, Oracle marks those buffers as free again; therefore, to guarantee enough free buffers for new data blocks, the DBWn process must do its work.
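As a rough illustration, the number of dirty buffers can be counted through V$BH, and the statements from the list above look like this (the tablespace name users is only an example):

SQL> select count(*) from v$bh where dirty = 'Y';      -- dirty buffers currently in the cache
SQL> alter tablespace users begin backup;              -- example tablespace name
SQL> alter system flush buffer_cache;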
CKPT checkpoint process (Checkpoint)
The role of the CKPT checkpoint process is to perform a checkpoint, synchronizing all of the database's data files, control files, and redo log files. When a checkpoint is performed, the system urges DBWn to write the data in the data buffer to the data files, and at the same time the data files and control files are updated to record the current structure and state of the database. After a checkpoint has been executed, the database is in a consistent state; after a database crash, the database can be restored to the previous checkpoint.
When a statement that changes data is executed, the Oracle database generates a sequentially increasing SCN (system change number) value for the modification, and the SCN value is recorded in the redo log buffer along with the changes of the transaction. The data files, the control file headers, and the redo log files all record this value. By comparing the SCN values of the various files, Oracle determines whether a file is damaged or the system terminated abnormally, and ultimately decides whether an instance recovery or a media recovery is required. At the moment a checkpoint is issued, the SCN values in the data files, control files, and redo logs are exactly the same.
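A simple way to look at these SCN values, assuming sufficient privileges, is through V$DATABASE and V$DATAFILE_HEADER:

SQL> select current_scn, checkpoint_change# from v$database;
SQL> select file#, checkpoint_change# from v$datafile_header;  -- checkpoint SCN in each data file header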
The CKPT process starts working in the following cases:
- A log switch occurs.
- The instance is shut down (except SHUTDOWN ABORT).
- A checkpoint is performed manually.
- It is forced by the initialization parameters LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT.
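For example, a checkpoint can be issued manually and the two parameters inspected like this:

SQL> alter system checkpoint;
SQL> show parameter log_checkpoint_interval
SQL> show parameter log_checkpoint_timeout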
SMON system monitor process (System Monitor)
The primary role of the SMON system monitor process is to perform forced recovery of the database. When the instance starts, if the database was shut down abnormally last time and the SCN values in the log files and control files differ, Oracle automatically applies the records in the redo log files to all data files, control files, and redo log files, ensuring that all database files are consistent before the database is opened.
Suppose, in the checkpoint scenario above, the system loses power or crashes after the redo has been written to the online log files but before the dirty buffers have been written to the data files: will data be lost? Of course not. We know that a power failure loses the data held in memory (the database buffer cache), so the write from the buffer cache to the data files cannot be completed, but the online log files have already been written. The missing changes can therefore be re-applied from the online log files, and how much needs to be applied is naturally determined by the SCN. This process is called "instance recovery"; it requires no manual intervention by the database administrator and is done automatically by the SMON process.
The process is also responsible for cleaning up temporary segments and coalescing extent fragments when the instance starts. The work of the SMON process can therefore be summarized as follows:
- Performing instance recovery
- Coalescing free space in data files
- Releasing temporary segments in data files
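The background processes discussed in these notes can be listed from the data dictionary, for example (only started processes have a non-zero PADDR):

SQL> select name, description from v$bgprocess where paddr <> '00';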
PMON process monitor process (Process Monitor)
The PMON process monitor process is responsible for recovering failed user processes or server processes. When a user process connects to the Oracle server, Oracle assigns a corresponding server process on the server side, and PMON monitors the execution of that user process. When, for whatever reason, the user's connection to the Oracle database crashes, hangs, or terminates abnormally, PMON is responsible for cleaning up the resources the server process was consuming and rolling back uncompleted transactions.
When PMON detects a user process failure, the work it does can be summarized as follows:
- Rolling back the current user's transaction
- Releasing the table-level or row-level locks held by the current user
- Releasing other resources held by the user
- Restarting dead dispatcher processes
Suppose we run SQL*Plus on the client and access the Oracle server over the network; Oracle assigns a corresponding server process on the server side. If the user terminates SQL*Plus unexpectedly, the network disconnects, or the client freezes, PMON must detect the situation and release the resources of that server process.
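As a rough sketch, the sessions and the server processes serving them can be seen like this; these are the entries PMON cleans up when a client dies:

SQL> select s.sid, s.serial#, s.username, s.status, p.spid
     from v$session s, v$process p
     where s.paddr = p.addr and s.type = 'USER';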