-------------------------Oracle Database Physical Structure-------------------------------
Oracle's data is ultimately stored in operating-system files: user data, management data, log data, and so on. As a DBA, you need to know where each kind of data is stored, and an experienced administrator plans the file locations, initial sizes, and increment (growth) properties at database design time. This not only meets the needs of user applications, but also ensures the efficient operation of the database.
Each Oracle database consists of three types of files: control files, data files, and log files. These files provide the physical storage for the database's data.
1. Control files
Each Oracle database has a control file that holds the physical structure information for the database.
The control file records the database name, the names of the database's data files and log files, and the date the database was created.
The control file identifies the database and its log files, and it must be opened whenever database operation begins.
When the composition of the database changes, Oracle automatically updates the control file for that database. The control file is also required for data recovery.
While the database is open, Oracle must be able to write to the control file. Without a control file the database cannot be mounted, and recovering the database is also very difficult.
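For example, you can list the control files of the current database by querying the dynamic performance view V$CONTROLFILE. The code is as follows:
SELECT NAME, STATUS FROM V$CONTROLFILE;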
2. Data files
When data is manipulated, the system reads it from the data files and stores it in Oracle's memory buffers.
New or updated data is not necessarily written to the data files immediately; it is kept in memory, and the database writer process (DBWn) decides when to write it to the data files. This greatly reduces the number of disk accesses and thereby improves system performance.
3. Log files
Each database has two or more log file groups that are used to collect the database log.
The main function of the log is to record changes made to data: all changes made to the database are recorded in the log.
If a failure prevents modified data from being written permanently to the data files, the changes can be recovered from the log,
which ensures that the results of operations that have already occurred are not lost.
The primary purpose of the log files is therefore to prevent the database from losing data in the event of a failure. To keep the log files themselves from becoming a single point of failure,
Oracle allows the use of mirrored logs, maintaining two or more copies of the log on different disks.
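For example, a quick way to see how many log groups the database keeps, and the members of each, is to query the dynamic views V$LOG and V$LOGFILE. The code is as follows:
SELECT GROUP#, MEMBERS, STATUS FROM V$LOG;
SELECT GROUP#, MEMBER FROM V$LOGFILE;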
--------------------------------------------------------------------------------
Redo Log Management
The redo log consists of two or more files that protect all changes made to the database.
Each Oracle database instance has an associated redo log to keep the database safe.
The redo log consists of redo records, and each redo record consists of a set of change vectors, each recording a change to an individual data block in the database. For example,
when a user modifies a piece of data in a table, the system automatically generates a redo record.
Redo logs are used to recover changes to the database and to protect rollback data. When recovering a database from redo data,
the database reads the change vectors from the redo log and applies the changes to the relevant data blocks.
The database contains at least two redo log files: one is always in a writable state to record changes to the database,
while the other is available for the archive operation (when the system's ARCHIVELOG mode is enabled).
So how does Oracle record the redo log? The log writer process (LGWR) is responsible for writing it.
When the current redo log file is filled, LGWR writes subsequent change data to the next redo log file;
when the last available redo log file is filled, LGWR wraps around and writes to the first redo log file again.
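You can observe this circular behavior by forcing a log switch and then checking which group has become CURRENT. A minimal sketch:
ALTER SYSTEM SWITCH LOGFILE;
SELECT GROUP#, SEQUENCE#, STATUS FROM V$LOG;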
To prevent the redo log itself from being corrupted, Oracle provides multiplexed redo logs: the system automatically maintains two or more identical copies of the log in different locations.
From a security standpoint, these copies should be kept on different disks.
Multiplexing is implemented by creating redo log groups, each of which contains a redo log file and its multiplexed copies. Each redo log group is identified by a number, such as group 1, group 2, and so on. LGWR writes to all active members of the same group concurrently; it never writes to different groups at the same time.
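For example, the following statement (the group number and file paths are hypothetical) creates a multiplexed group whose two members sit on different disks, so LGWR writes both copies in parallel:
ALTER DATABASE ADD LOGFILE GROUP 4
('C:\oradata\redo04a.log', 'D:\oradata\redo04b.log') SIZE 50M;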
When redo log files become unavailable, LGWR takes one of the following actions, depending on the situation:
1) LGWR can write to at least one member file in the group: the write operation completes normally. LGWR writes to the accessible member files in the group and ignores the inaccessible ones.
2) During a log switch, LGWR cannot access the next group because that group still needs to be archived: database operation halts temporarily until the group becomes accessible or has been archived.
3) During a log switch, no member of the next group can be accessed because the media is corrupted: the Oracle database returns an error and the database instance is shut down.
A media recovery operation must then be performed from a valid redo log. If the database checkpoint has already advanced past the lost redo log,
media recovery is not required, because the data recorded in the redo log has already been written to the data files; you only need to delete the invalid redo log group.
If the failed log has not yet been archived, execute ALTER DATABASE CLEAR UNARCHIVED LOGFILE to disable the archive operation for it,
after which the log file can be deleted.
4) While LGWR is writing, the member files of all groups suddenly become inaccessible: Oracle returns an error and the database instance is shut down.
In this case, you need to perform media recovery from a valid redo log file. If the media is not damaged and the files merely became unreachable (for example, a cable was accidentally unplugged), no media recovery is required;
simply bring the media back online and let the database perform an automatic instance recovery.
You can limit the number of redo log files with the following parameters:
MAXLOGFILES: use the MAXLOGFILES parameter in the CREATE DATABASE statement to specify the maximum number of redo log file groups per database.
MAXLOGMEMBERS: use the MAXLOGMEMBERS parameter in the CREATE DATABASE statement to specify the maximum number of log files (members) per log file group.
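A minimal sketch of how these limits appear in a CREATE DATABASE statement (the database name and file paths are hypothetical, and a real statement would normally include further clauses):
CREATE DATABASE mydb
MAXLOGFILES 16
MAXLOGMEMBERS 3
LOGFILE GROUP 1 ('C:\oradata\redo01a.log', 'D:\oradata\redo01b.log') SIZE 50M,
GROUP 2 ('C:\oradata\redo02a.log', 'D:\oradata\redo02b.log') SIZE 50M;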
--------------------------------------------------------------------------------
Archive log files and archive modes
An archived log file is a backup of a redo log file group member; it consists of the group's redo entries and its unique log sequence number.
When the database is in archive mode, the log writer process (LGWR) cannot reuse or overwrite a redo log group that has not been archived.
If automatic archiving is enabled, the background process ARCn performs the archive operation automatically. The database can start multiple archiver processes to ensure that log files are archived as soon as they fill up.
You can use the archive log file to achieve the following:
1) Recovering the database
2) Updating a standby database
3) Using LogMiner to obtain historical information about the database
Users can choose automatic or manual archiving, but automatic archiving is more convenient and faster.
The LGWR process writes log information to the online redo log files; once a redo log is full, the ARC0 process archives it.
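To check whether the database is currently in archive mode, and where the archiver writes, you can run the following SQL*Plus command:
ARCHIVE LOG LIST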
--------------------------------------------------------------------------------
SCN
SCN is the abbreviation of System Change Number, an important database mechanism used to record and identify the order in which database operations are performed.
This ordering matters when rolling back transactions and when backing up and recovering the database.
The SCN is an integer that only grows, and the system guarantees it never wraps around. When a user modifies data in the database (inserts, updates, deletes),
the Oracle database does not save the change to a data file immediately; it keeps the data in the buffer cache, records the change in the redo log when the transaction is committed,
and writes the data blocks to the data files later.
Here is a simplified procedure for modifying data:
1) Start a transaction, which is a logical unit of work that contains a set of database operations.
2) Look for the data to be modified in the buffer cache; if it is not found there, read it from the data file and load it into the buffer cache.
3) Modify the data block in the buffer cache and save the change to the log buffer. Because the data in the buffer cache is now inconsistent with the data in the data file,
this data is called "dirty data".
4) When the user commits, the LGWR process writes the contents of the log buffer, together with the newly generated SCN, to the redo log file.
However, to reduce I/O operations, Oracle does not immediately write the dirty data to the data file.
5) When a checkpoint occurs (a checkpoint is an event; when it occurs, the DBWn process writes all changed database buffers in the SGA to the data files),
the CKPT process wakes the DBWn process, updates all the data files and control files of the database,
and records the latest checkpoint so that the next update can start from it.
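You can trigger a checkpoint manually and watch the recorded SCN advance. A simple experiment:
SELECT CHECKPOINT_CHANGE# FROM V$DATABASE;
ALTER SYSTEM CHECKPOINT;
SELECT CHECKPOINT_CHANGE# FROM V$DATABASE;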
--------------------------------------------------------------------------------
The checkpoint event is typically triggered when the SHUTDOWN NORMAL and SHUTDOWN IMMEDIATE statements are executed.
When the checkpoint event occurs, Oracle writes the SCN to the following four places:
1) System checkpoint SCN
When a checkpoint completes, Oracle saves the system checkpoint SCN to the control file. You can view its value in the view V$database.
The code is as follows:
SELECT checkpoint_change# from V$database;
2) Data file checkpoint SCN
When a checkpoint completes, Oracle saves the checkpoint SCN of each data file to the control file.
You can view the value of the data file checkpoint SCN in the system view V$datafile.
The code is as follows:
SELECT name,checkpoint_change# from V$datafile;
3) Start SCN
Oracle stores the checkpoint SCN of each data file in that file's header; this value is called the start SCN.
When the database instance starts, Oracle checks whether the start SCN of each data file matches the checkpoint SCN recorded in the control file.
If they do not match, the missing changes are retrieved from the redo log files and written back to the data file for recovery.
You can view the value of the start SCN in the system view V$datafile_header.
The code is as follows:
SELECT name,checkpoint_change# from V$datafile_header;
4) End SCN (stop SCN)
The end SCN of each data file is saved in the control file. Normally, the end SCN of every data file that is online in read-write mode is null.
You can view the value of the end SCN from the system view V$datafile.
The code is as follows:
SELECT name,last_change# from V$datafile;
When the database shuts down normally (SHUTDOWN NORMAL or SHUTDOWN IMMEDIATE), a checkpoint event is triggered:
the data protected by the redo log files is written to the data files, and the four SCNs above are updated to the most recent value.
While the Oracle database is started and running normally, the system checkpoint SCN in the control file, the data file checkpoint SCNs, and the start SCN in each data file header are all consistent,
and the end SCN of every data file recorded in the control file is null.
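You can verify this consistency by joining V$DATAFILE (the control file's view of each data file) with V$DATAFILE_HEADER (the file headers). On a healthy open database the two checkpoint columns match and LAST_CHANGE# is null:
SELECT d.NAME,
d.CHECKPOINT_CHANGE# AS ctlfile_scn,
h.CHECKPOINT_CHANGE# AS header_scn,
d.LAST_CHANGE# AS stop_scn
FROM V$DATAFILE d JOIN V$DATAFILE_HEADER h ON d.FILE# = h.FILE#;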
When needed, the system automatically generates a new SCN. You can query the latest SCN generated by the current system from the DUAL table, with the following code:
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
-----------------------------------------------------------------------------------
Scenario 99: View information about a control file
Scenario 100: Creating an initial control file
Scenario 101: Create a copy of the control file
Scenario 102: Creating a new control file
First list the log files and data files that the new control file must describe:
SELECT MEMBER from V$logfile;
SELECT NAME from V$datafile;
Scenario 103: Recovering a control file
1. The control file is corrupted, but the directory where the control files are stored is still accessible:
Use the SHUTDOWN command to close the database instance, then use operating system commands to copy a control file copy into the control file's directory, and finally use the STARTUP command to open the database instance.
2. The storage media is corrupted, so the directory where the control files are stored is inaccessible:
First close the database instance, then use operating system commands to copy a control file copy to a new, accessible directory. Edit the CONTROL_FILES parameter to replace the invalid control file directory with the new directory. Finally, use the STARTUP command to open the database instance.
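If the instance uses a server parameter file (spfile), the CONTROL_FILES parameter can be changed with ALTER SYSTEM before the restart; the directory below is hypothetical:
ALTER SYSTEM SET CONTROL_FILES =
'D:\newdir\control01.ctl', 'D:\newdir\control02.ctl' SCOPE=SPFILE;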
Scenario 104: Deleting a control file
Steps:
(1) Close the database.
(2) Edit the value of the CONTROL_FILES parameter to remove the information for the control file to be deleted.
(3) Move the control file to be deleted to other media, or delete it with an operating system command.
(4) Restart the database.
Scenario 105: Viewing data file information
SELECT name,status,bytes from V$datafile;
Scenario 106: Creating a Data file
Create a tablespace and, at the same time, a 50MB data file:
CREATE TABLESPACE mytabs DATAFILE 'C:\Users\Administrator\Desktop\test\mytemfile01.dbf' SIZE 50M;
Create a temporary tablespace with a 50MB temp file:
CREATE TEMPORARY TABLESPACE temtbs TEMPFILE 'C:\Users\Administrator\Desktop\test\mytemp01.dbf' SIZE 50M EXTENT MANAGEMENT LOCAL;
You can modify a tablespace with the ALTER TABLESPACE statement; the ADD DATAFILE clause adds a data file to the tablespace:
ALTER TABLESPACE mytabs ADD DATAFILE 'C:\Users\Administrator\Desktop\test\mydata.dbf' SIZE 50M;
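You can also give an existing data file the increment properties mentioned at the start of this section by letting it grow automatically; a sketch using the file created above:
ALTER DATABASE DATAFILE 'C:\Users\Administrator\Desktop\test\mydata.dbf'
AUTOEXTEND ON NEXT 10M MAXSIZE 500M;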
Scenario 108: Modifying the online status of a data file
Set an online data file to the offline state:
ALTER DATABASE DATAFILE 'C:\Users\Administrator\Desktop\test\USERS01.DBF' OFFLINE;
Bring the data files of a tablespace back to the online state:
ALTER TABLESPACE hrman DATAFILE ONLINE;
Scenario 109: Deleting a data file
Drop the tablespace mytab together with its contents and data files:
DROP TABLESPACE mytab INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS;
Take the specified data file offline and drop it (usable in NOARCHIVELOG mode):
ALTER DATABASE DATAFILE 'C:\Users\Administrator\Desktop\test\MYTAB.DBF' OFFLINE DROP;
Scenario 110: View Redo log information
Query the view V$logfile:
SELECT Group#,status,member from V$logfile;
Query the view V$log_history to display historical information about the redo logs:
SELECT RECID, first_change#, next_change#, resetlogs_change# from V$log_history;
Scenario 111: Creating redo log groups and members
1. Create a log group
ALTER DATABASE ADD LOGFILE ('Log1c.rdo', 'Log2c.rdo') SIZE 5000K;
ALTER DATABASE ADD LOGFILE GROUP 10 ('Log1a.rdo', 'Log2a.rdo') SIZE 5000K;
2. Create redo log members
Add the redo log file Log3a.rdo to the redo log group numbered 10:
ALTER DATABASE ADD LOGFILE MEMBER 'Log3a.rdo' TO GROUP 10;
Scenario 112: Renaming the Redo log file
Required permissions:
The ALTER DATABASE system privilege
Operating system permissions to copy files to the specified locations
Permission to open and back up the database.
Steps:
(1) Use the SHUTDOWN command to close the database.
(2) Copy the redo log files to the new location, then rename them using operating system commands.
(3) Use the STARTUP MOUNT command to mount the database.
(4) Rename the database's redo log files using the ALTER DATABASE statement with the RENAME FILE clause:
ALTER DATABASE
RENAME FILE 'Log1a.rdo', 'Log2a.rdo'
TO 'Log1b.rdo', 'Log2b.rdo';
(5) Open the database with the ALTER DATABASE OPEN command.
Verify that the redo log files were renamed:
SELECT Group#,status,member from V$logfile;
Scenario 113: Deleting redo log groups and members
To delete a redo log group:
ALTER DATABASE DROP LOGFILE GROUP 10;
To delete a redo log member:
ALTER DATABASE DROP LOGFILE MEMBER 'Log2c.rdo';
Scenario 114: Clearing redo log files
ALTER DATABASE CLEAR LOGFILE GROUP 10;
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 10;
Scenario 115: Specify an archive destination
To specify multiple archive destinations, follow these steps:
(1) Use the SHUTDOWN command to close the database.
(2) Edit the LOG_ARCHIVE_DEST_n initialization parameters to specify the archive destinations; use LOCATION for a local directory or SERVICE for a valid network service name:
LOG_ARCHIVE_DEST_1 = 'LOCATION=C:\app\Administrator\product\11.1.0\db_1\database\archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION=D:\app\Administrator\product\11.1.0\db_1\database\archive'
LOG_ARCHIVE_DEST_3 = 'LOCATION=F:\app\Administrator\product\11.1.0\db_1\database\archive'
LOG_ARCHIVE_DEST_4 = 'SERVICE=Standby1'
(3) Optionally edit the LOG_ARCHIVE_FORMAT initialization parameter to set the naming format of the archived log files:
LOG_ARCHIVE_FORMAT = 'Arch_%t_%s_%r.arc'
Alternatively, on a database that does not use LOG_ARCHIVE_DEST_n, you can specify a primary and a duplex destination instead:
LOG_ARCHIVE_DEST = 'C:\app\Administrator\product\11.1.0\db_1\database\archive'
LOG_ARCHIVE_DUPLEX_DEST = 'D:\app\Administrator\product\11.1.0\db_1\database\archive'
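If the instance uses an spfile, these destinations can also be changed at runtime, since the LOG_ARCHIVE_DEST_n parameters are dynamically modifiable; the directory below is hypothetical:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'LOCATION=D:\archive2' SCOPE=BOTH;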
Scenario 116: Managing Archiving Mode
Archived logging has two modes: ARCHIVELOG (archive mode) and NOARCHIVELOG (non-archive mode).
To switch the archive mode
(1) Use SHUTDOWN to close the database.
(2) Back up the database.
(3) Edit the initialization parameter file to set the location of the archived log files.
(4) Mount the database using the STARTUP MOUNT statement.
(5) Switch to archive mode using the ALTER DATABASE ARCHIVELOG statement (or to non-archive mode with ALTER DATABASE NOARCHIVELOG).
(6) Reopen the database using the ALTER DATABASE OPEN statement.
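Put together, a typical switch to archive mode looks like this in SQL*Plus (SHUTDOWN, STARTUP, and ARCHIVE LOG LIST are SQL*Plus commands; the backup and parameter edits of steps (2) and (3) are omitted from this sketch):
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST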
You can also adjust the number of archiver processes:
ALTER SYSTEM SET log_archive_max_processes=4;
Scenario 117: Performing a manual archive
To set the database manual archiving mode:
ALTER DATABASE ARCHIVELOG MANUAL;
To manually archive all log files:
ALTER SYSTEM ARCHIVE LOG all;
Scenario 118: View the current DB Instance archive mode
SELECT Log_mode from V$database;
Scenario 119: Viewing log information
Scenario 120: View Historical Archive log information using the V$archived_log view
SELECT NAME, sequence#, STATUS from V$archived_log;
Scenario 121: Use V$archive_dest view to view archived destination information
COL dest_name FORMAT A20
COL DESTINATION FORMAT A20
SELECT Dest_name, STATUS, DESTINATION from V$archive_dest;
Scenario 122: View archive process information using the V$archive_processes view
SELECT * from V$archive_processes;
Scenario 123: View backup redo log information using the V$backup_redolog view
SELECT RECID, STAMP, thread#, sequence# from V$backup_redolog;
Scenario 124: View temporary file information using V$tempfile
COL NAME FORMAT A40
SELECT NAME, STATUS, BYTES from V$tempfile;
Scenario 125: Viewing the alert log file location
SELECT VALUE from V$parameter
WHERE name = 'background_dump_dest';
-------------------------Oracle 442 Application Scenarios: Oracle Database Physical Structure-------------------------------