Oracle 442 Application Scenarios --------- Physical Structure of the Oracle Database

Source: Internet
Author: User


------------------------- Physical structure of Oracle Database -------------------------------

Physical Structure of Oracle Database
Oracle data is ultimately stored in operating-system files. Besides user data, the database must also store management data and log data. As a DBA, you must know where each kind of data resides. An experienced administrator also plans the storage location, initial size, and growth attributes of these files during database design, so that the database both meets the needs of user applications and runs efficiently.
Every Oracle database is composed of three types of files: control files, data files, and log files. These files provide the physical storage for the database's data.

1. Control File
Every Oracle database has a control file that stores information about the physical structure of the database: the database name, the names and locations of the data files and log files, the database creation date, and so on.
The control file identifies the database and its log files, and it must be available whenever the database is started. Whenever the structure of the database changes, Oracle automatically updates the control file. The control file is also required for recovery: Oracle writes to it the whole time the database is open. Without a control file, the database cannot be mounted, and recovery becomes very difficult.
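The locations of the current control file copies can be checked from SQL*Plus. A minimal sketch using the standard dynamic view and parameter (output depends on your installation):

```sql
-- List every control file copy known to the instance
SELECT NAME FROM V$CONTROLFILE;

-- The same information, as seen in the initialization parameter
SHOW PARAMETER control_files
```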

2. Data Files
During data operations, the system reads data from the data files into the Oracle memory buffer cache.
Newly created or updated data is not written to the data files immediately; it stays in memory, and the database writer process (DBWn) decides when to write it to the data files. This batching greatly reduces the number of disk accesses and improves system performance.

3. Log Files
Each database has two or more log file groups that collect the database's redo.
The main function of a log is to record the changes made to data: every change made to the database is recorded in the log.
If modified data cannot be written permanently to a data file when a failure occurs, the change can be recovered from the log, so the results of the operation are not lost.
The main function of log files is therefore to prevent data loss when the database fails. To guard against failure of a log file itself,
Oracle allows mirrored (multiplexed) logs, keeping two or more copies of each log on different disks.

--------------------------------------------------------------------------------
Redo log management

The redo log consists of two or more files that protect all changes made to the database. Each Oracle database instance has an associated redo log that safeguards the database.

The redo log is made up of redo records, and each redo record is composed of a group of change vectors; each change vector records a change to an individual data block in the database. For example, when you modify a record in a table, the system automatically generates a redo record.
Redo logs are used to replay database changes during recovery and also protect rollback data. When redo is applied to the database, the database reads the change vectors from the redo log and applies the changes to the relevant data blocks.
The database must contain at least two redo log files: one is always available for writing and records the current database changes, while the other can be archived (when the database runs in ARCHIVELOG mode).
So how does Oracle write redo logs? The log writer process (LGWR) records them.
When the current redo log file fills up, LGWR switches to the next redo log file; when the last redo log file fills up, LGWR wraps around and writes to the first one again.
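The cycle described above can be observed directly: V$LOG shows which group LGWR is currently writing, and a manual log switch advances it to the next group. A sketch, assuming a user with the ALTER SYSTEM privilege:

```sql
-- Which group is CURRENT, which are ACTIVE/INACTIVE, and
-- whether each has been archived yet
SELECT GROUP#, SEQUENCE#, STATUS, ARCHIVED FROM V$LOG;

-- Force LGWR to advance to the next redo log group
ALTER SYSTEM SWITCH LOGFILE;
```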

To prevent damage to the redo log, Oracle provides multiplexed redo logs: the system automatically maintains two or more copies of each log in different locations.
For safety, the copies should be stored on different disks.
Multiplexing is implemented with redo log groups; each group consists of a redo log file and its copies (members). Each redo log group is identified by a number, such as group 1 and group 2. All members of the current group must be active so that LGWR can write them simultaneously; LGWR never writes to members of different groups at the same time.
When redo log members become unavailable, LGWR takes the following actions:
1) LGWR can write at least one member of the group: the write completes normally. LGWR writes to the accessible members of the group and ignores the inaccessible ones.
2) At a log switch, LGWR cannot use the next group because it still needs to be archived: database operation halts temporarily until the group becomes available, that is, until it has been archived.
3) At a log switch, no member of the next group is accessible because of media damage: Oracle returns an error and the database instance shuts down.
In this case you must perform media recovery from a valid redo log. If the database checkpoint has already passed the lost redo log,
media recovery is not required, because the data recorded in that redo log has already been written to the data files; you only need to clear the invalid redo log group.
If the database has not yet archived the invalid log, execute ALTER DATABASE CLEAR UNARCHIVED LOGFILE to skip archiving of it,
so that the log file can be cleared.
4) All members of the current group suddenly become inaccessible while LGWR is writing: Oracle returns an error and the database instance shuts down.
In this case you must perform media recovery from a valid redo log file. If the media is not damaged but was only accidentally taken offline, no media recovery is needed:
bring the media back online and let the database perform automatic instance recovery.

You can control the number of redo logs with the following parameters:
MAXLOGFILES: the MAXLOGFILES parameter of the CREATE DATABASE statement specifies the maximum number of redo log groups the database can have.
MAXLOGMEMBERS: the MAXLOGMEMBERS parameter of the CREATE DATABASE statement specifies the maximum number of members (files) in each log group.
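A minimal sketch of how these limits appear in a CREATE DATABASE statement; the database name, file names, and sizes below are illustrative assumptions, not values from this article:

```sql
-- Sketch: create a database with two multiplexed log groups,
-- capped at 16 groups and 4 members per group
CREATE DATABASE mydb
   LOGFILE GROUP 1 ('log1a.rdo', 'log1b.rdo') SIZE 50M,
           GROUP 2 ('log2a.rdo', 'log2b.rdo') SIZE 50M
   MAXLOGFILES 16      -- at most 16 redo log groups
   MAXLOGMEMBERS 4     -- at most 4 members per group
   DATAFILE 'system01.dbf' SIZE 500M;
```

These limits are fixed in the control file at creation time, which is why changing them later requires re-creating the control file (see Application Scenario 102).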


--------------------------------------------------------------------------------
Archive log files and archive Modes

An archived log file is a copy of one member of a filled redo log group; it contains the same redo records and the same unique log sequence number.
When the database is in ARCHIVELOG mode, the log writer process (LGWR) cannot reuse or overwrite a redo log group until it has been archived.
If automatic archiving is enabled, the archiver processes (ARCn) perform the archiving in the background; the database can start several of these processes to ensure that logs are archived as soon as they fill up.

Archived log files can be used to:

1) Restore the database
2) Update a standby database
3) Obtain historical database information with LogMiner
You can choose automatic or manual archiving; automatic archiving is the more convenient mode.
The LGWR process writes log information to the online redo log files; once a redo log is full, an ARCn process (for example ARC0) archives it.
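Whether the instance is in ARCHIVELOG mode, and where archiving currently stands, can be checked with a sketch like the following (run as a privileged user in SQL*Plus):

```sql
-- Summary: log mode, automatic archival, destination,
-- and oldest/current log sequence numbers
ARCHIVE LOG LIST

-- Equivalent mode information from the data dictionary
SELECT LOG_MODE FROM V$DATABASE;
```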


--------------------------------------------------------------------------------
SCN

SCN is short for System Change Number. It is an important database mechanism used to record and identify the order in which database operations occur.
This ordering is essential when rolling back transactions, taking backups, and recovering the database.

The SCN is a monotonically increasing integer, and the system guarantees that it will not overflow. When you modify data in the database (insert, update, delete),
Oracle does not save the change to a data file immediately; it keeps the change in the buffer cache and writes it to the data files only later, after the transaction is committed.

The following is a simplified flow of a data modification:
1) A transaction starts. A transaction is a logical unit of work that contains a group of database operations.
2) The data to be modified is searched for in the buffer cache; if it is not there, it is read from the data file and loaded into the buffer cache.
3) The data block is modified in the buffer cache and the modification is recorded in the log buffer. Because the data in the buffer cache now differs from the data in the data file,
it is called "dirty data".
4) When the user commits, the LGWR process writes the contents of the log buffer, together with the newly generated SCN, to the redo log file.
Oracle still does not write the dirty data to the data files immediately, which reduces I/O operations.
5) When a checkpoint occurs (a checkpoint is an event), the CKPT process signals the DBWn process,
which writes all the changed database buffers in the SGA to the data files. CKPT then updates the headers of all data files and the control file,
recording the latest checkpoint so that the next recovery can start from it.
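Step 5 can also be triggered on demand: a full checkpoint forces DBWn to flush all dirty buffers immediately. A minimal sketch, assuming the ALTER SYSTEM privilege:

```sql
-- Force an immediate full checkpoint: DBWn writes all dirty
-- buffers, and CKPT updates the data file headers and control file
ALTER SYSTEM CHECKPOINT;
```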


--------------------------------------------------------------------------------
The checkpoint event is also triggered when SHUTDOWN NORMAL or SHUTDOWN IMMEDIATE is executed.
When a checkpoint occurs, Oracle writes the SCN to the following four places:

1) System checkpoint SCN
After a checkpoint completes, Oracle saves the system checkpoint SCN in the control file. You can view it in the view V$DATABASE.

The code is as follows:
SELECT CHECKPOINT_CHANGE# FROM V$DATABASE;

2) Data file checkpoint SCN
After a checkpoint completes, Oracle saves the checkpoint SCN of each data file in the control file.
You can view the data file checkpoint SCN in the system view V$DATAFILE.
The code is as follows:
SELECT NAME, CHECKPOINT_CHANGE# FROM V$DATAFILE;

3) Start SCN
Oracle stores the checkpoint SCN of each data file in that data file's header; this copy is called the start SCN.
When the database instance starts, Oracle checks whether the start SCN of each data file matches the checkpoint SCN in the control file.
If they do not match, Oracle retrieves the missing changes from the redo log files and reapplies them to the data file to recover it.
You can view the start SCN in the system view V$DATAFILE_HEADER.

The code is as follows:
SELECT NAME, CHECKPOINT_CHANGE# FROM V$DATAFILE_HEADER;

4) Stop SCN (end SCN)
The stop SCN of each data file is saved in the control file. Normally, the stop SCN of every data file that is online in read/write mode is NULL.
You can view the stop SCN in the system view V$DATAFILE.
The code is as follows:
SELECT NAME, LAST_CHANGE# FROM V$DATAFILE;

When the database is shut down normally (by executing SHUTDOWN NORMAL or SHUTDOWN IMMEDIATE), the checkpoint event is triggered:
the changes recorded in the redo log files are written to the data files, and all four SCNs above are updated to the latest value.
While the Oracle database is open and running normally, the control file checkpoint SCN and the data file checkpoint SCNs are consistent,
and the stop SCN of each data file in the control file is NULL.
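The consistency described above can be verified by listing the SCNs from the control file and the data file headers side by side; a sketch, assuming a normally running open database:

```sql
-- Control file checkpoint SCN vs. each data file header's start SCN;
-- on a healthy open database the SCNs match and LAST_CHANGE# is NULL
SELECT D.CHECKPOINT_CHANGE# AS SYSTEM_SCN,
       H.NAME,
       H.CHECKPOINT_CHANGE# AS START_SCN,
       F.LAST_CHANGE#       AS STOP_SCN
FROM   V$DATABASE D, V$DATAFILE_HEADER H, V$DATAFILE F
WHERE  H.FILE# = F.FILE#;
```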

When necessary, the system automatically generates a new SCN. You can view the latest SCN generated by the system from the DUAL table. The code is as follows:
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;

--------------------------------------------------------------------------------

Application Scenario 99: View Control File Information

Application Scenario 100: Create an initial control file

Application Scenario 101: Create a control file copy

Application Scenario 102: Create a new control file

SELECT MEMBER FROM V$LOGFILE;
SELECT NAME FROM V$DATAFILE;


Application Scenario 103: Restore control files
1. The control file is damaged, but the directory that stores it is still accessible:
Shut down the database instance with the SHUTDOWN command, copy a control file copy into the control file directory, and start the instance with the STARTUP command.
2. The storage media is damaged, and the directory that stores the control file is no longer accessible:
Shut down the database instance; use operating system commands to copy a control file copy to a new, accessible directory; modify the CONTROL_FILES parameter to replace the invalid directory with the new one; finally, start the instance with the STARTUP command.

Application Scenario 104: Delete control files

Steps:
(1) Shut down the database.
(2) Edit the value of the CONTROL_FILES parameter to remove the specified control file.
(3) Delete the unwanted control file from its media with operating system commands.
(4) Restart the database.

Application Scenario 105: View data file information

SELECT NAME, STATUS, BYTES FROM V$DATAFILE;

Application Scenario 106: Create a data file

Create a tablespace with a 50 MB data file:
CREATE TABLESPACE mytabs DATAFILE 'C:\Users\Administrator\Desktop\test\mytemfile01.dbf' SIZE 50M;

Create a temporary tablespace with a 50 MB temporary file:
CREATE TEMPORARY TABLESPACE temtbs TEMPFILE 'C:\Users\Administrator\Desktop\test\mytemp01.dbf' SIZE 50M EXTENT MANAGEMENT LOCAL;

You can add a data file to an existing tablespace with the ADD DATAFILE clause of the ALTER TABLESPACE statement:

ALTER TABLESPACE mytabs ADD DATAFILE 'C:\Users\Administrator\Desktop\test\mydata.dbf' SIZE 50M;

Application Scenario 108: Modify the online status of a data file

Take an online data file offline:
ALTER DATABASE DATAFILE 'C:\Users\Administrator\Desktop\test\users01.dbf' OFFLINE;

Bring the data files of a tablespace back online:
ALTER TABLESPACE hrman DATAFILE ONLINE;

Application Scenario 109: Delete data files

Drop the tablespace mytab together with its contents:
DROP TABLESPACE mytab INCLUDING CONTENTS CASCADE CONSTRAINTS;

Drop a specified data file:
ALTER DATABASE DATAFILE 'C:\Users\Administrator\Desktop\test\MYTAB.dbf' OFFLINE DROP;

Application Scenario 110: View redo log information

Query the view V$LOGFILE:
SELECT GROUP#, STATUS, MEMBER FROM V$LOGFILE;

Query the view V$LOG_HISTORY to display redo log history:
SELECT RECID, FIRST_CHANGE#, NEXT_CHANGE#, RESETLOGS_CHANGE# FROM V$LOG_HISTORY;

Application Scenario 111: Create a redo log group or member
1. Create a log group:
ALTER DATABASE ADD LOGFILE ('log1c.rdo', 'log2c.rdo') SIZE 5000K;

ALTER DATABASE ADD LOGFILE GROUP 10 ('log1a.rdo', 'log2a.rdo') SIZE 5000K;

2. Create a redo log member:
Add the redo log file log3a.rdo to redo log group 10:
ALTER DATABASE ADD LOGFILE MEMBER 'log3a.rdo' TO GROUP 10;

Application Scenario 112: Rename redo log files

Required privileges:
The ALTER DATABASE system privilege
Operating system permission to copy files to the target location
Permission to open and back up the database

Steps:
(1) Shut down the database with the SHUTDOWN command.
(2) Copy the redo log files to the new location and rename them with operating system commands.
(3) Mount the database with the STARTUP MOUNT command.
(4) Rename the database's redo log files with the ALTER DATABASE statement and its RENAME FILE clause:

ALTER DATABASE
RENAME FILE 'log1a.rdo', 'log2a.rdo'
TO 'log1b.rdo', 'log2b.rdo';
(5) Open the database with the ALTER DATABASE OPEN command.

Verify that the redo log files were renamed:
SELECT GROUP#, STATUS, MEMBER FROM V$LOGFILE;

Application Scenario 113: Delete a redo log group or member

Delete a redo log group:
ALTER DATABASE DROP LOGFILE GROUP 10;

Delete a redo log member:
ALTER DATABASE DROP LOGFILE MEMBER 'log2c.rdo';

Application Scenario 114: Clear a redo log file

ALTER DATABASE CLEAR LOGFILE GROUP 10;

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 10;

Application Scenario 115: Specify archiving destinations

To specify multiple archiving destinations, follow these steps:
(1) Shut down the database with the SHUTDOWN command.
(2) Set the LOG_ARCHIVE_DEST_n initialization parameters; LOCATION specifies a local directory, and SERVICE specifies a valid network service name for a remote destination:

LOG_ARCHIVE_DEST_1 = 'LOCATION=C:\app\Administrator\product\11.1.0\db_1\database\archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION=D:\app\Administrator\product\11.1.0\db_1\database\archive'
LOG_ARCHIVE_DEST_3 = 'LOCATION=F:\app\Administrator\product\11.1.0\db_1\database\archive'
LOG_ARCHIVE_DEST_4 = 'SERVICE=standby1'
(3) Edit the initialization parameter LOG_ARCHIVE_FORMAT to set the naming format of archived log files:

LOG_ARCHIVE_FORMAT = arch_%t_%s_%r.arc

Alternatively, the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters can specify a primary and a duplex destination:
LOG_ARCHIVE_DEST = 'C:\app\Administrator\product\11.1.0\db_1\database\archive'
LOG_ARCHIVE_DUPLEX_DEST = 'D:\app\Administrator\product\11.1.0\db_1\database\archive'

Application Scenario 116: Archive management
A database runs in one of two archiving modes: ARCHIVELOG (archive mode) or NOARCHIVELOG (non-archive mode).

Steps for switching the archive mode:
(1) Shut down the database with the SHUTDOWN command.
(2) Back up the database.
(3) Edit the initialization parameter file to set the location of the archived log files.
(4) Mount the database with the STARTUP MOUNT statement.
(5) Switch to archive or non-archive mode with the ALTER DATABASE ARCHIVELOG (or NOARCHIVELOG) statement.
(6) Reopen the database with the ALTER DATABASE OPEN statement.

You can set the maximum number of archiver processes:
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;

Application Scenario 117: Manual archiving

Set the database to manual archiving mode:
ALTER DATABASE ARCHIVELOG MANUAL;
Manually archive all log files:
ALTER SYSTEM ARCHIVE LOG ALL;

Application Scenario 118: View the archiving mode of the current database instance

SELECT LOG_MODE FROM V$DATABASE;

Application Scenario 119: View log information

Application Scenario 120: Use the V$ARCHIVED_LOG view to view historical archived log information

SELECT NAME, SEQUENCE#, STATUS FROM V$ARCHIVED_LOG;

Application Scenario 121: Use the V$ARCHIVE_DEST view to view archive destination information

COL DEST_NAME FORMAT A20
COL DESTINATION FORMAT A20
SELECT DEST_NAME, STATUS, DESTINATION FROM V$ARCHIVE_DEST;

Application Scenario 122: Use the V$ARCHIVE_PROCESSES view to view archiver process information

SELECT * FROM V$ARCHIVE_PROCESSES;

Application Scenario 123: Use V$BACKUP_REDOLOG to view backed-up redo log information

SELECT RECID, STAMP, THREAD#, SEQUENCE# FROM V$BACKUP_REDOLOG;

Application Scenario 124: Use V$TEMPFILE to view temporary file information

COL NAME FORMAT A40
SELECT NAME, STATUS, BYTES FROM V$TEMPFILE;

Application Scenario 125: View the alert log file location

SELECT VALUE FROM V$PARAMETER
WHERE NAME = 'background_dump_dest';
