Oracle Database Architecture Analysis


Figure: the overall Oracle database architecture.

From the figure, we can see that an Oracle database server is divided into two main parts: 1. the Oracle instance (the upper part of the figure) and 2. the database itself (the lower part).

A user never connects to the database directly. Instead, the client starts a user process, which communicates with a server process spawned by the Oracle server; the server process works with the instance, and the instance in turn operates on the underlying database. That chain is how users end up interacting with the database.
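For example, on a dedicated-server connection you can see the pairing between your session and its server process by querying the v$session and v$process views (a minimal sketch, assuming access to the V$ views; the column aliases are only illustrative):

    -- Show the current session and the OS server process that serves it
    SELECT s.sid,
           s.username,
           s.program  AS client_program,
           p.spid     AS server_process_os_pid
    FROM   v$session s
    JOIN   v$process p ON p.addr = s.paddr
    WHERE  s.sid = SYS_CONTEXT('USERENV', 'SID');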

The following describes the specific functions and features of each component in the architecture.

1. First of all, a general picture of how the pieces fit together:

Oracle Server = instance + database

Instance = SGA (instance memory) + background processes

Memory = SGA + PGA

SGA = database buffer cache + shared pool + redo log buffer (plus optional pools)
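As a quick sanity check of this breakdown, the SGA summary and the aggregate PGA statistics can be queried from a running instance (a sketch, assuming access to the V$ views; the statistic names are those exposed by v$pgastat):

    -- SGA summary: fixed size, variable size, database buffers, redo buffers
    SELECT * FROM v$sga;

    -- Aggregate PGA figures for the instance
    SELECT name, value
    FROM   v$pgastat
    WHERE  name IN ('aggregate PGA target parameter', 'total PGA allocated');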

2. The SGA (instance memory) structure: the first three components are required, the rest are optional.

1) Database buffer cache: (1) holds images of data blocks read from the data files; (2) it can greatly improve performance for reads and updates, because a block image stays in the cache for a while, so the next operation on that block can be done directly in memory; (3) its size can be adjusted dynamically or managed automatically.
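A rough way to check the current size of the buffer cache (a sketch; the component name is the one used by v$sga_dynamic_components):

    -- Current size of the default buffer cache, in megabytes
    SELECT component,
           current_size / 1024 / 1024 AS size_mb
    FROM   v$sga_dynamic_components
    WHERE  component = 'DEFAULT buffer cache';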

2) Redo log buffer: records the changes made to database blocks so they can be replayed during recovery. Its size is set by the log_buffer parameter. Every change is recorded here before the corresponding database block is modified.
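The current value of log_buffer can be checked directly (a minimal sketch):

    -- Size of the redo log buffer, in bytes, as set by the log_buffer parameter
    SELECT name, value
    FROM   v$parameter
    WHERE  name = 'log_buffer';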

3) Shared pool: contains the library cache and the data dictionary cache. The library cache holds recently used SQL and PL/SQL statements (so repeated executions can reuse the parsed form and avoid hard parsing); the data dictionary cache holds recently used definitions of database objects such as tables, indexes, columns, and user privileges.
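How well these two caches are being reused can be estimated from v$librarycache and v$rowcache (a sketch; interpretation of the ratios varies with workload and version):

    -- Library cache reuse: a high gethitratio means statements are found and reused
    SELECT namespace, gets, gethitratio
    FROM   v$librarycache
    WHERE  namespace = 'SQL AREA';

    -- Data dictionary cache: a low getmisses/gets ratio means definitions are well cached
    SELECT SUM(gets) AS gets, SUM(getmisses) AS getmisses
    FROM   v$rowcache;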

4) Large pool: an optional part of the SGA. It provides session memory for shared-server connections, which relieves pressure on the shared pool. Its size can be changed dynamically or managed automatically.

5) Java pool: memory used for parsing and running Java code inside the database; it can be resized dynamically or managed automatically.

6) Streams pool: holds data used by Oracle Streams, so that Streams activity has its own cache and does not crowd out the other components.
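The current sizes of these optional pools can be read from the same dynamic-components view (a sketch; a size of 0 simply means the pool is not configured):

    -- Current sizes of the optional SGA pools, in megabytes
    SELECT component,
           current_size / 1024 / 1024 AS size_mb
    FROM   v$sga_dynamic_components
    WHERE  component IN ('large pool', 'java pool', 'streams pool');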

3. Next, the background processes of the instance:

1) PMON (process monitor): cleans up after processes whose sessions end abnormally, which includes rolling back their transactions and releasing their locks. It also dynamically registers the instance with the listener.

2) SMON (system monitor): performs instance recovery (rolling forward the changes recorded in the redo logs, then rolling back uncommitted transactions) and cleans up temporary segments.
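The background processes actually running in the instance, PMON and SMON among them, can be listed from v$bgprocess (a sketch; the paddr filter is the usual idiom for excluding unused process slots):

    -- Background processes that are currently running
    SELECT name, description
    FROM   v$bgprocess
    WHERE  paddr <> '00'
    ORDER  BY name;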

3) DBWn (database writer): writes dirty blocks from the database buffer cache to the data files on disk. It writes when: 1. a checkpoint occurs; 2. the number of dirty buffers reaches a threshold (roughly a quarter of the cache); 3. no free buffers are available.
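DBWn activity shows up in system statistics such as these (a sketch; the statistic names come from v$sysstat):

    -- Blocks DBWn has written from the buffer cache, and how many were checkpoint-driven
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('physical writes from cache', 'DBWR checkpoint buffers written');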

4) LGWR (log writer): flushes redo entries from the redo log buffer to the online redo log files on disk. It writes: 1. when a transaction commits; 2. when the buffer is one-third full; 3. when about 1 MB of redo has accumulated; 4. every three seconds; and always before DBWn writes the corresponding dirty blocks.
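LGWR activity can likewise be observed through the redo-related statistics (a sketch):

    -- Redo generated, number of LGWR writes, and the commits that triggered them
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('redo size', 'redo writes', 'user commits');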

5) CKPT (checkpoint process): signals DBWn to start writing dirty blocks, and updates the data file headers and the control file (that is, records the latest checkpoint SCN), by default every three seconds. Consistency here means three SCNs must agree: the SCN in each data file header, the SCN the control file records for that data file, and the overall checkpoint SCN in the control file. The more often checkpoints fire, the sooner DBWn writes, so fewer dirty blocks sit in the buffer cache and instance recovery takes less time; but the extra I/O hurts performance, so recovery speed and runtime performance pull against each other.
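The three SCNs mentioned above can be compared directly: the database-wide checkpoint SCN, the per-file SCN recorded in the control file, and the SCN in each data file's own header (a sketch):

    -- Database-wide checkpoint SCN recorded in the control file
    SELECT checkpoint_change# FROM v$database;

    -- Per-file SCN from the control file (v$datafile) versus the data file header (v$datafile_header)
    SELECT d.file#,
           d.checkpoint_change# AS scn_in_controlfile,
           h.checkpoint_change# AS scn_in_file_header
    FROM   v$datafile d
    JOIN   v$datafile_header h ON h.file# = d.file#;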

6) ARCn (archiver): copies filled online redo log files to archived log files when archive mode is enabled. Redo log files typically come in at least two groups that are written in a circular fashion: while one group records current changes, the other can be archived. If the group being written fills up and a switch is due, but the other group has not finished archiving, the database must wait until archiving completes, so it appears to hang.
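Whether archiving is enabled, and the state of each redo log group, can be checked like this (a sketch):

    -- Is the database in ARCHIVELOG mode?
    SELECT log_mode FROM v$database;

    -- Status of each online redo log group and whether it has been archived yet
    SELECT group#, members, status, archived
    FROM   v$log;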
