Oracle Basics: Oracle Architecture

Source: Internet
Author: User
Tags: dedicated server, server, memory

I. Overview of the Oracle Architecture

Oracle's architecture refers to the composition of the database, how it works, and how data in the database is organized and managed. To understand the architecture of the Oracle database, you must understand the important concepts and major components of the Oracle system.

The Oracle system architecture consists of three parts: the memory structure, the process structure, and the storage structure.


1. Memory Structure (SGA, PGA)

The memory structure includes the System Global Area (SGA) and the Program Global Area (PGA).


  2. Process structure

The process structure includes foreground processes and background processes. Foreground processes are the server processes and user processes; they run according to actual needs and end as soon as the need is met. Background processes are operating system processes that start automatically after the Oracle database is started.

  3. Storage structure (Database)

Oracle's storage structure consists of a logical structure and a physical structure. The physical structure is a set of files in the operating system. The logical structure is a hierarchy built mainly from tablespaces, segments, extents, and data blocks.

II. Memory Structure

The memory structure is the most important part of Oracle, and memory is the first factor that affects database performance.

The main contents of Oracle's memory storage are as follows:

1. Program code (PL/SQL, Java).

2. Information about connected sessions, including all currently active and inactive sessions.

3. Information required by running programs, such as query plans.

4. Information that is shared and communicated between Oracle processes, such as locks.

5. Data that is cached in memory before being permanently written to external storage media (such as data blocks).

Depending on how the memory is used, Oracle's memory is divided into the System Global Area (SGA) and the Program Global Area (PGA).

  1. SGA (System Global Area): the shared memory area of the instance, accessible to all users. Data blocks, redo log entries, data dictionary information, and so on are stored in the SGA.

The SGA is a group of shared memory buffers that the Oracle system allocates to an instance to hold database data and the control information used to manage and operate on that data.

The SGA mainly includes:

  1) Data buffer:    

The data block buffer cache is an area in the SGA used to hold data blocks (for tables, indexes, clusters, and so on) read from data segments in the database. The size of the block buffer cache is determined by the DB_BLOCK_BUFFERS parameter in the database server's init.ora file (expressed as a number of database blocks). Tuning the size of the block buffer cache is an important part of tuning and managing the database.

Because the size of the block buffer cache is fixed and usually smaller than the space used by the database's segments, it cannot hold all of the database's segments in memory at once. Typically, the block buffer cache is only 1%-2% of the database size, so Oracle uses a least recently used (LRU) algorithm to manage the available space: when the cache needs free space, the least recently used blocks are aged out, and new data blocks take their place. In this way, the most frequently used data stays in memory.
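The LRU replacement just described can be sketched in a few lines of Python. This is a toy illustration of the replacement policy only, not Oracle's actual implementation; the class and method names are invented for the example:

```python
from collections import OrderedDict

class BufferCache:
    """Toy LRU buffer cache: least recently used blocks are evicted first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> block contents

    def get(self, block_id):
        """Read a block; a hit moves it to the most-recently-used end."""
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        return None  # cache miss: caller must read from disk and call put()

    def put(self, block_id, data):
        """Load a block from disk, evicting the LRU block if the cache is full."""
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)
        elif len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block
        self.blocks[block_id] = data

cache = BufferCache(capacity=2)
cache.put("blk1", "a")
cache.put("blk2", "b")
cache.get("blk1")            # blk1 becomes most recently used
cache.put("blk3", "c")       # cache full: blk2 (the LRU block) is evicted
print(sorted(cache.blocks))  # ['blk1', 'blk3']
```

The key property is the one the text describes: a block that is touched again moves to the "recently used" end, so the blocks aged out are always the ones untouched the longest.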

However, if the SGA is not large enough to hold all of the most commonly used data, different objects will contend for space in the block buffer cache. This is most likely to happen when multiple applications share the same SGA: the most recently used segments of each application compete for SGA space with those of the other applications. As a result, data requests against the block buffer cache get a lower hit rate, and system performance degrades.


  2) Dictionary buffer:

Information about database objects is stored in the data dictionary tables, which include user account data, data file names, segment names, extents, table descriptions, and privileges. When the database needs this information (for example, to check a user's authorization to query a table), it reads the data dictionary tables and stores the returned data in the dictionary cache in the SGA.

The dictionary cache is managed by a least recently used (LRU) algorithm, and its size is managed internally by the database. The dictionary cache is part of the shared pool, whose size is set by the SHARED_POOL_SIZE parameter in the init.ora file.

If the dictionary cache is too small, the database must repeatedly query the data dictionary tables for the information it needs. These queries are called recursive calls, and they are slower than lookups answered directly from the dictionary cache.


  3) Log buffer:

Redo entries describe the modifications made to the database. They are written to the online redo log files so they can be used for roll-forward during database recovery. Before being written to the online redo log files, however, transactions are first recorded in an SGA area called the redo log buffer. The database optimizes this work by periodically writing the contents of the redo log buffer to the online redo log files in batches. The size of the redo log buffer, in bytes, is determined by the LOG_BUFFER parameter in the init.ora file.

  4) Shared pool:    

The shared pool stores the data dictionary cache and the library cache, which holds information about the statements run against the database. While the block buffer cache and dictionary cache share structural and data information among database users, the library cache allows commonly used SQL statements themselves to be shared.

The shared pool holds the execution plans and parse trees of SQL statements run against the database. When the same SQL statement is run a second time (by any user), the parse information already in the shared pool can be reused to speed up execution.

The SQL shared pool is managed by the LRU algorithm. When the SQL shared pool fills up, the least recently used execution path and parsing tree are deleted from the library cache to make room for new entries. If the SQL shared pool is too small, the statement will be continuously loaded into the library buffer, which can affect operational performance.
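The library-cache behavior described above, reusing parse work when the same statement text arrives again and evicting the least recently used entries when the pool fills, can be sketched as follows. This is a hypothetical illustration with invented names, not Oracle's implementation:

```python
from collections import OrderedDict

class LibraryCache:
    """Toy library cache: parsed plans shared by exact SQL text, LRU-evicted."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.plans = OrderedDict()   # sql_text -> cached parse result
        self.hard_parses = 0
        self.soft_parses = 0

    def parse(self, sql_text):
        if sql_text in self.plans:           # soft parse: reuse the cached plan
            self.soft_parses += 1
            self.plans.move_to_end(sql_text)
        else:                                # hard parse: build and cache a plan
            self.hard_parses += 1
            if len(self.plans) >= self.capacity:
                self.plans.popitem(last=False)   # evict the LRU entry
            self.plans[sql_text] = f"plan_for({sql_text})"
        return self.plans[sql_text]

lc = LibraryCache(capacity=100)
lc.parse("SELECT * FROM emp WHERE id = :1")   # first run: hard parse
lc.parse("SELECT * FROM emp WHERE id = :1")   # second run (any user): soft parse
print(lc.hard_parses, lc.soft_parses)          # 1 1
```

Note that sharing is keyed on the statement text, which is why using bind variables (`:1`) rather than literal values lets different executions reuse the same cached plan.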

The size of the shared pool, in bytes, is determined by the SHARED_POOL_SIZE parameter in the init.ora file.


  5) Large pool:

The large pool is an optional memory area. If you use the multi-threaded server option or perform frequent backup/restore operations, you can manage those operations more efficiently by creating a large pool. The large pool is dedicated to large allocations; using it keeps those allocations from overwriting entries in the shared pool, thereby reducing the number of statements that must be reloaded into the library cache. The size of the large pool, in bytes, is set by the LARGE_POOL_SIZE parameter in the init.ora file, and the minimum allocation can be set with the LARGE_POOL_MIN_ALLOC parameter (no longer used as of Oracle8i). As an alternative to the large pool, you can reserve part of the shared pool for large SQL statements with the SHARED_POOL_RESERVED_SIZE parameter in the init.ora file.


  6) Java Pool:

As the name indicates, the Java pool provides memory for parsing Java commands. Its size, in bytes, is set by the JAVA_POOL_SIZE parameter introduced in the init.ora file in Oracle8i; the default value is 10 MB.


  7) Multi-buffer pool:

Multiple buffer pools can be created in the SGA to separate large data sets from other applications, reducing the likelihood that they will compete for the same resources within the block buffer cache. For each buffer pool you create, you specify its size and its number of LRU latches; the number of buffers must be at least 50 times the number of LRU latches.

When you create buffer pools, you specify the size of the keep area and the recycle area. Like the reserved area of the shared pool, the keep area retains entries, while the recycle area is reused frequently. The size of the keep area is set by the BUFFER_POOL_KEEP parameter. The capacity of the keep and recycle buffer pools is taken out of the block buffer cache, reducing the free space available there (the total is set by the DB_BLOCK_BUFFERS parameter). For a table that should use one of the new buffer pools, name the pool with the BUFFER_POOL parameter in the table's STORAGE clause; for example, if you need a table to drop out of memory quickly, assign it to the RECYCLE pool. The default pool is named DEFAULT, so you can move a table back to it later with the ALTER TABLE command.

  2. PGA (Program Global Area): memory that is not shared. It is dedicated to a particular server process and can be accessed only by that process.

The PGA contains the data and control information needed by a single server process or a single background process. It is allocated automatically when a user process connects to the database and creates a session, and it holds the memory that each user process connected to the Oracle database requires. The PGA is a non-shared area usable by only one process; when the user session ends, the PGA is released.

  Note: The difference between PGA and SGA:

1. Like the SGA, the PGA is an area that the Oracle database system allocates in server memory for a session, but the two differ in purpose and in degree of sharing.

2. The SGA is shared by all processes of the instance, while the PGA serves a single user process.

  3. UGA (User Global Area): this memory area stores the session state of a user process. Depending on whether the database is configured for dedicated server mode or shared server mode, the UGA is located in the PGA or in the SGA, respectively.

III. Process Structure

In an Oracle instance, processes fall into two categories: user processes and Oracle processes. Oracle processes are in turn divided into server processes and background processes. These three types of processes are described below.

  1. User process

The user process starts when the database user requests a connection to the Oracle server. When a user runs an application, Oracle establishes a user process for the user.


  2. Server process

Server processes handle requests from the user processes connected to the instance; the SQL statements that clients send to the database are ultimately received and executed by these processes. A server process can handle requests from a single user process or from multiple user processes, which is the distinction between dedicated servers and shared servers.

In the listener.ora file, the entry SERVER=DEDICATED means the connection is set up as a dedicated server.

A server process can perform the following tasks:

1) Parse and execute the SQL statements issued by the application.

2) Read the necessary data blocks from disk (the data files) into the shared database buffers in the SGA (when they are not already in the buffer cache).

3) Return the results to the application for processing.

  3. Background process

Background processes start with the database and perform various maintenance tasks, such as writing data to disk, maintaining the online redo log, and cleaning up abnormally aborted processes. An Oracle instance can use many background processes, but not all of them always exist.

Background processes include:

  1) PMON (process monitor)

This process performs process recovery when a user process fails: it cleans up memory and frees the resources the process was using. For example, it resets the state of the active transaction table, releases locks, and removes the failed process's ID from the active process table. PMON also periodically checks the status of the dispatcher and server processes and restarts any that have died (excluding processes that were deliberately terminated).

PMON is woken up regularly to check whether it is needed, and it can also be called whenever another process finds it necessary.


2) SMON (system monitor)

  When the instance starts, this process performs instance recovery and cleans up temporary segments that are no longer in use. In an environment with the Parallel Server option, SMON also recovers failed instances. SMON is woken up regularly to check whether it is needed, and it can also be called when other processes find it necessary.


3) DBWR (database writer)

This background process writes buffers to the data files and is responsible for managing the buffer cache. When a buffer in the cache is modified, it is flagged as dirty; DBWR's main task is to write dirty buffers to disk, leaving them clean. Because user processes keep filling buffers with data read from disk or dirtying them with changes, the number of unused buffers shrinks. When the number of unused buffers drops to a minimum and a user process cannot find one while reading a block from disk into memory, DBWR intervenes; DBWR manages the buffer cache so that user processes can always obtain unused buffers.

Oracle uses the least recently used (LRU) algorithm to keep the most recently used data blocks in memory, minimizing I/O. DBWR writes dirty buffers to disk in the following situations:

1. When a server process moves a buffer onto the dirty list and the list reaches a critical length, the server process notifies DBWR to write. The critical length is half the value of the DB_BLOCK_WRITE_BATCH parameter.

2. When a server process scans DB_BLOCK_MAX_SCAN_CNT buffers in the LRU list without finding an unused buffer, it stops searching and notifies DBWR to write.

3. A timeout occurs (every 3 seconds), and DBWR notifies itself.

4. When a checkpoint occurs, LGWR notifies DBWR.

In the first two cases, DBWR writes the blocks on the dirty list to disk; the number of blocks written in each pass is specified by the DB_BLOCK_WRITE_BATCH initialization parameter. If the dirty list holds fewer buffers than that parameter specifies, DBWR takes additional dirty buffers from the LRU list.

If DBWR is inactive for three seconds, a timeout occurs. In that case, DBWR scans a specified number of buffers on the LRU list and writes any dirty buffers it finds to disk. With each timeout, DBWR scans a new group of buffers; the number of buffers scanned each time is twice the value of the DB_BLOCK_WRITE_BATCH parameter. If the database is idle, DBWR eventually writes the entire buffer cache to disk.
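The first trigger, flushing once the dirty list reaches half the write batch, can be illustrated with a small sketch. The class, the parameter value, and the in-memory "disk" are all invented for the example; only the threshold logic mirrors the description above:

```python
class DirtyList:
    """Toy DBWR trigger: flush when the dirty list reaches half the write batch."""
    def __init__(self, write_batch=16):   # stands in for DB_BLOCK_WRITE_BATCH
        self.write_batch = write_batch
        self.dirty = []
        self.flushes = 0

    def mark_dirty(self, block_id):
        self.dirty.append(block_id)
        if len(self.dirty) >= self.write_batch // 2:   # the critical length
            self.flush()

    def flush(self):
        """Write up to one batch of dirty blocks to 'disk'."""
        written, self.dirty = self.dirty[:self.write_batch], self.dirty[self.write_batch:]
        self.flushes += 1
        return written

dl = DirtyList(write_batch=4)
for b in range(3):
    dl.mark_dirty(b)   # a flush triggers once the list reaches 2 entries
print(dl.flushes)       # 1
```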

When a checkpoint occurs, LGWR specifies a list of modified buffers that must be written to disk, and DBWR writes them.

On some platforms, an instance can have multiple DBWR processes. In such instances, some blocks can be written to one disk while others are written to other disks. The DB_WRITERS parameter controls the number of DBWR processes.


  4) LGWR (log writer)

This process writes the log buffer to the log files on disk; it is the background process responsible for managing the redo log buffer. LGWR writes out all log entries accumulated since its last write. LGWR writes:

A commit record, when a user process commits a transaction.
The log buffer, every three seconds.
The log buffer, when it becomes one-third full.
The log buffer, when DBWR writes modified buffers to disk.

The LGWR process writes synchronously to the active mirrored online redo log group. If a file in the group is deleted or unavailable, LGWR continues writing to the group's other members.

The log buffer is a circular buffer. When LGWR writes a log entry for a log buffer to a log file, the server process can write a new log entry to the log buffer. LGWR is usually written very quickly, ensuring that the log buffers always have room to write new log entries.
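The interplay above, servers appending while LGWR periodically drains everything written since its last pass, can be sketched as follows. The capacity, the one-third trigger, and the in-memory "log file" are illustrative stand-ins, not Oracle internals:

```python
from collections import deque

class RedoLogBuffer:
    """Toy circular redo buffer: LGWR drains entries so servers can keep writing."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = deque()
        self.log_file = []            # stands in for the online redo log on disk

    def append(self, redo_entry):
        if len(self.entries) >= self.capacity // 3:   # one-third full: LGWR writes
            self.lgwr_write()
        self.entries.append(redo_entry)

    def lgwr_write(self):
        """Write all entries accumulated since the last write to the log file."""
        while self.entries:
            self.log_file.append(self.entries.popleft())

buf = RedoLogBuffer(capacity=9)
for i in range(10):
    buf.append(f"redo{i}")
buf.lgwr_write()                             # e.g. triggered by a commit
print(len(buf.log_file), len(buf.entries))   # 10 0
```

Because LGWR keeps draining the buffer ahead of the writers, the buffer behaves as a ring: space freed at the head is immediately reusable for new entries at the tail.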

Note: Sometimes, when more log buffer space is needed, LGWR writes out log entries before a transaction commits; those entries become permanent only after a subsequent transaction commit.

Oracle uses a fast commit mechanism: when a user issues a COMMIT statement, a commit record is immediately placed in the log buffer, but the corresponding changes to data buffers are deferred until it is more efficient to write them to the data files. When a transaction commits, it is assigned a system change number (SCN), which is recorded in the log along with the transaction's log entries. Because the SCN is recorded in the log, recovery operations can be synchronized in Parallel Server option configurations.


  5) ARCH (archiver)

  This process copies filled online redo log files to the specified storage device. Archiving occurs when the database runs in ARCHIVELOG mode; when the ARCH process exists, logs can be archived automatically.


6) CKPT (checkpoint)

When a checkpoint occurs, this process updates the headers of all data files to record the checkpoint. Normally, that task is performed by LGWR. However, if checkpoints noticeably reduce system performance, the CKPT process can be enabled, separating out the checkpoint work that LGWR would otherwise perform. For many applications the CKPT process is unnecessary; enable it only when the database has many data files and LGWR's performance drops significantly at checkpoints. The CKPT process does not write blocks to disk; that work is done by DBWR. The CHECKPOINT_PROCESS initialization parameter controls whether CKPT is enabled; its default is FALSE (disabled).

Because LGWR and DBWR do their work asynchronously, Oracle introduced the concept of checkpoints to synchronize the database and guarantee its consistency. Within Oracle, checkpoints are of two kinds: full checkpoints and incremental checkpoints. The effects of the two are described separately below:

1. Full checkpoint

Before Oracle8i, every checkpoint that occurred in the database was a full checkpoint. A full checkpoint writes all the dirty data blocks in the buffer cache to the appropriate data files and synchronizes the data file headers and the control files, ensuring that the database is consistent. After 8i, a full checkpoint occurs only in the following two scenarios:

(1) The DBA manually executes the ALTER SYSTEM CHECKPOINT command;

(2) The database is shut down normally (SHUTDOWN IMMEDIATE, TRANSACTIONAL, or NORMAL).

Because a full checkpoint writes every dirty database block, the resulting flood of I/O often hurts database performance. Oracle therefore introduced the concept of incremental checkpoints in 8i.

2. Incremental checkpoint

Starting with 8i, Oracle maintains a checkpoint queue that records information about all of the database's current dirty blocks, and DBWR writes dirty blocks to the data files based on this queue. The checkpoint queue records dirty blocks in chronological order. Each entry contains the block's low RBA (redo block address: the position in the redo log where the block was first changed during this checkpoint period) together with the block's data file number and block number.

While a block is on the queue, its position does not change no matter how many further changes are made to it; the queue records only its earliest RBA, ensuring that the earliest-changed blocks are written out as soon as possible. As DBWR writes the dirty blocks in the checkpoint queue to the data files, the checkpoint position moves forward. Every three seconds, CKPT records the current checkpoint position in the control file to identify the log entry from which instance recovery would start; this is known as the checkpoint "heartbeat".

Oracle uses four parameters to control the distance between the checkpoint position and the tail of the redo log. Note that most people take these four parameters to control when incremental checkpoints occur; in fact, they control the number of entries in the checkpoint queue, not the occurrence of checkpoints.
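The checkpoint-queue mechanics described above can be sketched in Python. This is a toy model with invented names, but it captures the two key rules: a re-dirtied block keeps its original position and earliest RBA, and writing blocks from the head of the queue advances the checkpoint position:

```python
from collections import OrderedDict

class CheckpointQueue:
    """Toy incremental-checkpoint queue: dirty blocks ordered by first-dirty RBA."""
    def __init__(self):
        self.queue = OrderedDict()   # block_id -> low RBA (first change's redo address)

    def mark_dirty(self, block_id, rba):
        # A block already on the queue keeps its position and its earliest RBA.
        if block_id not in self.queue:
            self.queue[block_id] = rba

    def dbwr_write(self, n):
        """DBWR writes n blocks from the head; the checkpoint position advances."""
        for _ in range(min(n, len(self.queue))):
            self.queue.popitem(last=False)

    def checkpoint_rba(self):
        """Recovery must start at the low RBA of the oldest unwritten dirty block."""
        return next(iter(self.queue.values())) if self.queue else None

cq = CheckpointQueue()
cq.mark_dirty("blk7", rba=100)
cq.mark_dirty("blk3", rba=120)
cq.mark_dirty("blk7", rba=150)   # re-dirtied: position and RBA stay at 100
print(cq.checkpoint_rba())        # 100
cq.dbwr_write(1)                  # blk7 written; the checkpoint advances
print(cq.checkpoint_rba())        # 120
```

The `checkpoint_rba` value is exactly what CKPT's heartbeat records in the control file: everything in the redo log before that address is already reflected in the data files, so recovery can skip it.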

(1) FAST_START_IO_TARGET

This parameter indicates the total number of I/Os that instance recovery of the database should need; the estimate is based on the AVGIOTIM statistics in V$FILESTAT. For example, suppose we want instance recovery after a crash to finish within 10 minutes, and the OS can do 500 I/Os per second. Instance recovery could then perform roughly 500 * 60 * 10 = 300,000 I/Os, so we would set FAST_START_IO_TARGET to 300000.
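The sizing arithmetic is simple enough to check directly (the I/O rate and target time are the assumed figures from the example above, not measured values):

```python
# Hypothetical sizing for FAST_START_IO_TARGET:
# recovery should finish in 10 minutes at roughly 500 I/Os per second.
ios_per_second = 500
target_seconds = 10 * 60

recovery_io_budget = ios_per_second * target_seconds
print(recovery_io_budget)   # 300000
```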

(2) FAST_START_MTTR_TARGET

As the example above shows, estimating the checkpoint position through FAST_START_IO_TARGET is cumbersome. To simplify this, Oracle introduced the FAST_START_MTTR_TARGET parameter in 9i, which directly expresses the time, in seconds, that instance recovery should take (MTTR is short for mean time to recovery). This is much easier to reason about: in the previous example, we would simply set FAST_START_MTTR_TARGET to 600. When FAST_START_MTTR_TARGET is set, FAST_START_IO_TARGET no longer takes effect, and from 9i onward Oracle deprecated FAST_START_IO_TARGET.

(3) LOG_CHECKPOINT_TIMEOUT

This parameter gives the time interval, in seconds, between the checkpoint position and the tail of the redo log; the default is 1800 seconds.

(4) LOG_CHECKPOINT_INTERVAL

This parameter gives the number of redo log blocks (measured in OS blocks) between the checkpoint position and the tail of the redo log.

(5) 90% of the smallest redo log

Besides the four initialization parameters above, Oracle internally also keeps the checkpoint position within 90% of the smallest redo log file. For any given redo log, the positions implied by these settings may differ; Oracle takes the one closest to the tail of the log file as the checkpoint position.


7) RECO (recoverer)

This process is used with the distributed option to automatically resolve failures in distributed transactions. A node's RECO background process automatically connects to the other databases involved in in-doubt distributed transactions and automatically resolves all of them; any rows corresponding to resolved in-doubt transactions are removed from each database's pending transaction table.

When a database server's RECO background process tries to establish communication with a remote server, and the remote server is unavailable or a network connection cannot be established, RECO automatically retries the connection after an interval.

The RECO background process exists only on systems that allow distributed transactions, that is, where the DISTRIBUTED_TRANSACTIONS parameter is greater than 0.

8) LCKn: used in Parallel Server option environments. There can be up to ten of these processes (LCK0, LCK1, ..., LCK9), which handle locking between instances.


9) Dnnn (dispatcher process)

This process allows user processes to share a limited number of server processes. Without dispatcher processes, each user process requires a dedicated server process. With the multi-threaded server, a few shared server processes can support many user processes; if your system has a large number of users, the multi-threaded server can support them with fewer resources, especially in client/server environments.

Multiple dispatcher processes can be created in one database instance, with at least one per network protocol. The database administrator decides the optimal number of dispatchers to start based on the operating system's per-process connection limits, and dispatchers can be added or removed while the instance is running. The multi-threaded server requires SQL*Net version 2 or later.

In a multi-threaded server configuration, a network listener process waits for connection requests from client applications and routes each one to a dispatcher. If a client application cannot be connected to a dispatcher, the listener starts a dedicated server process instead. The listener is not part of the Oracle instance; it belongs to the networking processes that work with Oracle. When the instance starts, the listener is opened, establishing a communication path through which users can connect to Oracle, and each dispatcher gives the listener the address at which it accepts connection requests. When a user process asks for a connection, the listener examines the request and determines whether the user can use a dispatcher. If so, the listener returns that dispatcher's address, and the user process then connects directly to the dispatcher. Some user processes cannot communicate with a dispatcher (for example, users running an earlier version of SQL*Net), so the listener cannot connect them to one; in that case, the listener creates a dedicated server process to establish a suitable connection.

IV. Storage Structure

The storage structure of Oracle database is divided into logical storage structure and physical storage structure.

1. Physical storage Structure
The physical storage structure mainly describes the external storage structure of the Oracle database, that is, how the data is organized and managed in the operating system.
Physically, the database consists of operating system files such as control files, data files, redo log files, and parameter files.
Therefore, the physical storage structure is related to the operating system platform.

  1) Data files (Data file)

A data file is a file that physically stores the Oracle database's data. Each data file is associated with exactly one database. Once a data file is created, its size is fixed; it does not grow automatically. A tablespace can contain one or more data files, but a data file can belong to only one tablespace.

  2) Redo log files (Redo log file)

Redo log files record all changes made to database data, for use when recovering data. Their features are as follows: each database contains at least two log file groups; the log file groups are written in a circular fashion; and each log file member corresponds to one physical file.

Log switches make circular use of the log file groups possible. A log switch occurs when a log file group fills up, when the database is shut down, or when the DBA forces a log switch manually.

Mirrored log files are copies of the online log files, maintained on two or more different disks to prevent the loss of log files.
Their features are as follows: each log file group contains at least two members; every group has the same number of members; all members of a group are written at the same time; and members of the same group are the same size, while members of different groups can differ in size.

3) Control files (Control file)

A control file is a small binary file that describes the structure of the database. It maps the database's physical files to the logical tablespaces and online redo log files recorded in the data dictionary.

4) Parameter files (Parameter file)

A parameter file is a text file that can be edited directly with a text editor in the operating system. The file is read only when the database is created or the instance is started, and the instance must be shut down and restarted for modifications to take effect.

2. Logical Structure

The logical storage structure describes the internal storage structure of the Oracle database, that is, how data is organized and managed conceptually within the database.

The tablespace is the largest logical unit, and the data block is the smallest. The logical storage structure is independent of the operating system platform; it is created and managed by the Oracle database itself.

  1) Tablespace

A tablespace is the largest logical unit. It corresponds to one or more data files, and its size is the sum of the sizes of the data files it corresponds to.

  The tablespaces Oracle 10g creates automatically are:

  EXAMPLE (sample tablespace): holds the sample schemas.

  SYSAUX (auxiliary system tablespace): assists the SYSTEM tablespace, reducing system load and improving operating efficiency.

  SYSTEM (system tablespace): the most important tablespace, holding management information such as tablespace names, control files, and data files. It belongs to the SYS and SYSTEM schemas and may be used only by those two users (or other users with sufficient privileges). The SYSTEM tablespace cannot be dropped or renamed.

  TEMP (temporary tablespace): holds temporary tables and temporary data used for sorting.

  UNDOTBS (undo tablespace): when table data is inserted, updated, or deleted, the Oracle system automatically uses the undo tablespace to temporarily hold the pre-modification data.

  USERS (user tablespace): permanently holds user objects and private information; also called a data tablespace.

  In general, system users use the SYSTEM tablespace, and non-system users use the USERS tablespace.

  2) Segment

A segment is a logical storage structure of a specified type within a tablespace. It consists of one or more extents, and it consumes storage space and grows extent by extent.

  Bootstrap segment: stores the definitions of data dictionary tables.

  Temporary segment: stores temporary data generated during operations such as table sorts.

  Rollback segment: stores the location and value of data before it was modified.

  Index segment: stores all of a table's index data, for fast queries.

  Data segment: stores all of the data in a table.

  3) Extent

An extent is the logical unit in which database storage space is allocated; an extent consists of a contiguous set of data blocks. The first extent allocated to a segment is called the initial extent, and extents allocated afterward are called incremental extents.

  4) Data block

A data block is the smallest I/O unit the database uses; it is also called a logical block or an Oracle block. A data block corresponds to one or more physical blocks, and its size is determined by the DB_BLOCK_SIZE parameter.
The block size is an integer multiple of the operating system block size.
Taking Windows 2000 as an example, the OS block size is 4 KB, so the Oracle block size can be 4 KB, 8 KB, 16 KB, and so on.
If the block size is 4 KB, then even when a query returns only one row, reading that row into the data cache transfers a full 4 KB block, not just the row's bytes.
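The read-granularity point can be checked with a little arithmetic. The row size here is an assumed figure for illustration; only the block size comes from the example above:

```python
# Every read transfers whole blocks, so even a one-row result costs a full block.
block_size = 4 * 1024    # a DB_BLOCK_SIZE of 4 KB, as in the example
row_size = 100           # an assumed row size in bytes, for illustration
rows_returned = 1

bytes_needed = rows_returned * row_size
blocks_read = max(1, -(-bytes_needed // block_size))   # ceiling division
bytes_transferred = blocks_read * block_size

print(bytes_needed, bytes_transferred)   # 100 4096
```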
A data block is made up of five parts:
Header: general block information, such as the block address and segment type; the optimal size is 85-100 bytes.
Table directory: information about the tables that have rows in this block (used by clustered segments).
Row directory: information about the valid rows in this block; each row uses 2 bytes in the directory.
Free space: the space available in this block for inserts or updates.
Row data: the actual data of the table or index.

