Article 1: Overview of Oracle Architecture

Source: Internet
Author: User
Tags: dedicated server


[Figure: Oracle 9i architecture diagram]

This figure shows the architecture of Oracle 9i. It looks complicated at first glance, but it breaks down into a few clear pieces. Let's walk through them:
I. Databases, tablespaces, and data files
1. Database

A database is a collection of data. Oracle is a database management system, and specifically a relational database management system.
In common usage, "the database" refers not only to the physical data sets but also to the database management system that manages them, that is, the combination of physical data, memory structures, and operating system processes.

The database's data is stored in tables. Relationships between data are defined by columns (the fields we usually talk about), and each column has a column name. Data is stored in a table as rows (usually called records), and tables can be associated with one another. This is a simple description of the relational database model.

Of course, Oracle also provides strong support for object-oriented database structures: objects can be related to, or contain, other objects. Object-oriented databases will be discussed in detail later; in general, our discussion is based on the relational model.

2. Tablespaces and data files
Whether the structure is relational or object-oriented, an Oracle database stores its data in files. The database structure provides a logical mapping of the data onto files, allowing different types of data to be stored separately. These logical divisions are called tablespaces.

A tablespace is a logical division of a database. Each database has at least one tablespace (called the SYSTEM tablespace). To simplify management and improve performance, additional tablespaces can be used to separate users and applications: for example, a USERS tablespace for general users and an RBS tablespace for rollback segments. A tablespace can belong to only one database.

Each tablespace consists of one or more files on disk, called data files. A data file can belong to only one tablespace. Since Oracle 7.2, the size of a data file can be changed after creation. Creating a new tablespace requires creating a new data file.
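As a concrete illustration (the tablespace name and file paths here are hypothetical), a tablespace and its data file are created together, and the data file can later be resized in place:

```sql
-- Create a tablespace backed by a single data file (names are examples)
CREATE TABLESPACE app_data
    DATAFILE '/u01/oradata/orcl/app_data01.dbf' SIZE 100M;

-- Since Oracle 7.2, an existing data file can be resized after creation
ALTER DATABASE DATAFILE '/u01/oradata/orcl/app_data01.dbf' RESIZE 200M;
```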

Once a data file is added to a tablespace, it cannot be moved from the tablespace or be associated with other tablespaces.

If a database is stored in multiple tablespaces, their data files can be physically separated onto different disks. This kind of data separation is an important technique for planning and balancing database I/O requests. The relationships between databases, tablespaces, and data files are shown below:

[Figure: relationship between a database, its tablespaces, and its data files]

II. Database instances
To access the data in the database, Oracle uses a set of background processes that are shared by all users, together with a set of memory structures (collectively called the System Global Area, or SGA) that hold data recently read from the database. The data block buffer cache and the SQL shared pool are the largest parts of the SGA, generally accounting for more than 95% of SGA memory. By reducing the number of I/O operations against the data files, these memory areas improve database performance.

A database instance, also loosely called a server, is the set of memory structures and background processes used to access a set of database files. A database can be accessed by multiple instances (this is the Oracle Parallel Server option). The relationship between an instance and a database is shown below:

[Figure: relationship between an instance and a database]
The parameters that determine the size and composition of an instance are stored in its init.ora file (or, in 9i, an spfile). This file is read when the instance starts and can be modified by the database administrator, but any modification takes effect only at the next startup. An instance's init.ora file name usually contains the instance name: if an instance is named orcl, its init.ora file is usually named initorcl.ora. A second configuration file, config.ora, stores values that do not change after the database is created (such as the database block size); its name also usually contains the instance name, so for instance orcl it is typically named configorcl.ora. For the config.ora settings to take effect, the instance's init.ora file must list config.ora as an included file via the IFILE parameter.
-----------------------------------------
Note: The initialization parameter file will be further described in detail.
-----------------------------------------
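A minimal sketch of this two-file layout (all paths, names, and values here are hypothetical examples):

```ini
# initorcl.ora -- instance-level parameters, read at startup
ifile = /u01/app/oracle/admin/orcl/pfile/configorcl.ora

db_block_buffers = 8192
shared_pool_size = 52428800
log_buffer       = 1048576
```

```ini
# configorcl.ora -- values fixed when the database is created
db_name       = orcl
db_block_size = 8192
```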

Based on the above introduction to databases and instances, the Oracle database structures can be divided into three categories:
Internal database structures (such as tables)
Memory structures (including the SGA and the background processes)
External structures of the database

III. Internal structure of the database
The logical presentation layer of Oracle data, also known as the Oracle schema, includes the following:
Tables, columns, constraints, and data types (including abstract data types)
Partitions and subpartitions
Users and schemas
Indexes, clusters, and hash clusters
Views
Sequences
Procedures, functions, packages, and triggers
Synonyms
Privileges and roles
Database links
Segments, extents, and blocks
Rollback segments
Snapshots and snapshot logs
Each of these parts will be discussed in detail in the Oracle schema section.

IV. Oracle internal memory structures
These include the memory buffer pools and the background processes:
1. System Global Area (SGA), which mainly includes:
A. Data block buffer cache

The data block buffer cache is a cache area in the SGA used to hold data blocks read from the database segments (such as tables, indexes, and clusters). Its size is determined by the DB_BLOCK_BUFFERS parameter in the database server's init.ora file (expressed as a number of database blocks). Tuning the size of the data block buffer cache is an important part of tuning and managing a database.

Because the size of the data block buffer cache is fixed, and it is usually smaller than the space used by the database segments, it cannot hold all of the database's segments in memory at once. Typically the cache is only 1% to 2% of the database's size, so Oracle manages the available space with a least recently used (LRU) algorithm. When the cache needs free space, the least recently used blocks are aged out and replaced by new data blocks. In this way the most frequently used data stays in memory.

However, if the SGA is not large enough to hold all of the most commonly used data, different objects will compete for space in the data block buffer cache. This is likely to happen when multiple applications share the same SGA: the most recently used segments of each application compete for space with those of the other applications. The result is a low hit rate on the data block buffer cache and reduced system performance.
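The hit rate mentioned above can be estimated from the V$SYSSTAT statistics. The following is the classic formula as commonly used with the versions described here (the statistic names are as they appear in the V$SYSSTAT view):

```sql
-- Buffer cache hit ratio = 1 - physical reads / logical reads
SELECT 1 - phy.value / (cur.value + con.value) AS cache_hit_ratio
  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
 WHERE cur.name = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
```

A persistently low ratio suggests the competition for buffer cache space described above.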

B. Dictionary cache
Information about database objects is stored in the data dictionary tables: user account data, data file names, segment names, extent locations, table descriptions, privileges, and so on. When the database needs this information (for example, to check whether a user is authorized to query a table), it reads the data dictionary tables and stores the returned data in the dictionary cache in the SGA.

The dictionary cache is also managed by a least recently used (LRU) algorithm, but its size is managed internally by the database. The dictionary cache is part of the SQL shared pool, whose size is set by the SHARED_POOL_SIZE parameter in the database's init.ora file.

If the dictionary cache is too small, the database has to query the data dictionary tables repeatedly for the information it needs. These queries are called recursive calls, and they are slower than queries satisfied directly from the dictionary cache.

C. Redo log buffer
Redo entries describe the modifications made to the database. They are written to the online redo log files so that they can be used to roll the database forward during recovery. Before being written to the online redo log files, however, transactions are first recorded in an SGA area called the redo log buffer. The database periodically writes batches of redo entries from this buffer to the online redo log files, which optimizes the operation.

The size (in bytes) of the redo log buffer is set by the LOG_BUFFER parameter in the init.ora file.

D. SQL shared pool
The SQL shared pool stores the data dictionary cache and the library cache, that is, information about the statements run against the database. While the data block buffer cache and dictionary cache share structural and data information among database users, the library cache shares frequently used SQL statements.

The SQL shared pool holds the execution plans and parse trees of SQL statements run against the database. The second time an identical SQL statement is run (by any user), the parse information already in the SQL shared pool can be used to speed up its execution.

The SQL shared pool is managed with an LRU algorithm. When the pool fills up, the least recently used execution paths and parse trees are removed from the library cache to free space for new entries. If the SQL shared pool is too small, statements are continually reloaded into the library cache, which affects performance.

The size (in bytes) of the SQL shared pool is set by the SHARED_POOL_SIZE parameter in the init.ora file.

E. Large pool
The large pool is an optional memory area. If you use the multi-threaded server option or perform frequent backup/restore operations, creating a large pool lets these operations be managed more effectively. The large pool is dedicated to supporting large SQL commands; using it prevents those commands from crowding entries out of the SQL shared pool, and so reduces the number of statements that must later be reloaded into the library cache. The size (in bytes) of the large pool is set by the LARGE_POOL_SIZE parameter in the init.ora file, and its minimum allocation size by the LARGE_POOL_MIN_ALLOC parameter; Oracle 8i no longer needs the latter parameter.

The SHARED_POOL_RESERVED_SIZE parameter in the init.ora file reserves part of the SQL shared pool for large SQL statements.
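Pulling the pool parameters above together, a hypothetical init.ora fragment might size them as follows (all values are purely illustrative):

```ini
# Hypothetical SGA pool sizing in init.ora (illustrative values only)
shared_pool_size          = 52428800   # 50 MB SQL shared pool
shared_pool_reserved_size = 5242880    # reserve part of it for large statements
large_pool_size           = 16777216   # optional large pool, 16 MB
log_buffer                = 1048576    # redo log buffer, 1 MB
```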

F. Java pool
As its name suggests, the Java pool provides memory for parsing Java commands. Its size (in bytes) is set by the JAVA_POOL_SIZE parameter in the init.ora file, introduced in Oracle 8i. The default value of JAVA_POOL_SIZE is 10 MB.

G. Multiple buffer pools
You can create multiple buffer pools in the SGA. Multiple buffer pools can be used to separate large data sets from other applications, reducing the chance that they compete for the same resources in the data block buffer cache. For each buffer pool created, the size and the number of LRU latches must be specified; the number of buffers must be at least five times the number of LRU latches.

When creating the buffer pools, you specify the sizes of the KEEP area and the RECYCLE area. Like the reserved area of the SQL shared pool, the KEEP pool retains its entries, while the RECYCLE pool is flushed frequently. The size of the KEEP pool is defined by the BUFFER_POOL_KEEP parameter.
The capacity of the KEEP and RECYCLE buffer pools reduces the space available in the data block buffer cache (set by the DB_BLOCK_BUFFERS parameter). For a table that should use one of the new buffer pools, specify the pool name with the BUFFER_POOL parameter in the STORAGE clause of the table. For example, to have a table aged out of memory quickly, assign it to the RECYCLE pool. The default pool is called DEFAULT, so a table can later be moved back to it with the ALTER TABLE command.
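A hypothetical sketch of both ends of this mechanism (the parameter values and table name are examples only):

```sql
-- init.ora side: carve a KEEP pool out of the buffer cache, e.g.
--   buffer_pool_keep = (buffers:1000, lru_latches:2)

-- Assign a table to a named buffer pool via its STORAGE clause
ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);

-- Move it back to the default pool later
ALTER TABLE lookup_codes STORAGE (BUFFER_POOL DEFAULT);
```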

2. Program global area (PGA)
The program global area (PGA, program global area) is a memory area used by a single Oracle user process.
Unlike the SGA, the memory in the PGA is not shared.

3. Context areas

4. Background processes
The database has multiple background processes; the number depends on the database configuration. These processes are managed by the database itself and require very little administration.
Each background process creates a trace file, which is kept for the life of the instance. The naming conventions and locations of background process trace files vary with the operating system and database version; generally, a trace file name contains the name of the background process or its operating system process ID. The BACKGROUND_DUMP_DEST parameter in the initialization parameter file
specifies the location of the background process trace files, although some versions of Oracle ignore this setting. Trace files are very important when diagnosing database faults. Serious problems affecting the background processes are usually also recorded in the database alert log.
The alert log is normally located in the BACKGROUND_DUMP_DEST directory, which in general is
the /admin/instance_name/bdump directory under the ORACLE_BASE directory.

A. SMON
When the database is started, the SMON (system monitor) process performs any required instance recovery
(using the online redo log files). It also cleans up the database by removing transactional objects that the system no longer needs.
SMON has another purpose: coalescing adjacent free extents into larger free extents. For some tablespaces the database administrator must coalesce free space manually; SMON automatically coalesces free space only in tablespaces
whose default PCTINCREASE storage value is nonzero.

B. PMON
The PMON (process monitor) background process cleans up after failed user processes, releasing the resources those users were using. Its effect is most obvious when a process holding a lock is killed: PMON is responsible for releasing the lock and making it available to other users. Like SMON, PMON wakes up periodically to check whether it is needed.

C. DBWR
The DBWR (database writer) background process is responsible for managing the contents of the data block buffer cache and the dictionary cache. It writes modified blocks from the SGA to the data files in batches.

Although each database instance has only one SMON and one PMON process, depending on the platform and operating system you can run multiple DBWR processes at once. Using multiple DBWR processes helps reduce contention on DBWR during large operations. The number of DBWR processes is set by the DB_WRITER_PROCESSES parameter in the database's init.ora file. If the system supports asynchronous I/O, you can instead create DBWR I/O slave processes; their number is set by the DBWR_IO_SLAVES parameter in the init.ora file.
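As a hypothetical init.ora fragment (the values are illustrative, and the two parameters are alternatives rather than a pair to set together):

```ini
# Either run multiple database writer processes ...
db_writer_processes = 4

# ... or, on platforms with asynchronous I/O support,
# use I/O slaves for the single DBWR process instead:
# dbwr_io_slaves = 4
```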

If multiple DBWR processes are created, they are not all called DBWR; each name carries a numeric suffix. For example, if you create five DBWR processes, their operating system names may be DBW0, DBW1, DBW2, DBW3, and DBW4.

D. LGWR
The LGWR (log writer) background process is responsible for writing the contents of the redo log buffer to the online redo log files. LGWR writes log entries to the online redo log files in batches. The redo log buffer entries always reflect the latest state of the database, because the DBWR process may wait before writing the modified data blocks from the buffer cache to the data files.

LGWR is the only process that writes to the online redo log files, and during normal database operation the only one that reads the redo log buffer directly. In contrast to the random access DBWR performs on the data files, the online redo log files are written sequentially. If the online redo log files are mirrored, LGWR writes to the mirrored log files simultaneously.

In Oracle 8, multiple LGWR I/O slave processes can be created to improve the write performance of the online redo log files; their number is set by the LGWR_IO_SLAVES parameter in the init.ora file.

In Oracle 8i this parameter is no longer available, and the number of LGWR I/O slaves is derived from the setting of DBWR_IO_SLAVES.

E. CKPT
The CKPT (checkpoint) process is used to reduce the time required for instance recovery. A checkpoint causes DBWR to write to the data files all data blocks modified since the previous checkpoint, and updates the data file headers and control files to record the checkpoint.

A checkpoint occurs automatically whenever an online redo log file fills up. You can set a more frequent checkpoint interval with the LOG_CHECKPOINT_INTERVAL parameter in the database instance's init.ora file.

The CKPT background process splits off one of the two functions that LGWR performed in earlier database versions (signaling checkpoints, and copying log contents) into a separate background process. The CKPT background process is created when the CHECKPOINT_PROCESS parameter in the database instance's init.ora file is set to TRUE.
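A hypothetical fragment showing these checkpoint settings together (values are examples; the interval unit is blocks of redo, which varies by platform):

```ini
# Checkpoint-related settings in init.ora (illustrative)
checkpoint_process      = true     # create a dedicated CKPT process
log_checkpoint_interval = 10000    # checkpoint every 10,000 redo blocks
```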

F. ARCH
The LGWR background process writes to the online redo log files cyclically: when the first log file is filled, it writes to the second; when the second is filled, it writes to the third. Once the last redo log file is filled, LGWR begins overwriting the contents of the first redo log file again.

When Oracle runs in ARCHIVELOG mode, the database backs up each redo log file before it is overwritten. These archived redo log files are usually written to a disk device. They can also be written directly to a tape device, but this tends to increase the operator's workload.
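As a sketch of how this mode is typically enabled (run as an administrator; the database must be mounted but not open):

```sql
-- Enable archiving of the redo logs
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Verify the current archiving mode
ARCHIVE LOG LIST;
```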

This archiving is performed by the ARCH (archiver) background process. Databases with archiving enabled can encounter redo log disk contention when processing large transaction volumes, because while LGWR is preparing to write one redo log file, ARCH may be reading another. If the target disk for the archived logs fills up, the database locks up: ARCH freezes, LGWR is prevented from writing, and further transaction processing in the database stops. This situation persists until space for the archived redo log files is freed.

In Oracle 8, multiple ARCH I/O slave processes can be created to improve write throughput for the archived redo log files. In Oracle 8.0 the number of ARCH I/O slaves is set by the ARCH_IO_SLAVES parameter in the database's init.ora file. In Oracle 8i this parameter is no longer available, and the number of ARCH I/O slaves is derived from the setting of DBWR_IO_SLAVES.

G. RECO
The RECO background process is used to resolve failures in distributed databases. The RECO process attempts to access the databases involved in in-doubt distributed transactions and to resolve those transactions. This process is created only when the platform supports the distributed option and the DISTRIBUTED_TRANSACTIONS parameter in the init.ora file is greater than zero.

H. SNPn
Oracle snapshot refreshes and internal job queue scheduling depend on the snapshot background processes that execute them. The names of these background processes start with the letters SNP and end with a digit or letter. The number of SNP processes created for an instance is set by the JOB_QUEUE_PROCESSES parameter in the database's init.ora file (in Oracle 7 this parameter was named SNAPSHOT_REFRESH_PROCESSES).

I. LCKn
When the Oracle Parallel Server option is used, multiple LCK background processes (named LCK0 through LCK9) resolve inter-instance locking. The number of LCK processes is set by the GC_LCK_PROCS parameter.

J. Dnnn
The Dnnn (dispatcher) processes are part of the multi-threaded server (MTS) architecture. They help reduce the resources needed to handle multiple connections. At least one dispatcher process must be created for each protocol the database server supports. Dispatcher processes are created at database startup based on the SQL*Net (or Net8) configuration, and can be created or removed after the database is open.

K. Snnn
The Snnn (server) processes are created to manage database connections that require a dedicated server. Server processes can perform I/O against the data files.

L. Pnnn
If the parallel query option is enabled in the database, the work of a single query can be distributed across multiple processes. When the instance starts, the number of parallel query server processes specified by the PARALLEL_MIN_SERVERS parameter in the init.ora file is started, and each appears as a process at the operating system level. The more operations that require parallel execution, the more parallel query server processes are started. Each parallel query server process has an operating system name such as P000, P001, or P002. The maximum number of parallel query server processes is set by the PARALLEL_MAX_SERVERS parameter in the init.ora file.

V. External Structure of Oracle
1. Redo logs

Oracle keeps a log of all transactions against the database. These transactions are recorded in files called online redo log files. If the database is damaged, these log files allow database transactions to be replayed in the correct order. The redo log information is stored outside the database's data files.

Redo log files also allow Oracle to optimize writing data to disk. When a transaction occurs in the database, it is entered into the redo log buffer, while the data blocks it affects are not written to disk immediately.

Every Oracle database has two or more online redo log files. Oracle writes to them cyclically: when the first log file is filled, it writes to the second, and so on. When all online redo log files have been filled, it returns to the first log file and overwrites it with new transaction data. If the database is running in ARCHIVELOG mode, the database backs up each log file before overwriting it. These archived redo log files can then be used to restore any part of the database to any point in time.

Redo log files can be mirrored (multiplexed) by the database itself, so online redo log files can be mirrored without relying on the capabilities of the operating system or hardware environment.
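A hypothetical example of such database-level mirroring, with the two members of one log group placed on different disks (the group number, paths, and size are examples):

```sql
-- Add a multiplexed redo log group with members on two disks
ALTER DATABASE ADD LOGFILE GROUP 4
    ('/u01/oradata/orcl/redo04a.log',
     '/u02/oradata/orcl/redo04b.log') SIZE 10M;
```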

2. Control files
A database's overall physical structure is maintained by its control files. The control files record control information about all the files in the database. They are used to maintain internal consistency and to guide recovery operations.

Because the control files are critical to the database, multiple copies are kept online. They are generally stored on different disks to minimize the potential damage from a disk failure. The control files are created along with the database.

The database's control file names are specified by the CONTROL_FILES parameter in the init.ora file. Although this is an init.ora parameter, CONTROL_FILES is usually specified in the config.ora file, since it rarely changes. To add a new control file to the database, shut down the instance, copy an existing control file to the new location, add the new location to the CONTROL_FILES parameter setting, and restart the instance.
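A hypothetical CONTROL_FILES setting with copies on three disks (paths are examples):

```ini
# Usually kept in config.ora, since it rarely changes
control_files = (/u01/oradata/orcl/control01.ctl,
                 /u02/oradata/orcl/control02.ctl,
                 /u03/oradata/orcl/control03.ctl)
```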

3. Trace files and the alert log
Each background process running in the instance has a trace file associated with it. A trace file records information about significant events encountered by its background process. In addition to the trace files, Oracle maintains a file called the alert log, which records the commands and results of major events in the life of the database. For example, tablespace creation, redo log switches, recovery operations, and database creation are all recorded in the alert log. The alert log is an important resource for day-to-day database administration; the trace files are useful when you need to find the root cause of a failure.

The alert log should be monitored frequently. Its entries will alert you to any problems encountered during database operation, including any ORA-00600 internal errors. To make the alert log easier to work with, it is best to rename it automatically each day. For example, if the alert log is called alert_orcl.log, you can rename it so that its file name includes the current date. The next time Oracle writes to the alert log, it will not find a file named alert_orcl.log, so it will create a new one. In this way, alongside the previous days' alert logs there is always a current alert log (alert_orcl.log). Separating alert log entries by day makes analyzing them more effective.
