Oracle's physical architecture

Source: Internet
Author: User
Tags: sessions, sqlplus

Architecture Diagram Learning:

    1. The story of Lao Yu's clothing store
    2. Structure diagram:
      1. SQL query statements
        1. SGA
          1. Shared pool
          2. Data buffer (buffer cache)
        2. PGA
        3. Processes
      2. SQL UPDATE statements
        1. SGA:
          1. Log buffer
        2. Log files
    3. Going deeper
      1. Commit
      2. Rollback
      3. Consistent read

Practical experience:

    1. Memory
    2. Processes
    3. Startup and shutdown
    4. Files
    5. The listener

Learning Significance:

Architecture diagram:

Description

    1. Oracle consists of an instance and a database.
    2. An instance consists of a dedicated SGA memory area plus a set of background processes. The SGA is divided into three parts: the shared pool, the data buffer (DB cache), and the log buffer. The background processes that deal with the hard disk are DBWn, LGWR, ARCH, and CKPT; the other processes include PMON, SMON, LCKn, and RECO.
    3. The files that make up the database:
      1. The data itself:
        1. Data files
        2. Log files:
          1. Online log files: must be multiplexed (groups with multiple members each) and are written in a circular fashion.
          2. Archived log files: filled online logs are copied to disk or tape for backup and recovery.
      2. Files needed to start the instance:
        1. Parameter file
        2. Control files
    4. The PGA is also a block of memory; unlike the SGA, the PGA is private, not shared. When a user connects, the first stop is the PGA:
      1. It saves the user's connection information, such as session attributes and bind variables.
      2. It saves the user's privileges: when the session is established, the system queries the data dictionary for the user's privilege information and caches it in the PGA.
      3. If retrieved data needs to be sorted, the sort is done inside the PGA; if the PGA is too small, a temporary tablespace is used instead (avoid this where possible, because it massively increases disk I/O and degrades performance).
    5. Think of three zones: zone 1 is the PGA and zone 2 is the SGA (both in memory); zone 3 is the hard disk. A typical database operation touches either zones 1-2 only, or all three.
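The PGA points above can be checked from SQL*Plus. A small sketch using standard dynamic views (the statistic names are the standard v$sysstat ones); a climbing 'sorts (disk)' count is the sign that sorts are spilling into the temporary tablespace:

```sql
-- PGA sizing and whether sorts are spilling to disk
show parameter pga_aggregate_target

select name, value
  from v$sysstat
 where name in ('sorts (memory)', 'sorts (disk)');
```

If 'sorts (disk)' grows steadily relative to 'sorts (memory)', the PGA is too small for the workload.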

The story of Lao Yu:

    1. Sizing up a regular customer at a glance → the soft parse.
    2. Keeping the popular stock within easy reach → an effectively tuned buffer cache.
    3. The habit of keeping the books → done for performance: flush data in batches rather than writing to disk on every single sale.

An adventure of a query statement:

    1. sqlplus establishes a session and executes the query select object_name from t where object_id=29;. The first stop is the PGA, where three things happen: session information is saved, user privilege information is saved, and the SQL statement is hashed into a hash value.
    2. With its hash value, the SQL enters the second zone, the shared pool, and first looks for the same hash value there. If it is not found, things get harder: Oracle checks that the syntax is correct, then the semantics (are the table and columns spelled right? does the user have the privileges?), and once these are confirmed, the hash value is stored. Then comes hard parsing: after optimizer analysis, Oracle chooses the lowest-cost execution plan.
    3. Armed with an execution plan, the SQL moves to the third zone, the data buffer, to get the data it needs. If the data is found there, it is returned to the PGA. If not, Oracle must take the expensive trip to disk (heavy I/O) to search for it, following the execution plan. Data read from disk is placed in the data buffer and returned to the PGA.
       sqlplus xxx/xxx
       drop table t;
       create table t as select * from all_objects;
       create index idx_objid on t(object_id);
       set autotrace on
       set linesize 100
       set timing on
       select object_name from t where object_id=29;
      • Execute the same statement twice in a row: the second run is much faster than the first, because the second time Oracle has less to do:
        1. The session was not closed, so there is no need to go back to the data dictionary for user and privilege information.
        2. Because the same statement is executed a second time, the shared pool already holds the matching hash value; after the match, no syntax or semantic validation is needed, and no hard parse is needed to obtain an optimized execution plan — the already-parsed plan is simply reused.
        3. The data is already in the SGA's data buffer, so far fewer physical reads from disk are needed.
        4. The /*+ full(t) */ hint can be used to force the SQL to do a full table scan.
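As a follow-up to step 4 above, a hedged sketch: force the full scan with the hint, then look in the shared pool for both cursors (the LIKE pattern is illustrative — adjust it to match your literal statement text):

```sql
-- Force a full table scan on t despite the index
select /*+ full(t) */ object_name from t where object_id=29;

-- Both cursors sit in the shared pool, with parse vs. execution counts
select sql_text, parse_calls, executions
  from v$sql
 where sql_text like '%from t where object_id=29%';
```

Comparing parse_calls against executions is a quick way to see soft parsing (plan reuse) in action.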

An adventure of an UPDATE statement:

    1. If the user has still not exited the session, the saved content in the PGA does not need to be rebuilt. The SQL statement is again hashed into a hash value.
    2. With its hash value, the SQL enters the second zone, the shared pool. As before: if no matching hash value exists, Oracle checks syntax, then semantics (are the table and columns spelled right?), then privileges; once these are confirmed, the hash value is stored. Then comes hard parsing, and after optimizer analysis Oracle chooses the lowest-cost execution plan.
    3. With a plan in hand, the SQL moves to the third zone, the data buffer, and follows the execution plan to disk if necessary; data read from disk is placed back into the data buffer.
    4. Executing the UPDATE changes the value of the data in the buffer.
      1. This change makes the data blocks in memory dirty. DBWn is specifically responsible for writing dirty blocks to disk.
      2. Oracle records the change in the log buffer, and LGWR quickly writes it to the online log file.
      3. When an online log file fills up, a log switch occurs and ARCH writes it out to an archived log file. There is a speed issue here: if LGWR needs to write into an online log that ARCH has not yet finished archiving, the online log files are too small or too few.
    5. Commit: only when the user commits does the UPDATE take effect in the database; if the user abandons the change, rollback is used instead.
    6. The relationship between log files and data files: the purpose of the log is that after a crash, the system can replay the operations that were logged but not yet flushed to the data files. So if DBWn flushes data lazily, runtime performance is better, but the backlog of unflushed changes in the log grows and crash recovery takes longer. If DBWn flushes very diligently, recovery is fast but runtime performance suffers. CKPT exists to control this balance.
    7. When data is written from the data buffer to disk, it is CKPT that gives the order (DBWn simply obeys CKPT's commands). Moreover, if LGWR fails, DBWn stops, because DBWn must wait for LGWR to record an operation in the log before writing the corresponding data to disk — "nothing happens without a record", i.e. write-ahead logging.
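The log-buffer activity described above can be watched per session. A minimal sketch using the standard v$mystat/v$statname views; run it before and after the UPDATE to see how much redo the statement generated:

```sql
-- Redo generated by the current session so far
select n.name, s.value
  from v$mystat s
  join v$statname n on n.statistic# = s.statistic#
 where n.name in ('redo size', 'redo entries');
```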

Choosing the model worker: LGWR

  1. PMON: the process monitor. If the server process running your SQL UPDATE crashes, PMON automatically rolls it back for you; if another process such as RECO crashes, PMON restarts it; if LGWR crashes, PMON terminates the entire instance to avoid inconsistent data.
  2. SMON: the system monitor. Unlike PMON, SMON is concerned with system-level operations rather than individual processes. Its focus is instance recovery, plus cleaning up temporary segments and rollback-segment tablespaces, coalescing free space, and so on.
  3. LCKn: used only in RAC, for locking between instances; n runs from 1 to 10.
  4. RECO: used for distributed recovery. For example, with three distributed databases A, B, and C and an application spanning all three, a commit only takes effect if it succeeds in all three libraries; otherwise everything is rolled back.
  5. CKPT: controls the frequency of DBWn writes via the fast_start_mttr_target parameter; the shorter the target, the shorter the recovery time, but the more frequent the writes and the worse the performance.
  6. DBWn: a core process of the system, responsible for writing data to disk; it takes orders from CKPT and works closely with LGWR.
  7. LGWR: also a core process, responsible for writing the log buffer to the log files on disk. The log must be written strictly in sequence; any gap in the middle makes all subsequent logs useless, which is why multiplexing is critical. And because the log must be sequential, LGWR cannot be multi-process — it is a single process, or the ordering would be scrambled. To guarantee that it writes the log files promptly and reliably, LGWR imposes five rules on itself, each protecting log reliability from a different angle:
      1. Run every 3 seconds, writing the last 3 seconds of log to disk.
      2. Any commit triggers LGWR to run once and write the committed changes to disk.
      3. Before DBWn writes data from the cache to disk, LGWR is triggered to run once and write the related log records to disk.
      4. When the log buffer is 1/3 full, or holds 1 MB of log, LGWR is triggered to write that content to disk.
      5. An online log file switch triggers LGWR to run once and write the pre-switch log to disk.
  8. ARCH: a log switch triggers the ARCH process; an online log may not be overwritten until it has been archived.
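The workers above can be listed on a running instance. A small sketch: v$bgprocess lists every possible background process, and a non-null process address (paddr) marks the ones actually alive:

```sql
-- Background processes currently running in this instance
select name, description
  from v$bgprocess
 where paddr <> '00'
 order by name;
```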

Rollback segments: what happens during an UPDATE statement.

    1. First find the record to be updated and read it into the data buffer (same as a query).
    2. Allocate space in the rollback (undo) tablespace, create a before-image of the record in the data buffer, write the image to the rollback segment via DBWn, and log this action.
    3. With step 2 done, modify the original record (the one that now has an image), and write the modification to the log as well.
    4. At this point, if the user commits, LGWR immediately writes the commit record. The transaction is also marked inactive in the rollback segment's transaction table, which means its undo space may now be overwritten.
    5. If the user rolls back instead, Oracle reads the before-image from the rollback segment and restores the modified record. This process is also logged.
    6. undo_management=auto: rollback segments are managed automatically.
    7. undo_retention=900: the contents of the rollback segment may be overwritten 900 seconds after commit.
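The two undo parameters in items 6-7 can be inspected and, in the case of undo_retention, changed online. A sketch, assuming the instance uses automatic undo management:

```sql
show parameter undo_management
show parameter undo_retention

-- undo_retention is dynamic, so no restart is needed
alter system set undo_retention = 900 scope=both;
```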

How the type of change affects the pressure on the rollback segment:

    1. INSERT puts the least pressure on the rollback segment: only the rowid is logged, and a rollback simply deletes the row at that location. DELETE puts the most pressure: the entire row must be saved so it can be reinserted.
    2. For the log (redo) files, the reverse is true.
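The insert-vs-delete difference can be seen directly, assuming the table t created earlier still exists: open a transaction, then check the undo it holds in v$transaction before rolling back:

```sql
-- Undo held by the current open transaction after an INSERT
insert into t select * from all_objects where rownum = 1;
select used_ublk, used_urec from v$transaction;
rollback;

-- Repeat with a DELETE: expect noticeably more undo
delete from t where object_id = 29;
select used_ublk, used_urec from v$transaction;
rollback;
```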

Read consistency: the rule for queries. If you start a query at 9:00, the database must return the values as of 9:00, not as of 9:01. This is called read consistency.

    1. SCN stands for System Change Number; it exists down at the smallest unit of the database, the block. Every change to a block causes its SCN to advance.
    2. The rollback segment records a transaction table whose slots are used to allocate rollback segment space. When you update a block, the transaction is written into a slot; while it is neither committed nor rolled back, the slot is active. A session reading the block can recognize this situation.
    3. During a query, if a block's SCN is smaller than the query's starting SCN, check whether an active transaction touches it; if so, read the before-image from the rollback segment. If the block's SCN is larger than the query's starting SCN, likewise read back through the rollback data.
    4. Oracle returns an error rather than an inconsistent read: if the before-image in the rollback segment has been overwritten by repeated modifications, Oracle returns an ORA-01555 error.
    5. The ITL slots in the block header record the SCN, from which the corresponding undo tablespace entry can be found when needed.
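The SCN that anchors read consistency is easy to watch advance. A sketch using two standard ways of reading the current SCN (available from 10g onward):

```sql
select current_scn from v$database;

select dbms_flashback.get_system_change_number as scn from dual;
```

Run either query twice while other sessions are committing and the number moves forward.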

Practice:

  1. Memory experience: SGA/PGA:
     sqlplus / as sysdba
     show parameter sga
     show parameter pga
     show parameter shared_pool_size
     show parameter db_cache_size
     show parameter log_buffer
     ipcs -m (view shared memory at the operating-system level)
      1. The SGA is now generally auto-managed; just set a couple of overall parameters, e.g. sga_max_size=8g and sga_target=2g: Oracle is normally given 2 GB, can grow if that is not enough, and never exceeds 8 GB. From 11g onward there are also memory_max_target and memory_target; if you set these, you do not need to set the SGA and PGA separately. To size the SGA manually instead, set sga_target=0 and give shared_pool_size and db_cache_size nonzero values.
      2. log_buffer is not automatically assigned; it must be set manually, and roughly 15 MB is generally enough: since it is flushed every time it is 1/3 full and for every 1 MB of content, allocating more brings little benefit.
      3. Modification syntax: ALTER SYSTEM SET <parameter>=<value> SCOPE={MEMORY|SPFILE|BOTH}.
        1. If the instance was started with a pfile, the default scope is MEMORY; if started with an spfile, the default is BOTH.
        2. Some parameters only take effect after a restart, such as log_buffer and sga_max_size; changing them in memory raises ORA-02095: specified initialization parameter cannot be modified.
        3. If the instance cannot start because the parameter file is broken:
          1. > create pfile='/tmp/swat.ora' from spfile;
          2. Edit /tmp/swat.ora at the operating-system level.
          3. > create spfile from pfile='/tmp/swat.ora';
          4. > startup
  2. Experience with the processes:
     show parameter instance_name
     ps -ef | grep arc (from the OS shell)
     archive log list
     shutdown immediate
     startup mount
     alter database archivelog;
     alter database open;
  3. Startup and shutdown experience:
     startup nomount: reads the parameter file and starts the instance using the parameters it specifies.
     startup mount: locks the control files. The control files record the data files, log files, and checkpoint information.
     startup: equivalent to startup nomount, then alter database mount, then alter database open; the information in the control files is compared with the state of the database files, and if the states agree the database opens smoothly.
     shutdown immediate: ends all sessions and rolls back uncommitted operations.
     shutdown abort: kills all sessions outright with no rollback; the data is inconsistent and the database must recover after restart.
     shutdown transactional: sessions with no open transaction are killed; sessions with a transaction are allowed to commit first, so no rollback is needed.
     shutdown normal: no new session may connect, and Oracle very politely waits for all existing sessions to exit before shutting down.
  4. Experience with the files:
     show parameter spfile (parameter file location)
     show parameter control_files (control file location)
     set linesize 1000
     col file_name format a100
     col member format a100
     select file_name from dba_data_files; (data file locations)
     select member from v$logfile; (log file locations)
     show parameter dump (alert log / trace file location)
     ls -lart alert* (from the OS shell)
  5. The listener: lsnrctl status | lsnrctl stop | lsnrctl start
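A quick check that ties the practice items together — instance status, archiver state, and open mode, all from standard v$ views:

```sql
select instance_name, status, archiver from v$instance;

select name, log_mode, open_mode from v$database;
```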
