Two processes that are easily overlooked in Oracle databases


There are two very small processes in the Oracle database whose role is nevertheless very large. Because they are small, they are often overlooked by database administrators. This article is a reminder not to ignore them: used well, they can reduce the system administrator's workload.
1. The system monitor process SMON

This process can be summed up in one sentence: it is small, but it carries a heavy load. It is not responsible for many tasks, yet it plays a key role in both data safety and database performance. For example, as data in a tablespace is continually created, deleted, and updated, free-space fragments accumulate in the tablespace, and because of these fragments database performance gradually declines. The system monitor process SMON addresses this: it coalesces adjacent free fragments in each tablespace into larger extents, making it easier for the database to allocate contiguous space and thereby improving performance. In addition, the instance may fail because of a power outage or some other fault while the database is running. At that moment, dirty blocks in the data buffer cache have not yet been written to the data files, so committed changes would otherwise be lost. The next time the database starts, SMON automatically reads the redo log files and performs recovery: it rolls forward committed transactions that were recorded in the redo log but not yet written to the data files, and rolls back transactions that were not committed. So although SMON is a small process, it plays a very important role in the Oracle database.
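As a rough illustration of the fragmentation SMON works on, the following sketch lists the free extents in each tablespace; many small free extents and a small "largest free extent" suggest fragmentation that SMON coalesces automatically in dictionary-managed tablespaces. The query is only an example of how one might look, not a prescribed check.

```sql
-- Sketch: count and size the free extents per tablespace.
-- Many small extents relative to the total free space hints at
-- fragmentation that SMON can coalesce.
SELECT tablespace_name,
       COUNT(*)         AS free_extents,
       SUM(bytes)/1024  AS free_kb,
       MAX(bytes)/1024  AS largest_free_kb
  FROM dba_free_space
 GROUP BY tablespace_name
 ORDER BY tablespace_name;
```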

When managing this process, I think two points deserve attention. First, the startup timing. The system monitor process is started when the instance starts. While the instance is running, the process is also woken up regularly by the system to check whether there is work for it to do. Most importantly, when necessary, SMON can also be called by other processes that detect work for it, rather than waiting for its next scheduled wake-up, so specific tasks get done promptly.
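If you want to confirm that SMON is running for the current instance, a minimal sketch is to join the background-process view to the process view; the column choice here is only an example.

```sql
-- Sketch: show the SMON background process and its operating-system
-- process id (spid) for the current instance.
SELECT b.name, b.description, p.spid
  FROM v$bgprocess b
  JOIN v$process   p ON p.addr = b.paddr
 WHERE b.name = 'SMON';
```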

Second, pay attention to the effect of tablespace configuration on this process. One tablespace storage parameter is PCTINCREASE. If it is set to 0, the usefulness of the SMON system monitor process for that tablespace is reduced: with PCTINCREASE at 0, SMON will not coalesce the free fragments in the tablespace. In that case the database administrator has to resolve tablespace fragmentation by exporting and re-importing the data, which obviously increases the administrator's workload. For this reason, I do not set this parameter to 0 unless there is a specific need, and instead let SMON manage the fragments in the tablespace and merge its free space automatically. Note, however, that setting this parameter to 0 for a tablespace does not affect the other duties of the system monitor process; for example, it does not affect recovery when the instance is shut down abnormally. That is, even with this parameter set to 0, the records in the redo log files can still be used to recover the relevant data when necessary.
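A minimal sketch of how this setting might be inspected and adjusted follows. It assumes dictionary-managed tablespaces (PCTINCREASE has no effect on locally managed ones), and the tablespace name USERS and the value 1 are examples only.

```sql
-- Sketch: check the default PCTINCREASE of each tablespace; 0 means
-- SMON will not coalesce its free extents automatically.
SELECT tablespace_name, pct_increase, extent_management
  FROM dba_tablespaces;

-- Keep PCTINCREASE above 0 so SMON coalesces free space on its own...
ALTER TABLESPACE users DEFAULT STORAGE (PCTINCREASE 1);

-- ...or merge adjacent free extents manually when it is left at 0.
ALTER TABLESPACE users COALESCE;
```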
2. The archive process ARCH

There are two modes for managing redo log files: archive and non-archive. If, at a log switch, the old log file is simply overwritten without being archived, the database is in non-archive mode. If, on the contrary, a log file is archived before it can be written over again, the database is in archive mode. The archive process ARCH is responsible for copying a filled redo log file to an archived log file after the log switch, so that its contents are not lost when the redo log files are written over cyclically. The ARCH process is therefore started only when the database runs in archive mode. In either mode the redo log files are reused in a circular fashion, so in archive mode, when the LGWR process switches into a log file that has not yet been archived, the database is temporarily suspended; it does not resume until that redo log file has been archived. Archiving can therefore affect database performance, especially when large volumes of data are changed frequently.
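As a sketch of how the two modes are distinguished in practice, the commands below check the current log mode and show the usual sequence for switching a database into archive mode. They assume a SQL*Plus session with SYSDBA privileges and a database that can be restarted cleanly.

```sql
-- Sketch: which mode is the database in?
SELECT log_mode FROM v$database;      -- ARCHIVELOG or NOARCHIVELOG

-- Typical sequence for enabling archive mode (requires a restart).
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

ARCHIVE LOG LIST                      -- SQL*Plus summary of archive status
```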

So what methods can be used to improve the efficiency of archiving jobs? The following are some suggestions for the database administrator.

First, you can increase the number of archiver processes. By default, only one archive process ARCH is started for an instance, and while that process is archiving a redo log file, no other process can work on it. Oracle can start multiple archiver processes (ARC0, ARC1, and so on) as needed, and they can be started either automatically or manually. To speed up the archiving of redo log files, the LGWR process automatically starts additional archiver processes when the existing ones fall behind and other processes would otherwise have to wait, up to a maximum of 10. Alternatively, if the database administrator expects the archiving workload to affect performance when deploying the database, multiple archiver processes can be started manually. This is controlled by the initialization parameter LOG_ARCHIVE_MAX_PROCESSES: set it to a value greater than 1 (up to the maximum of 10), and the database starts that many archiver processes when the instance is created. However, I still prefer to let the database manage this automatically; it is usually best for the administrator not to interfere.

Note the difference between the number of ARCH processes and the number of DBWR processes. By default there is only one DBWR process, and its count can also be increased to improve performance, but that increase is limited by the number of CPUs: each DBWR process should have its own CPU, so starting three DBWR processes calls for three processors. The ARCH archiver process has no such restriction; even with a single CPU, three or more ARCH processes can be started.
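A minimal sketch of adjusting and observing the archiver count is shown below; the value 4 is only an example.

```sql
-- Sketch: raise the ceiling on archiver (ARCn) processes.
-- LOG_ARCHIVE_MAX_PROCESSES is a dynamic parameter.
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;

-- Current archiver processes and what they are doing.
SELECT process, status, log_sequence, state
  FROM v$archive_processes;
```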

Second, you can increase the size of the redo log files to lengthen the interval between runs of the archiver. The ARCH archive process is normally triggered only when the current redo log file is full and a log switch is required, so larger redo log files mean longer intervals between log switches and, correspondingly, longer intervals between archiver runs. By adjusting the size of the redo log files you therefore reduce how often the archiving process starts and limit its negative impact on database performance.
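Redo log files cannot be resized in place, so a common approach, sketched below, is to add larger groups and then drop the old ones once they are no longer current. The group numbers, file paths, and the 512M size are examples only, and an old group can be dropped only after it is INACTIVE (and archived, in archive mode).

```sql
-- Sketch: check current redo log sizes.
SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$log;

-- Add larger groups (paths and sizes are examples).
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/orcl/redo04.log') SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/orcl/redo05.log') SIZE 512M;

-- Switch until an old group becomes INACTIVE, then drop it.
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
```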

Third, a large amount of data may need to be imported during database initialization. Such an operation performs a great many inserts, deletes, and updates, which makes the redo log files switch frequently and in turn makes the database start the ARCH archive process frequently. Heavy update activity, the redo log files (the LGWR process), and the archiving of those files (the ARCH process) form an invisible chain, and through a kind of butterfly effect this chain drags down database performance. When necessary, you can break the chain to protect performance. For example, before importing, updating, or deleting a large amount of data, you can temporarily add redo log files or temporarily enlarge them; while these operations run, log switches are then avoided or spaced further apart, so the ARCH process starts less often (or not at all) and database performance improves. After the initialization is complete, restore the original configuration. Although these temporary adjustments are a little troublesome, they do improve database performance, and in my view that makes them worthwhile.
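To judge whether a bulk load really is driving frequent log switches, one possible check, sketched below, is to count switches per hour from the log history; the one-day window is an arbitrary example.

```sql
-- Sketch: log switches per hour over the last day. A spike during a
-- bulk load suggests temporarily adding or enlarging redo log groups.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS log_switches
  FROM v$log_history
 WHERE first_time > SYSDATE - 1
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY hour;
```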

As you can see, these two processes play only a limited role in the Oracle database, yet they are closely tied to its performance. In daily operations, using their characteristics flexibly can speed up certain tasks, and doing so is far easier than optimizing SQL statements to improve performance. My advice to database administrators is therefore this: although these two processes are small, their functions must not be ignored, and they deserve your attention.
 
