The PostgreSQL autovacuum System Automatic Cleanup Process


"PostgreSQL Database kernel analysis," the 2nd Chapter PostgreSQL Architecture, this chapter from the macro on the control and processing of PostgreSQL a brief introduction, explains how each module is working together, so that the entire database system can be stable, Handles the user's various actions and requests correctly. As to how each module functions and how it works, it will be introduced specifically in the following chapters. This section introduces the Autovacuum system automatic cleanup process.


2.5.5 The autovacuum System Automatic Cleanup Process

In PostgreSQL, an UPDATE or DELETE on a table does not immediately remove the old version of the tuple; old tuples are only marked as deleted, and their space is not freed right away. This behavior is required by multi-version concurrency control: as long as a tuple version might still be visible to some other transaction, that version cannot be removed. Once no transaction can see an expired tuple version any longer, the space it occupies should be reclaimed for new tuples, otherwise disk usage would grow without bound; this cleanup is done by running VACUUM. Starting with PostgreSQL 8.1, the database provides an optional worker process, autovacuum (the system automatic cleanup process), which automates the VACUUM and ANALYZE commands, reclaims the space of records marked as deleted, and keeps table statistics up to date.
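The commands that autovacuum automates can also be run by hand; as a minimal illustration (the table name mytable is only a placeholder):

    -- Reclaim the space of dead tuples and refresh the statistics of one table
    -- ("mytable" is a hypothetical table name).
    VACUUM ANALYZE mytable;

    -- Refresh the planner statistics only, without reclaiming space.
    ANALYZE mytable;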

In the PostgreSQL system configuration file (postgresql.conf), the main parameters related to the system automatic cleanup are the following (a query for inspecting their current values is shown after the list):

autovacuum: whether the system automatic cleanup facility is enabled; the default value is on.

autovacuum_max_workers: the maximum number of automatic cleanup worker processes.

autovacuum_naptime: the interval between two automatic cleanup runs.

autovacuum_vacuum_threshold and autovacuum_analyze_threshold: the minimum numbers of changed tuples on a table above which a VACUUM or an ANALYZE, respectively, is triggered.

autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor: the scale factors applied to the table size when computing the thresholds.

autovacuum_freeze_max_age: the maximum transaction ID (XID) age beyond which a forced cleanup of the database is performed.
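The current values of these parameters can be listed from SQL; a simple way, assuming nothing beyond the standard pg_settings view, is:

    -- List the current values of all autovacuum-related parameters.
    SELECT name, setting
    FROM pg_settings
    WHERE name LIKE 'autovacuum%';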

The autovacuum system automatic cleanup facility consists of two kinds of processes: the autovacuum launcher and the autovacuum workers. The autovacuum launcher is a monitoring process; it collects database run-time information, selects a database according to the selection rules, and schedules an autovacuum worker process to perform the cleanup. The launcher selects a database as follows. First, because a transaction ID (XID) is a 32-bit integer that is allocated incrementally, the counter wraps around and starts again from the beginning once its maximum value is exceeded, and the magnitude of an XID reflects when the transaction began. A wraparound of the XID counter could make some transactions' data in the database appear lost, so when a database's XID age exceeds the configured autovacuum_freeze_max_age, that database is forcibly cleaned up and its transaction XIDs are refreshed. Second, if no forced cleanup is required, the launcher selects the database in the list that has gone the longest without an automatic cleanup. The launcher selects a database periodically (or when it is signaled) and dispatches a worker process to carry out the cleanup work.
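The XID-age criterion can be observed from SQL. The following query is only an illustration of the rule, not the launcher's actual code path; it compares each database's transaction ID age with autovacuum_freeze_max_age:

    -- Transaction ID age of each database versus the forced-cleanup limit.
    SELECT datname,
           age(datfrozenxid) AS xid_age,
           current_setting('autovacuum_freeze_max_age')::int AS freeze_max_age
    FROM pg_database
    ORDER BY age(datfrozenxid) DESC;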

The autovacuum worker processes perform the actual cleanup work, and the launcher maintains the list of workers. This list is really three lists, one per state: the idle workers, the worker currently being started, and the running workers; the launcher moves entries between these lists to schedule the workers. First, in the memory context created during the initialization phase, a free list of worker description entries of length autovacuum_max_workers is built, while the starting and running lists are set to empty. When the launcher needs a worker, the free list is not empty, and no worker is currently being started, it initiates a worker start-up: it sends a start request to the postmaster process, takes one description entry off the free list, and marks it as starting. Only one worker may be in the starting state at a time; if a starting worker does not come up before the timeout (set by autovacuum_naptime), the attempt is cancelled and the start-up cycle is retried. If the worker starts successfully, its entry is added to the running list. A running worker is connected to the database that was chosen according to the selection rules.
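In more recent PostgreSQL releases (10 and later, newer than the version the book describes), the running workers can be observed directly from the pg_stat_activity view; the query below is only an illustration of the running state:

    -- Show the autovacuum worker backends that are currently running.
    SELECT pid, datname, query
    FROM pg_stat_activity
    WHERE backend_type = 'autovacuum worker';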

Once a worker has successfully connected to its database, it traverses the tables in that database and, based on per-table cleanup rules, decides which tables to process and which operation to perform on each of them. Two operations are possible, VACUUM and ANALYZE. A selected table is vacuumed if the number of tuples that have expired since the last VACUUM exceeds the vacuum threshold, which is defined as:

vacuum threshold = vacuum base threshold + vacuum scale factor * number of tuples

Here the vacuum base threshold is autovacuum_vacuum_threshold, the vacuum scale factor is autovacuum_vacuum_scale_factor, and the number of tuples is obtained from the statistics collector. This count is only approximately accurate; it is updated by every UPDATE and DELETE operation.
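As a rough sketch of the same comparison (not autovacuum's internal code path), the dead-tuple count from pg_stat_user_tables and the tuple estimate from pg_class can be combined as follows:

    -- Tables whose dead-tuple count exceeds the computed vacuum threshold.
    SELECT s.relname,
           s.n_dead_tup,
           current_setting('autovacuum_vacuum_threshold')::integer
             + current_setting('autovacuum_vacuum_scale_factor')::float8 * c.reltuples
             AS vacuum_threshold
    FROM pg_stat_user_tables s
    JOIN pg_class c ON c.oid = s.relid
    WHERE s.n_dead_tup >
          current_setting('autovacuum_vacuum_threshold')::integer
          + current_setting('autovacuum_vacuum_scale_factor')::float8 * c.reltuples;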

Similarly, if the number of tuples inserted, updated, or deleted since the table was last analyzed exceeds the analyze threshold, the table is analyzed and its statistics are updated. The analyze threshold is defined in the same way as the vacuum threshold:

analyze threshold = analyze base threshold + analyze scale factor * number of tuples
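The analogous check can be sketched with the n_mod_since_analyze column of pg_stat_user_tables (this column only exists in PostgreSQL 9.4 and later, so it is newer than the release described here):

    -- Tables whose changes since the last ANALYZE exceed the analyze threshold.
    SELECT s.relname,
           s.n_mod_since_analyze,
           current_setting('autovacuum_analyze_threshold')::integer
             + current_setting('autovacuum_analyze_scale_factor')::float8 * c.reltuples
             AS analyze_threshold
    FROM pg_stat_user_tables s
    JOIN pg_class c ON c.oid = s.relid
    WHERE s.n_mod_since_analyze >
          current_setting('autovacuum_analyze_threshold')::integer
          + current_setting('autovacuum_analyze_scale_factor')::float8 * c.reltuples;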

The default thresholds and scale factors are taken from postgresql.conf, but they can be overridden for individual tables by inserting entries into the system table pg_autovacuum. A tuple in pg_autovacuum records one table that needs automatic cleanup together with its cleanup settings, and the autovacuum process uses those settings when it cleans that table. For tables that have no specific entry, autovacuum falls back to the global settings.
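In releases after the one described here (8.4 and onward), pg_autovacuum was removed and the same per-table override is written as storage parameters on the table; the sketch below uses that newer syntax with a hypothetical table name:

    -- Per-table override of the autovacuum thresholds ("mytable" is hypothetical;
    -- the values are arbitrary examples).
    ALTER TABLE mytable SET (
        autovacuum_vacuum_threshold = 200,
        autovacuum_vacuum_scale_factor = 0.1,
        autovacuum_analyze_threshold = 100,
        autovacuum_analyze_scale_factor = 0.05
    );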

1. The autovacuum Launcher Process

Normally the postmaster creates the autovacuum launcher as a child process: the entry function StartAutoVacLauncher performs a fork, the newly created child process closes the network connection ports copied from the postmaster, and then enters the launcher's main function AutoVacLauncherMain. Its processing flow is shown in Figure 2-12.

Figure 2-12 Processing flow of the autovacuum launcher process

A few important steps deserve explanation:

1) Build the database list: the function rebuild_database_list completes this with the following steps:

① Establish a hash table in which each element represents a database and records the database's OID (adl_datid), the timestamp at which its next worker should be started (adl_next_worker), and a score value (adl_score). Initially the hash table contains no elements.

② Read the databases from the pg_database flat file (in the PGDATA/global directory) into a linked list; each node in the list represents one database and includes the database's OID, its name, its statistics entry, and so on.

③ Call pgstat_fetch_stat_dbentry to fill in the statistics entry of each node.

④ For each database whose statistics entry is not empty, look the database up in the hash table; if it is not found, add it to the hash table and set its adl_score to the order in which it was added.

⑤ Insert the databases from the hash table, in ascending order of their adl_score values, into the linked list pointed to by the global variable DatabaseList, and set the adl_next_worker value of each database. The adl_next_worker of the first database is set to the current time, and each subsequent database's adl_next_worker is millis_increment later than the previous one, where millis_increment is the value of the autovacuum_naptime parameter divided by the number of databases in the hash table (for example, with autovacuum_naptime set to 60 seconds and three databases, the workers are spaced roughly 20 seconds apart).

Why use flat files?

A PostgreSQL database cluster provides two flat files, pg_database and pg_auth. These files record part of the information stored in the pg_database and pg_authid system tables, respectively. Background processes that have not finished starting up cannot yet connect to a database and read the relevant system tables, so they obtain the information they need from these two flat files instead.

In the "build database list" step the launcher is not yet connected to any database, so it can only use the flat file rather than the pg_database system table.

2) Set the process sleep time: the sleep time is computed from the idle-worker list and the database list, and a longer sleep time is set when all worker processes are running. The sleep can be woken up when a worker process exits, and it can also be interrupted by other signals.

3) In the signal-handling branch, the got_SIGUSR1 flag indicates either that a worker process has exited or that the postmaster reports a worker failed to start. If the postmaster reports that a worker failed to start, the message asking the postmaster to start a worker process is sent again.

4) Start a worker process: if a worker is currently being started, sleep for a while and wait for it to come up. If a new worker can be started, the following decision is made:

① If the database list is not empty, check the adl_next_worker field of the database at the tail of DatabaseList; if it is earlier than the current time (meaning the database should have been processed long ago), start a worker process for it.

② If the database list is empty, start a worker process immediately.

2. The autovacuum Worker Process

The entry point of the autovacuum worker is the launch_worker function, which calls do_start_worker to create the worker process and returns the OID of the database the worker connected to. If the returned OID is a valid database OID, the database list is traversed to find the node corresponding to that OID, that node's adl_next_worker field is updated, and the node is moved to the head of the list. If no node with that OID is found while traversing the list, rebuild_database_list is called to rebuild it. The body of do_start_worker, which creates the worker process, is shown in Figure 2-13.

Figure 2-13 Processing flow of the autovacuum worker process

The processing flow of the autovacuum worker is similar to that of the autovacuum launcher, and it selects the database to clean according to the same rules. Once the database is chosen, the worker traverses the tables in it, computes the vacuum and analyze thresholds from each table's statistics, and decides whether the corresponding operation should be performed on the table.

In addition to the automatic cleanup performed by the system, users can clean up data manually with the vacuumdb and vacuumlo tools in the bin folder of the installation directory (for example, vacuumdb --all --analyze vacuums and analyzes every database). The vacuumdb tool vacuums a database and can also run the analyze step on it, while vacuumlo removes invalid (orphaned) large objects from a database.

