PostgreSQL provides several features that can help improve performance. The main aspects are as follows.
1. Use EXPLAIN
The EXPLAIN command shows a query's execution plan, as introduced in the previous blog. It is our most important tuning tool.
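As a minimal sketch (assuming the tenk1 table from the statistics example below, with a unique1 column), EXPLAIN prints the estimated plan without running the query, while EXPLAIN ANALYZE also executes it and reports actual row counts and timings:

```sql
-- Estimated plan only; the query is not executed
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100;

-- Executes the query and shows actual rows and timing next to the estimates
EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 < 100;
```

Comparing the estimated and actual row counts is the quickest way to spot stale statistics.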
2. Keep planner statistics up to date
Statistics are not refreshed on every database operation; they are generally updated by maintenance commands such as VACUUM and ANALYZE, or during DDL such as CREATE INDEX.
The statistics available to the planner may therefore be stale, and its row estimates can be badly wrong as a result.
The following query lists some planner statistics for the tenk1 table and its indexes.
SELECT relname, relkind, reltuples, relpages
FROM pg_class
WHERE relname LIKE 'tenk1%';
       relname        | relkind | reltuples | relpages
----------------------+---------+-----------+----------
 tenk1                | r       |     10000 |      358
 tenk1_hundred        | i       |     10000 |       30
 tenk1_thous_tenthous | i       |     10000 |       30
 tenk1_unique1        | i       |     10000 |       30
 tenk1_unique2        | i       |     10000 |       30
(5 rows)
relkind is the object type (r for a table, i for an index), reltuples is the estimated row count, and relpages is the number of disk pages.
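To refresh these statistics by hand, run ANALYZE; a brief sketch:

```sql
-- Refresh planner statistics for one table
ANALYZE tenk1;

-- Or refresh statistics for every table in the current database
ANALYZE;
```

On modern PostgreSQL the autovacuum daemon usually does this automatically, but running ANALYZE manually right after a bulk load is still worthwhile.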
3. Join tables with explicit JOIN syntax
General syntax: SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id;
When the joins are written explicitly, the join order is fixed in the query text, which makes the execution plan easier to control.
Example:
SELECT * FROM a CROSS JOIN b CROSS JOIN c WHERE a.id = b.id AND b.ref = c.id;
SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id);
4. Disable autocommit (autocommit = false)
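With autocommit off, or equivalently with an explicit transaction, many statements share a single commit instead of flushing to disk after every statement. A minimal sketch, where t is a hypothetical table:

```sql
BEGIN;
INSERT INTO t VALUES (1);   -- t is a hypothetical table for illustration
INSERT INTO t VALUES (2);
-- ... many more statements ...
COMMIT;                     -- one commit, one WAL flush, for the whole batch
```

In psql, `\set AUTOCOMMIT off` achieves the same effect for an interactive session.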
5. Use the COPY command instead of many INSERTs
In some of our processes we need to run a large number of INSERTs against the same table. Using COPY there is more efficient: each individual INSERT carries its own statement overhead and updates all related indexes, and that cost adds up.
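A brief sketch of loading a table from CSV with COPY (the table name and file path are assumptions for illustration):

```sql
-- Server-side load: the file must be readable by the database server
COPY tenk1 FROM '/tmp/tenk1.csv' WITH (FORMAT csv);
```

When the file lives on the client machine instead, psql's `\copy tenk1 FROM 'tenk1.csv' WITH (FORMAT csv)` streams it through the client connection.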
6. Temporarily drop indexes
When the data volume is large, backing up and re-importing can take hours. In that case, drop the indexes first, import the data, and then re-create the indexes.
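Sketching the idea with the index from the statistics example above (the column name unique1 is an assumption):

```sql
-- Drop the index so the bulk load doesn't maintain it row by row
DROP INDEX IF EXISTS tenk1_unique1;

-- ... bulk import here, e.g. with COPY ...

-- Rebuild the index once, over all the data at the end
CREATE INDEX tenk1_unique1 ON tenk1 (unique1);
```

Building an index once over the finished table is much cheaper than updating it for every inserted row.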
7. Temporarily drop foreign key constraints
If the table has foreign keys, every insert must check them, which is slow. Dropping the foreign keys before a bulk import and re-creating them after the import is also an option.
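A minimal sketch; the table, column, and constraint names here are hypothetical:

```sql
-- Drop the constraint so the bulk load skips per-row checks
ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;

-- ... bulk import into orders ...

-- Re-create it afterwards; all rows are validated once, in a single pass
ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES customers (id);
```

The trade-off is that invalid rows are only detected at the end, when the constraint is re-added.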
8. Increase the maintenance_work_mem parameter size.
This parameter improves the execution efficiency of CREATE INDEX and ALTER TABLE ADD FOREIGN KEY.
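The parameter can be raised for a single session just before the expensive operation, so the server-wide setting stays untouched; a sketch (the value and index definition are assumptions):

```sql
-- More memory for sorting during the index build, this session only
SET maintenance_work_mem = '1GB';

CREATE INDEX big_idx ON tenk1 (unique2);  -- hypothetical index

-- Back to the server default
RESET maintenance_work_mem;
```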
9. Increase the checkpoint_segments parameter
Increasing this parameter can speed up bulk data loads by spacing checkpoints further apart.
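A sketch of the postgresql.conf change (the value 32 is an illustrative assumption; note that in PostgreSQL 9.5 and later this parameter was removed in favor of max_wal_size):

```
# postgresql.conf (PostgreSQL < 9.5)
checkpoint_segments = 32   # default is 3; each WAL segment is 16 MB

# PostgreSQL >= 9.5 equivalent:
# max_wal_size = 2GB
```

The setting takes effect after a configuration reload.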
10. Turn archive_mode off
When WAL archiving is off (and wal_level is minimal), the following operations can skip most WAL writing and run faster:
CREATE TABLE AS SELECT
CREATE INDEX
ALTER TABLE SET TABLESPACE
CLUSTER
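The corresponding postgresql.conf fragment might look like this (a sketch for a dedicated bulk-load window; changing these settings requires a server restart):

```
# postgresql.conf: disable WAL archiving during the bulk load
archive_mode = off
wal_level = minimal    # lets the operations listed above skip most WAL
```

Remember to restore the previous settings, and take a fresh base backup, once the load is finished, since minimal WAL breaks point-in-time recovery for that period.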
11. Run VACUUM ANALYZE
Running VACUUM ANALYZE is recommended after a table's data has changed substantially: it reclaims space from dead rows and refreshes the planner statistics in one pass.
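For instance, right after a large import or a mass UPDATE/DELETE on the example table:

```sql
-- Reclaim dead-row space and update planner statistics together
VACUUM ANALYZE tenk1;
```

Adding VERBOSE (`VACUUM (VERBOSE, ANALYZE) tenk1;`) prints a per-table progress report, which is handy for confirming how much was reclaimed.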