PostgreSQL Tutorial (10): Performance Improvement Tips


1. Use EXPLAIN:

PostgreSQL generates a query plan for every query it receives, and choosing the right plan has a critical impact on performance. PostgreSQL includes a planner (optimizer) whose job is to find an optimal plan, and the EXPLAIN command lets us view the plan the planner generates for any query.
The query plan generated by PostgreSQL is a tree consisting of one or more plan nodes. The bottom-level nodes are table scan nodes, which return raw rows from a table. Different scan node types correspond to different table access methods, such as sequential scans, index scans, and bitmap index scans. If the query requires joining, aggregation, sorting, or other operations on the raw rows, additional nodes appear above the scan nodes. There is usually more than one way to perform these operations, so different node types may appear in these positions as well. EXPLAIN outputs one line for each node in the plan tree, showing the basic node type and the cost estimates the planner computed for executing that node. The first line (the topmost node) shows the estimated total cost of executing the plan; this is the number the planner tries to minimize.
Here is a simple example:
The code is as follows:
EXPLAIN SELECT * FROM tenk1;
QUERY PLAN
-------------------------------------------------------------
 Seq Scan on tenk1  (cost=0.00..458.00 rows=10000 width=244)

The values reported by EXPLAIN are:
1). The estimated start-up cost (the time expended before output begins, for example the time spent sorting in a sort node).
2). The estimated total cost.
3). The estimated number of rows output by this plan node.
4). The estimated average width of the rows output by this plan node (in bytes).
The cost is measured in units of disk page fetches; for example, 1.0 represents one sequential disk page read. The cost of an upper-level node includes the cost of all of its child nodes. The row count (rows) is not the number of rows processed or scanned by the node, but the number it outputs, which is usually smaller. The row estimate of the topmost node is generally close to the number of rows the query actually returns.
Now let us run the following query against the system catalogs:
The code is as follows:
SELECT relpages, reltuples FROM pg_class WHERE relname = 'tenk1';
 
The query result shows that the tenk1 table occupies 358 disk pages and contains 10000 rows. However, to compute the cost value we also need to know another system parameter value.
The code is as follows:
postgres=# SHOW cpu_tuple_cost;
 cpu_tuple_cost
----------------
 0.01
(1 row)
Cost = 358 (number of disk pages) + 10000 (number of rows) * 0.01 (the cpu_tuple_cost parameter) = 458.00, which matches the total cost reported in the plan above.
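As a rough cross-check, the per-page cost and the arithmetic can be inspected directly; a minimal sketch (seq_page_cost defaults to 1.0 on versions that expose the parameter):
SHOW seq_page_cost;                                 -- cost of one sequential page fetch, typically 1.0
SELECT 358 * 1.0 + 10000 * 0.01 AS estimated_cost;  -- 358.00 + 100.00 = 458.00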

Next, let's look at a query plan that includes a WHERE condition:
The code is as follows:
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 7000;

QUERY PLAN
------------------------------------------------------------
 Seq Scan on tenk1  (cost=0.00..483.00 rows=7033 width=244)
   Filter: (unique1 < 7000)

The EXPLAIN output shows that the WHERE clause is applied as a "Filter". This means the plan node scans every row in the table, checks each one against the filter condition, and outputs only the rows that pass. Because of the WHERE clause, the estimated number of output rows is reduced. Even so, the scan still has to visit all 10000 rows, so the cost is not actually reduced; in fact it increases slightly because of the extra CPU time spent evaluating the condition.
These figures are only estimates, and they can change each time the ANALYZE command is run, because the statistics that ANALYZE produces are computed from a random sample of the table.
If we make the condition in the above query more selective, we get a different query plan, for example:
The code is as follows:
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100;

QUERY PLAN
------------------------------------------------------------------------------
 Bitmap Heap Scan on tenk1  (cost=2.37..232.35 rows=106 width=244)
   Recheck Cond: (unique1 < 100)
   ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..2.37 rows=106 width=0)
         Index Cond: (unique1 < 100)

Here the planner has decided on a two-step plan: the inner plan node visits an index to find the locations of the rows matching the index condition, and the upper plan node then fetches those rows from the table itself. Fetching rows individually is much more expensive than reading them sequentially, but because not all of the table's disk pages have to be visited, this is still cheaper than a sequential scan. The reason for the two-level plan is that the upper node first sorts the row locations returned by the index into physical order, which minimizes the cost of the separate page fetches. The "bitmap" mentioned in the node name is the mechanism that performs this sorting.

Now let's make the WHERE condition even more restrictive, for example:
The code is as follows:
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 3;

QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using tenk1_unique1 on tenk1  (cost=0.00..10.00 rows=2 width=244)
   Index Cond: (unique1 < 3)

In this query the table rows are fetched in index order, which makes each individual fetch more expensive; but since only a very small number of rows is retrieved, it is not worth sorting the row locations into physical order first.
Now let's add another condition to the WHERE clause, for example:
The code is as follows:
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 3 AND stringu1 = 'xxx';

QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using tenk1_unique1 on tenk1  (cost=0.00..10.01 rows=1 width=244)
   Index Cond: (unique1 < 3)
   Filter: (stringu1 = 'xxx'::name)

The added condition stringu1 = 'xxx' only reduces the estimated number of output rows; it does not reduce the cost, because the same set of rows still has to be visited. The condition is not treated as an index condition; it is applied as a filter on the rows returned by the index.
If several of the fields referenced in the WHERE clause have indexes, the planner may combine the indexes with AND or OR operations, for example:
The code is as follows:
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000;

QUERY PLAN
-------------------------------------------------------------------------------------
 Bitmap Heap Scan on tenk1  (cost=11.27..49.11 rows=11 width=244)
   Recheck Cond: ((unique1 < 100) AND (unique2 > 9000))
   ->  BitmapAnd  (cost=11.27..11.27 rows=11 width=0)
         ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..2.37 rows=106 width=0)
               Index Cond: (unique1 < 100)
         ->  Bitmap Index Scan on tenk1_unique2  (cost=0.00..8.65 rows=1042 width=0)
               Index Cond: (unique2 > 9000)

This plan requires both indexes to be visited, which is not necessarily better than using only one index and applying the other condition as a filter.
Now let's look at the query plan for a join on indexed fields, for example:
The code is as follows:
EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2;

QUERY PLAN
--------------------------------------------------------------------------------------
 Nested Loop  (cost=2.37..553.11 rows=106 width=488)
   ->  Bitmap Heap Scan on tenk1 t1  (cost=2.37..232.35 rows=106 width=244)
         Recheck Cond: (unique1 < 100)
         ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..2.37 rows=106 width=0)
               Index Cond: (unique1 < 100)
   ->  Index Scan using tenk2_unique2 on tenk2 t2  (cost=0.00..3.01 rows=1 width=244)
         Index Cond: ("outer".unique2 = t2.unique2)

From the query plan we can see that this query uses a nested loop join. The outer scan is the same bitmap scan we saw earlier, so its cost and row count are unchanged, because the condition unique1 < 100 is applied at that node; the t1.unique2 = t2.unique2 clause has no effect there and does not change the outer scan's row count. For the inner scan, the unique2 value of the current outer row is plugged into the inner index scan, producing a condition equivalent to t2.unique2 = constant. The inner scan therefore gets the same plan and cost as, for example, EXPLAIN SELECT * FROM tenk2 WHERE unique2 = 42. The cost of the nested loop node is then the cost of the outer scan, plus one repetition of the inner scan for each outer row (106 * 3.01), plus a little CPU time for join processing.
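For reference, the standalone query mentioned above can be explained on its own; a minimal sketch (42 is just an illustrative constant):
EXPLAIN SELECT * FROM tenk2 WHERE unique2 = 42;
-- Expected: an Index Scan using tenk2_unique2, with a cost close to the
-- per-iteration inner-scan cost shown in the nested loop plan above.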
If you do not want the planner to choose a nested loop for the above query, you can disable nested loops with the following setting, for example:
The code is as follows:
SET enable_nestloop = off;
EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2;

QUERY PLAN
------------------------------------------------------------------------------------------
 Hash Join  (cost=232.61..741.67 rows=106 width=488)
   Hash Cond: ("outer".unique2 = "inner".unique2)
   ->  Seq Scan on tenk2 t2  (cost=0.00..458.00 rows=10000 width=244)
   ->  Hash  (cost=232.35..232.35 rows=106 width=244)
         ->  Bitmap Heap Scan on tenk1 t1  (cost=2.37..232.35 rows=106 width=244)
               Recheck Cond: (unique1 < 100)
               ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..2.37 rows=106 width=0)
                     Index Cond: (unique1 < 100)

This plan still uses the same index scan to pull the 100 qualifying rows out of tenk1 and store them in an in-memory hash table; it then performs a sequential scan of the whole tenk2 table, probing the hash table for each tenk2 row to find possible matches for t1.unique2 = t2.unique2. Reading tenk1 and building the hash table are part of the start-up cost of this hash join, because no output rows can be produced until the scan of tenk2 begins.
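Note that enable_nestloop stays off for the rest of the session, so it is worth restoring the default after the experiment, for example:
RESET enable_nestloop;  -- re-enable nested loop joins for this session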

In addition, we can use the EXPLAIN ANALYZE command to check the accuracy of the planner's estimates. This command first executes the query, then displays the actual run time of each plan node alongside the estimated costs that a plain EXPLAIN would show, for example:
The code is as follows:
EXPLAIN ANALYZE SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=2.37..553.11 rows=106 width=488) (actual time=1.392..12.700 rows=100 loops=1)
   ->  Bitmap Heap Scan on tenk1 t1  (cost=2.37..232.35 rows=106 width=244) (actual time=0.878..2.367 rows=100 loops=1)
         Recheck Cond: (unique1 < 100)
         ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..2.37 rows=106 width=0) (actual time=0.546..0.546 rows=100 loops=1)
               Index Cond: (unique1 < 100)
   ->  Index Scan using tenk2_unique2 on tenk2 t2  (cost=0.00..3.01 rows=1 width=244) (actual time=0.067..0.078 rows=1 loops=100)
         Index Cond: ("outer".unique2 = t2.unique2)
 Total runtime: 14.452 ms

Note that the "actual time" values are measured in milliseconds of real elapsed time, while the "cost" estimates are expressed in units of disk page fetches, so the two are unlikely to match numerically. What matters is whether the ratios between the estimated and actual figures stay consistent.

In some query plans a sub-plan node may be executed more than once. For example, in the nested loop plan above the inner index scan is executed once per outer row. In such cases "loops" reports the total number of executions of the node, and the actual time and rows shown are averages per execution. This is done so that the real figures remain comparable to the cost estimates. To get the total time spent in the node, multiply the value by the value of "loops" (in the plan above, roughly 0.078 ms * 100 loops ≈ 7.8 ms for the inner index scan).
The "Total runtime" shown by EXPLAIN ANALYZE includes executor start-up and shut-down time, as well as the time spent processing the result rows, but it does not include parsing, rewriting, or planning time.
It is also worth pointing out that EXPLAIN results obtained on a test setup cannot simply be carried over to the real environment. For example, running EXPLAIN on a table that contains little data tells us nothing about how the same query will behave on a large table, because the planner's costs are not linear, so it may well choose a different plan for a larger or smaller table. An extreme example is a table that occupies only one disk page: on such a table you will almost always get a sequential scan plan whether or not indexes exist, because the planner knows it has to read one disk page in any case, so adding extra page reads to consult an index would be pointless.

2. Batch Data Insertion:

You can use the following methods to optimize batch data insertion.

1. Disable autocommit:

When inserting data in batches, if every row is committed automatically, a system failure in the middle of the load leaves the batch in an inconsistent state, and the many individual commit operations also severely hurt overall insert performance. The solution is to turn off autocommit: explicitly issue BEGIN TRANSACTION before the load starts, and execute COMMIT once after all the inserts are done.
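A minimal sketch of this pattern (mytable and its rows are placeholders):
BEGIN;                                    -- start one explicit transaction
INSERT INTO mytable VALUES (1, 'aaa');
INSERT INTO mytable VALUES (2, 'bbb');
-- ... many more inserts ...
COMMIT;                                   -- a single commit at the end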

2. Use COPY:

Use COPY to load all the rows in one command instead of issuing a series of INSERT commands. The COPY command is optimized for loading large numbers of rows; it is less flexible than INSERT, but its overhead is much lower for large data loads. Because COPY is a single command, there is no need to disable autocommit when using it to fill a table.
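A minimal sketch, assuming a hypothetical table mytable and a server-side file /tmp/mytable.csv:
-- server-side file load (the file must be readable by the database server process)
COPY mytable FROM '/tmp/mytable.csv' WITH CSV;
-- client-side alternative from psql:
-- \copy mytable FROM 'mytable.csv' WITH CSV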

3. Remove indexes:

If you are loading a freshly created table, the fastest method is to create the table, bulk-load the data with COPY, and only then create any indexes the table needs, because building an index on existing data is faster than maintaining it incrementally as each row is loaded. Of course, while the indexes are missing, the performance of other queries against the table will suffer, and uniqueness constraints may be violated without being detected.
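A minimal sketch of the load-then-index sequence (table, column, index, and file names are placeholders):
DROP INDEX IF EXISTS mytable_col_idx;           -- drop the index before the bulk load
COPY mytable FROM '/tmp/mytable.csv' WITH CSV;  -- load without per-row index maintenance
CREATE INDEX mytable_col_idx ON mytable (col);  -- build the index once, afterwards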

4. Remove foreign key constraints:
Just as with indexes, a foreign key constraint can be checked in bulk more efficiently than row by row. Therefore, we can drop the foreign key constraints first, load the data, and then recreate the constraints.
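A minimal sketch (the orders/customers tables and the constraint name are placeholders):
ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;  -- drop before loading
-- ... bulk-load the data with COPY ...
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_id_fkey
    FOREIGN KEY (customer_id) REFERENCES customers (id);     -- all rows checked in one pass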

5. Increase maintenance_work_mem:
When loading large amounts of data, temporarily increasing the maintenance_work_mem parameter can improve performance. This parameter mainly speeds up the CREATE INDEX and ALTER TABLE ADD FOREIGN KEY commands; it has little effect on the COPY operation itself.
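For example, the value can be raised for the loading session only (512MB is an illustrative figure to be sized against available memory):
SET maintenance_work_mem = '512MB';   -- affects only the current session
-- ... CREATE INDEX / ALTER TABLE ... ADD FOREIGN KEY ...
RESET maintenance_work_mem;           -- back to the server default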

6. Increase checkpoint_segments:
Temporarily increasing the checkpoint_segments parameter can also make loading large amounts of data more efficient. This is because a bulk load causes checkpoints to occur more often than the normal checkpoint frequency (set by the checkpoint_timeout parameter), and at each checkpoint all dirty data must be flushed to disk. Increasing checkpoint_segments effectively reduces the number of checkpoints.
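A minimal postgresql.conf sketch (32 is only an example value; note that on PostgreSQL 9.5 and later this parameter was replaced by max_wal_size):
# postgresql.conf
checkpoint_segments = 32   # example value; a configuration reload is required for it to take effect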

7. Run ANALYZE afterwards:
After adding or updating a large amount of data, run the ANALYZE command immediately so that the planner has up-to-date statistics for the table. If there are no statistics, or they are badly out of date, the planner may choose a poor query plan and query performance will suffer.
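For example, after the load (mytable is a placeholder; VACUUM ANALYZE can be used instead if the load also included updates or deletes):
ANALYZE mytable;            -- refresh the planner statistics for the loaded table
-- or, to reclaim dead row space as well:
-- VACUUM ANALYZE mytable;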
