Modify one line of code to improve Postgres performance 100x

Tags: CPU usage, Datadog

In a slow PostgreSQL query, a tiny change from ANY (ARRAY[...]) to ANY (VALUES (...)) was enough to cut the query time from 20 s to 0.2 s. Starting with the simple habit of using EXPLAIN ANALYZE, and continuing on to learning from the Postgres community, a little time invested in learning pays back a hundredfold.

Using Postgres to monitor slow Postgres queries

Earlier this week, a primary-key lookup on a medium-sized table (10 GB, 15 million rows) used by our graph editor ran into a serious performance problem on one of our databases.

99.9% of these lookups are fast and smooth, but queries that enumerate a large number of key values can take 20 seconds. That much time spent in the database means the user is left waiting in the browser for the graph editor to respond. Even though only the remaining fraction of a percent of queries is affected, the impact on those users is clearly unacceptable.
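
One common way to surface statements like these on the Postgres side (an assumption here, not necessarily what we used) is the pg_stat_statements extension, which keeps per-query execution statistics. A minimal sketch:

-- Minimal sketch, assuming pg_stat_statements is listed in
-- shared_preload_libraries and the extension is created in this database.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Show the statements that consume the most cumulative execution time.
-- (On PostgreSQL 13+ the columns are named total_exec_time / mean_exec_time.)
SELECT query,
       calls,
       total_time,
       mean_time,
       rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;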

Query and query plans

Here's the problem query.

SELECT c.key,
       c.x_key,
       c.tags,
       x.name
FROM context c
JOIN x
  ON c.x_key = x.key
WHERE c.key = ANY (ARRAY[15368196 /* , ...11,000 other keys... */])
  AND c.x_key = 1
  AND c.tags @> ARRAY[E'blah'];
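
To see where the time goes, you can prefix the statement with EXPLAIN (ANALYZE, BUFFERS), which runs the query and reports actual per-node timings and buffer usage; a minimal sketch, with the key list abbreviated as above:

EXPLAIN (ANALYZE, BUFFERS)
SELECT c.key, c.x_key, c.tags, x.name
FROM context c
JOIN x ON c.x_key = x.key
WHERE c.key = ANY (ARRAY[15368196 /* , ...11,000 other keys... */])
  AND c.x_key = 1
  AND c.tags @> ARRAY[E'blah'];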

Table x has thousands of rows and table context has 15 million rows. Both tables have a proper index on their "key" column. This is a very simple, clear primary-key lookup. Interestingly, though, as the number of key values grows, in this case to about 11,000 values, prefixing the statement with EXPLAIN (ANALYZE, BUFFERS) yields the following query plan.

Nested Loop  (cost=6923.33..11770.59 rows=1 width=362) (actual time=17128.188..22109.283 rows=10858 loops=1)
  Buffers: shared hit=83494
  ->  Bitmap Heap Scan on context c  (cost=6923.33..11762.31 rows=1 width=329) (actual time=17128.121..22031.783 rows=10858 loops=1)
        Recheck Cond: ((tags @> '{blah}'::text[]) AND (x_key = 1))
        Filter: (key = ANY ('{15368196,(a lot more keys here)}'::integer[]))
        Buffers: shared hit=50919
        ->  BitmapAnd  (cost=6923.33..6923.33 rows=269 width=0) (actual time=132.910..132.910 rows=0 loops=1)
              Buffers: shared hit=1342
              ->  Bitmap Index Scan on context_tags_idx  (cost=0.00..1149.61 rows=15891 width=0) (actual time=64.614..64.614 rows=264777 loops=1)
                    Index Cond: (tags @> '{blah}'::text[])
                    Buffers: shared hit=401
              ->  Bitmap Index Scan on context_x_id_source_type_id_idx  (cost=0.00..5773.47 rows=268667 width=0) (actual time=54.648..54.648 rows=267659 loops=1)
                    Index Cond: (x_id = 1)
                    Buffers: shared hit=941
  ->  Index Scan using x_pkey on x  (cost=0.00..8.27 rows=1 width=37) (actual time=0.003..0.004 rows=1 loops=10858)
        Index Cond: (x.key = 1)
        Buffers: shared hit=32575
Total runtime: 22117.417 ms

At the bottom of the output you can see the query took 22 seconds in total. Those 22 seconds are plainly visible in the CPU usage chart below: almost all of the time is spent in Postgres and the OS, with only a small fraction going to I/O.

[Figure: CPU usage chart]

Zoomed in, these slow queries show up as spikes in CPU utilization. CPU charts are rarely very informative, but in this case they confirm the key point: the database is not waiting on the disk to read data. It is busy sorting, hashing and comparing rows.

The second interesting metric, which tracks these spikes very closely, is the number of rows Postgres "fetches" (in this case rows examined, not returned):

[Figure: chart of rows fetched by Postgres]

Clearly something was methodically plowing through an enormous number of rows: our query.

The problem with Postgres: Bitmap scanning
Here is the row-matching portion of the query plan:

Buffers: shared hit=83494
  ->  Bitmap Heap Scan on context c  (cost=6923.33..11762.31 rows=1 width=329) (actual time=17128.121..22031.783 rows=10858 loops=1)
        Recheck Cond: ((tags @> '{blah}'::text[]) AND (x_key = 1))
        Filter: (key = ANY ('{15368196,(a lot more keys here)}'::integer[]))
        Buffers: shared hit=50919


Postgres bitmap-scans table context. When the number of key values is small, it can efficiently use the index to build a bitmap in memory. When the bitmap gets too large, though, the query optimizer changes the way it looks up the data. In our case, because the list of keys is so large, it falls back to the coarser (system-chosen) approach of fetching candidate rows and then checking each one individually against the key list. It is this "load everything into memory" and "check every row" that eats the time (the Recheck Cond in the plan).
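
How large the in-memory bitmap can grow is bounded by the work_mem setting; once it overflows, the bitmap degrades to page granularity and every row on the candidate pages has to be rechecked. A hedged sketch of how one might inspect this (whether raising the limit actually helps depends on the Postgres version and the workload):

-- work_mem caps the memory available to each sort, hash and bitmap operation.
SHOW work_mem;

-- Raise it for the current session only, then re-run EXPLAIN (ANALYZE, BUFFERS).
-- Newer PostgreSQL releases also print "Heap Blocks: exact=... lossy=..." under
-- the Bitmap Heap Scan node, showing whether the bitmap became lossy and forced
-- per-row rechecking.
SET work_mem = '256MB';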

Fortunately, about 30% of the data is already cached in memory, so it is not as bad as fetching the rows from disk, but the effect on performance is still very noticeable. Remember, the query is very simple: it is a primary-key lookup, so there is no obvious fix short of a dramatic redesign of the database or the application. The pgsql-performance mailing list gave us a lot of help.

Solution

This is another reason we love open source and its community of helpful users. Tom Lane, one of the most prolific open-source programmers around, suggested we try the following:

SELECT c.key,
       c.x_key,
       c.tags,
       x.name
FROM context c
JOIN x
  ON c.x_key = x.key
WHERE c.key = ANY (VALUES (15368196) /* , ...11,000 other keys... */)
  AND c.x_key = 1
  AND c.tags @> ARRAY[E'blah'];

Can you spot the difference? ARRAY has been changed to VALUES.

It turns out that enumerating all the keys with ARRAY[...] fools the query planner, whereas VALUES (...) lets it make full use of the key index. It is a one-line change to the query with no semantic difference.
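
To make the two spellings concrete, here is a small self-contained sketch on a hypothetical table t (made-up names and values, purely for illustration). Both predicates select the same rows; the second hands the planner a relation it can deduplicate and join against the index, which is exactly the shape of the fast plan shown below:

-- Hypothetical toy table, for illustration only.
CREATE TEMP TABLE t (key integer PRIMARY KEY, payload text);
INSERT INTO t SELECT g, 'row ' || g FROM generate_series(1, 100000) AS g;

-- Keys enumerated as one array literal: the planner sees a single opaque filter.
SELECT * FROM t WHERE key = ANY (ARRAY[1, 2, 3]);

-- Keys enumerated as a VALUES list: the planner sees a row source it can
-- hash-aggregate and join against the primary-key index.
SELECT * FROM t WHERE key = ANY (VALUES (1), (2), (3));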

Here is the query plan for the new query; the big differences are on the third and 14th lines.

Nested Loop  (cost=168.22..2116.29 rows=148 width=362) (actual time=22.134..256.531 rows=10858 loops=1)
  Buffers: shared hit=44967
  ->  Index Scan using x_pkey on x  (cost=0.00..8.27 rows=1 width=37) (actual time=0.071..0.073 rows=1 loops=1)
        Index Cond: (id = 1)
        Buffers: shared hit=4
  ->  Nested Loop  (cost=168.22..2106.54 rows=148 width=329) (actual time=22.060..242.406 rows=10858 loops=1)
        Buffers: shared hit=44963
        ->  HashAggregate  (cost=168.22..170.22 rows=200 width=4) (actual time=21.529..32.820 rows=11215 loops=1)
              ->  Values Scan on "*VALUES*"  (cost=0.00..140.19 rows=11215 width=4) (actual time=0.005..9.527 rows=11215 loops=1)
        ->  Index Scan using context_pkey on context c  (cost=0.00..9.67 rows=1 width=329) (actual time=0.015..0.016 rows=1 loops=11215)
              Index Cond: (c.key = "*VALUES*".column1)
              Filter: ((c.tags @> '{blah}'::text[]) AND (c.x_id = 1))
              Buffers: shared hit=44963
Total runtime: 263.639 ms

Query time dropped from 22,000 ms to about 200 ms: a 100x improvement from a single line of code.
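
A quick way to confirm this kind of improvement from a psql session is client-side timing; a minimal sketch, with the key lists abbreviated as in the queries above:

\timing on
-- psql now prints "Time: ... ms" after every statement; run both forms and compare.

-- Original form:
SELECT c.key, c.x_key, c.tags, x.name
FROM context c
JOIN x ON c.x_key = x.key
WHERE c.key = ANY (ARRAY[15368196 /* , ...11,000 other keys... */])
  AND c.x_key = 1
  AND c.tags @> ARRAY[E'blah'];

-- Rewritten form:
SELECT c.key, c.x_key, c.tags, x.name
FROM context c
JOIN x ON c.x_key = x.key
WHERE c.key = ANY (VALUES (15368196) /* , ...11,000 other keys... */)
  AND c.x_key = 1
  AND c.tags @> ARRAY[E'blah'];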

The new query in production
The deploy that shipped the change:

[Figure: the code deploy]

It left the database looking much happier and more relaxed:

[Figure: database metrics after the deploy]

Third-party tools
Slow Postgres queries are rare, but nobody wants to be among the unlucky 0.1% who hit them. To verify the impact of the query change right away, we relied on Datadog to confirm that the modification did what we expected.

If you want to see the impact of your own Postgres query changes, it only takes a few minutes to sign up for a free Datadog account.

English Original: 100x faster Postgres performance by changing 1 line
