Modifying one line of SQL code improves performance by 100 times _ PostgreSQL

Datadog changed one inconspicuous line of PostgreSQL, turning ANY (ARRAY [...]) into ANY (VALUES (...)), and query time dropped from 20 s to 0.2 s. We started out using EXPLAIN ANALYZE to optimize the query; later, the Postgres community became a great help in learning and improving, and our performance gained enormously.

  Cause

Datadog provides monitoring services for IT and development teams. Earlier this week, we hit a performance problem in one of our databases: a large number of multi-key lookups against a fairly small table. 99.9% of these queries were fast. In a tiny fraction of cases, though, looking up the tags of a performance metric took 20 s, which is how long a user would sit in front of the browser waiting for the graph editor to respond. Even at 0.1%, that user experience is obviously terrible, so we dug into our monitoring to find out why it was so slow.

  Query and plan

The result was surprising: the culprit was the following simple query:

SELECT c.key,
       c.x_key,
       c.tags,
       x.name
FROM context c
JOIN x
  ON c.x_key = x.key
WHERE c.key = ANY (ARRAY[15368196, -- 11,000 other keys --])
  AND c.x_key = 1
  AND c.tags @> ARRAY[E'blah'];

Table x has thousands of rows, and table context has 15 million. The "key" columns of both tables are indexed primary keys, so this should amount to a simple primary-key lookup. Interestingly, though, the query got slower as the number of keys grew; with around 11,000 keys here, we prepended EXPLAIN (ANALYZE, BUFFERS) to the query, whose filter matches the key column against the values in the array, to see where the time was going.
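For reference, capturing such a plan just means prepending EXPLAIN (ANALYZE, BUFFERS) to the query; ANALYZE actually executes the statement and reports real timings, and BUFFERS adds buffer hit/read counts. This is a sketch of the invocation, with the key list elided just as above:

-- ANALYZE runs the query for real; BUFFERS reports shared-buffer usage.
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.key,
       c.x_key,
       c.tags,
       x.name
FROM context c
JOIN x
  ON c.x_key = x.key
WHERE c.key = ANY (ARRAY[15368196]) -- 11,000 other keys elided
  AND c.x_key = 1
  AND c.tags @> ARRAY[E'blah'];

Here is the plan it produced.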

Nested Loop  (cost=6923.33..11770.59 rows=1 width=362) (actual time=17128.188..22109.283 rows=10858 loops=1)
  Buffers: shared hit=83494
  ->  Bitmap Heap Scan on context c  (cost=6923.33..11762.31 rows=1 width=329) (actual time=17128.121..22031.783 rows=10858 loops=1)
        Recheck Cond: ((tags @> '{blah}'::text[]) AND (x_key = 1))
        Filter: (key = ANY ('{15368196,(a lot more keys here)}'::integer[]))
        Buffers: shared hit=50919
        ->  BitmapAnd  (cost=6923.33..6923.33 rows=269 width=0) (actual time=132.910..132.910 rows=0 loops=1)
              Buffers: shared hit=1342
              ->  Bitmap Index Scan on context_tags_idx  (cost=0.00..1149.61 rows=15891 width=0) (actual time=64.614..64.614 rows=264777 loops=1)
                    Index Cond: (tags @> '{blah}'::text[])
                    Buffers: shared hit=401
              ->  Bitmap Index Scan on context_x_id_source_type_id_idx  (cost=0.00..5773.47 rows=268667 width=0) (actual time=54.648..54.648 rows=267659 loops=1)
                    Index Cond: (x_id = 1)
                    Buffers: shared hit=941
  ->  Index Scan using x_pkey on x  (cost=0.00..8.27 rows=1 width=37) (actual time=0.003..0.004 rows=1 loops=10858)
        Index Cond: (x.key = 1)
        Buffers: shared hit=32575
Total runtime: 22117.417 ms

This query took a full 22 s. We can break those 22 s down intuitively: almost all of the time was spent on the CPU, while disk I/O accounted for only a tiny fraction.

At the lowest level, these queries showed up as spikes of CPU utilization. The key point we wanted to confirm: the database was not waiting on disk reads; it was busy sorting, hashing, and comparing rows.
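As a rough illustration of this kind of check (a generic query for this write-up, not Datadog's exact tooling), the pg_stat_statements extension can show whether a statement's time goes to CPU or to disk, provided track_io_timing is enabled:

-- Assumes the pg_stat_statements extension is installed and
-- track_io_timing = on; column names vary slightly across Postgres
-- versions (total_exec_time was total_time before version 13).
SELECT query,
       total_exec_time,   -- overall execution time, in ms
       blk_read_time,     -- time spent reading blocks from disk, in ms
       shared_blks_hit,   -- blocks served from shared buffers
       shared_blks_read   -- blocks that had to be read in
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- A high total_exec_time with blk_read_time near zero means the time
-- is going to CPU work (sorting, hashing, row comparisons), not I/O.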

(Chart: rows fetched per query, bucketed around the latency spikes.)

Clearly, our queries behaved well the vast majority of the time.

  Postgres performance problem: Bitmap Heap Scan

The rows_fetched metric was consistent with this part of the plan:

Buffers: shared hit=83494
  ->  Bitmap Heap Scan on context c  (cost=6923.33..11762.31 rows=1 width=329) (actual time=17128.121..22031.783 rows=10858 loops=1)
        Recheck Cond: ((tags @> '{blah}'::text[]) AND (x_key = 1))
        Filter: (key = ANY ('{15368196,(a lot more keys here)}'::integer[]))
        Buffers: shared hit=50919

Postgres was using a Bitmap Heap Scan to read data from table context. When the number of keys is small, it can use the index to build a bitmap in memory very efficiently. If the bitmap would get too large, though, the query planner switches to a different way of looking up the data. In our case the number of keys to check was huge, so the planner fetched the candidate rows matching x_key and tags and then checked each of those rows against the key array. All of that "load into memory" and "check each row" work takes a lot of time.

Fortunately, 30% of the table was cached in RAM, so fetching candidate rows did not behave too badly, but the performance impact was still very noticeable. The query was as simple as it gets, a plain key lookup, so there was no obvious database or application restructuring to reach for, and it was hard to find a simple fix on our own. In the end, we asked the community for help on the pgsql-performance mailing list.
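As an aside, one way to estimate how much of a table sits in Postgres's shared buffers is the pg_buffercache contrib extension. This query is our illustration, not Datadog's, and it cannot see the OS page cache, so the true in-RAM fraction is usually higher:

-- pg_buffercache ships with Postgres as a contrib extension.
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Approximate fraction of the context table resident in shared buffers.
-- relpages is the planner's page-count estimate, refreshed by ANALYZE.
SELECT count(*) * 100.0
       / (SELECT relpages FROM pg_class WHERE relname = 'context')
       AS pct_in_shared_buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE c.relname = 'context'
  AND b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database());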

  Solution

Open source came to our rescue. Tom Lane, one of the most experienced and prolific Postgres contributors, suggested we try this:

SELECT c.key,
       c.x_key,
       c.tags,
       x.name
FROM context c
JOIN x
  ON c.x_key = x.key
WHERE c.key = ANY (VALUES (15368196), -- 11,000 other keys --)
  AND c.x_key = 1
  AND c.tags @> ARRAY[E'blah'];

Can you spot the difference? ARRAY has been replaced with VALUES.

We had been using ARRAY[...] to list all of the lookup keys, but that fools the query planner. VALUES (...) lets the optimizer make full use of the index on the key column. It is a change of a single line with no semantic difference.
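In skeleton form (generic identifiers, not Datadog's schema, and three keys instead of 11,000), the two styles look like this. The VALUES list behaves as a small inline table that the planner can deduplicate and then use to drive index probes, which is exactly the Values Scan -> HashAggregate -> Index Scan pattern visible in the plan below:

-- ARRAY form: the whole key list is one big array literal, which the
-- planner may end up applying as a per-row Filter on a broader scan.
SELECT *
FROM some_table
WHERE id = ANY (ARRAY[1, 2, 3]);

-- VALUES form: each key is a row of an inline table; the planner
-- deduplicates the rows and probes the primary-key index once per key.
SELECT *
FROM some_table
WHERE id = ANY (VALUES (1), (2), (3));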

Here is the plan for the new query. Notice the difference at lines 3 and 14:

Nested Loop  (cost=168.22..2116.29 rows=148 width=362) (actual time=22.134..256.531 rows=10858 loops=1)
  Buffers: shared hit=44967
  ->  Index Scan using x_pkey on x  (cost=0.00..8.27 rows=1 width=37) (actual time=0.071..0.073 rows=1 loops=1)
        Index Cond: (id = 1)
        Buffers: shared hit=4
  ->  Nested Loop  (cost=168.22..2106.54 rows=148 width=329) (actual time=22.060..242.406 rows=10858 loops=1)
        Buffers: shared hit=44963
        ->  HashAggregate  (cost=168.22..170.22 rows=200 width=4) (actual time=21.529..32.820 rows=11215 loops=1)
              ->  Values Scan on "*VALUES*"  (cost=0.00..140.19 rows=11215 width=4) (actual time=0.005..9.527 rows=11215 loops=1)
        ->  Index Scan using context_pkey on context c  (cost=0.00..9.67 rows=1 width=329) (actual time=0.015..0.016 rows=1 loops=11215)
              Index Cond: (c.key = "*VALUES*".column1)
              Filter: ((c.tags @> '{blah}'::text[]) AND (c.x_id = 1))
              Buffers: shared hit=44963
Total runtime: 263.639 ms

From roughly 22,000 ms down to about 260 ms: only one line of code was modified, and the query became about 100 times faster.

  The new query in production

After the new query was deployed:

(Chart: query response times before and after the deployment.)

The database looks much healthier.

The 20 s slow queries are gone for good. But who wants to be tortured by that unlucky 0.1% ever again? We used Datadog to verify that the fix worked, and it let us watch the effect in real time. If you want to see what a change does to your Postgres query speed, give Datadog a try.

Source: Datadog
