PostgreSQL performance tuning considerations, prompted by an ipvs process consuming high CPU

Source: Internet
Author: User

Because the system is still in the development stage, the PostgreSQL parameters were never tuned; the default configuration from installation is in use.
Nothing had been abnormal before, but a few days ago CPU usage suddenly increased.
Checking the processes showed one ipvs process using more than 80% CPU, and it stayed that high.

At first I assumed the configuration needed to be changed, but the defaults are in fact reasonably tuned, and since the system is still in development the data volume is small.
After further analysis, I concluded that the following aspects should be examined and ruled out one by one:

1. SQL queries
Check whether indexes exist for the columns being retrieved; create an index, or even a joint (multi-column) index, for every field that is filtered on.
Use the full range of index types, including expression indexes and partial indexes.
Replace multiple INSERT statements with a single COPY statement.
Combine multiple SQL statements into one transaction to reduce transaction-commit overhead.
Use CLUSTER when extracting many records through an index (it physically reorders the table to match the index).
Use LIMIT when retrieving only part of a query result.
Use prepared queries (PREPARE/EXECUTE) to avoid re-planning repeated statements.
Use ANALYZE to keep the optimizer statistics accurate.
Run VACUUM (or pg_autovacuum) regularly.
Drop indexes before making a large amount of data changes, then re-create them afterwards.
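The checklist above can be sketched with a few SQL statements. The `orders` table, its columns, and the file path in COPY are hypothetical names used only for illustration; exact syntax varies slightly between PostgreSQL versions.

```sql
-- Hypothetical table for illustration only.
CREATE TABLE orders (id serial PRIMARY KEY, customer_id int,
                     created_at timestamp, status text);

-- Index the columns you filter on, including a joint (multi-column) index:
CREATE INDEX idx_orders_customer     ON orders (customer_id);
CREATE INDEX idx_orders_cust_created ON orders (customer_id, created_at);

-- Expression index and partial index:
CREATE INDEX idx_orders_status_lower ON orders (lower(status));
CREATE INDEX idx_orders_open ON orders (created_at) WHERE status = 'open';

-- COPY instead of many INSERTs for bulk loads (path is an assumption):
COPY orders (customer_id, created_at, status) FROM '/tmp/orders.csv' CSV;

-- Batch statements into one transaction to amortize commit overhead:
BEGIN;
INSERT INTO orders (customer_id, status) VALUES (1, 'open');
INSERT INTO orders (customer_id, status) VALUES (2, 'open');
COMMIT;

-- LIMIT when only part of the result is needed:
SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;

-- Prepared query, planned once and executed many times:
PREPARE recent_orders (int) AS
    SELECT * FROM orders WHERE customer_id = $1 ORDER BY created_at DESC;
EXECUTE recent_orders (1);

-- Keep planner statistics fresh and reclaim dead rows:
ANALYZE orders;
VACUUM orders;
```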

2. Application-side checks
Check whether a connection pool is used; if not, introduce one as soon as possible.
Then check the program again: after each connection is used, is it actually returned to the pool?
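The pool discipline above can be sketched in Python. This is a minimal illustration of the acquire/release pattern, not a real driver: a plain queue stands in for database connections so the example stays self-contained, and all names here are hypothetical.

```python
# Minimal sketch of connection-pool discipline. A queue.Queue stands in for
# real PostgreSQL connections; a real pool would open each connection once
# and reuse it, avoiding per-query connect overhead on the server.
import queue


class ConnectionPool:
    def __init__(self, make_conn, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(make_conn())  # open all connections up front

    def getconn(self, timeout=5):
        # Blocks until a connection is free, bounding server-side load.
        return self._pool.get(timeout=timeout)

    def putconn(self, conn):
        self._pool.put(conn)


# Stand-in "connections" are plain objects for demonstration.
pool = ConnectionPool(lambda: object(), size=2)

conn = pool.getconn()
try:
    pass  # run queries here
finally:
    pool.putconn(conn)  # ALWAYS return the connection, even on error
```

The try/finally is the part worth copying: a connection that is checked out but never returned is exactly the leak the checklist warns about.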

3. Server parameter configuration
Many settings in the configuration file postgresql.conf affect performance:
shared_buffers: the most important parameter. PostgreSQL exchanges data with the kernel/disk through shared_buffers,
so as much data as possible should stay cached there. Roughly 10% of physical RAM is a reasonable setting, e.g. 50000 pages (about 400 MB at the default 8 KB page size).
work_mem: called sort_mem before PostgreSQL 8.0. When PostgreSQL sorts,
it uses the size of work_mem to decide whether to split a large result set into several smaller temporary files, each limited to roughly work_mem in size.
Splitting obviously slows the sort down, so increasing work_mem helps speed up sorting. Usually set to 2%-4% of physical RAM, depending on the size of the result sets being sorted, e.g. 81920 KB (80 MB).
effective_cache_size: the maximum cache PostgreSQL assumes it can use.
On a dedicated PostgreSQL server this should be large; with 4 GB of memory, about 3.5 GB (437500 pages) is reasonable.
maintenance_work_mem: memory used only by CREATE INDEX, VACUUM, and similar commands. They run infrequently but consume a lot of resources,
so give them plenty of memory so they finish as quickly as possible, e.g. 512 MB (524288 KB).
max_connections: the main goal is to keep max_connections * work_mem from exceeding the physical memory size.
For example, if work_mem is set to 2% of physical memory and, in the extreme case, 50 queries sort simultaneously, all the memory is consumed, swap is generated, and system performance drops sharply.
Of course, with 4 GB of memory the probability of 50 such large queries running at the same time is small, but you still need to understand the relationship between max_connections and work_mem.
See the http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html and http://www.varlena.com/varlena/generalbits/tidbits/perf.html.
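The values discussed above can be collected into one postgresql.conf sketch for a hypothetical dedicated 4 GB server. These numbers are the article's illustrative ratios, not measured recommendations, and the units (8 KB pages vs. KB) follow older PostgreSQL releases that took raw counts rather than unit suffixes.

```
# postgresql.conf -- illustrative values for a dedicated 4 GB server
shared_buffers = 50000           # 8 KB pages: ~400 MB, ~10% of RAM
work_mem = 81920                 # KB: 80 MB per sort; multiply by max_connections
effective_cache_size = 437500    # 8 KB pages: ~3.5 GB on a dedicated server
maintenance_work_mem = 524288    # KB: 512 MB for CREATE INDEX / VACUUM
max_connections = 50             # keep max_connections * work_mem below RAM
```

Newer releases accept unit suffixes (e.g. shared_buffers = 400MB), which is less error-prone than raw page counts.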

4. Hardware Selection
Since most computer hardware is interchangeable, people tend to assume that all computer hardware is of the same quality.
Actually it is not: ECC RAM (memory with parity checking), SCSI disks, and high-quality motherboards are more reliable and perform better than cheaper products.
PostgreSQL can run on almost any hardware, but if reliability and performance are important to your system, you need to thoroughly study your hardware configurations.
The impact of computer hardware on performance is discussed at http://candle.pha.pa.us/main/writings/pgsql/hw_performance/index.html and http://www.powerpostgresql.com/PerfList/.

5. Why do I receive the message "Sorry, too many clients already" when trying to connect?
This indicates that you have reached the default limit of 100 concurrent backend processes.
Increase the max_connections value in postgresql.conf to allow more concurrent postmaster backends. After the change, you need to restart the postmaster.
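The fix amounts to one line in postgresql.conf (the value 200 below is an arbitrary example; raising the limit also raises the shared-memory and work_mem footprint, as discussed in section 3):

```
# postgresql.conf -- raise the concurrent-connection cap (restart required)
max_connections = 200
```

After editing the file, restart the server, e.g. with pg_ctl restart pointed at your data directory.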
