Troubleshooting a PostgreSQL process that consumes too much CPU

Source: Internet
Author: User
Tags: create index, postgresql

Because this is still the development phase, Postgres has no custom parameters configured; everything uses the default configuration from installation.
It ran fine in the past, but CPU usage has suddenly risen over the last few days.
Reviewing the processes showed one postgres process consistently consuming more than 80% CPU.
At first I assumed the configuration needed changing, but the defaults are basically adequate, this is development, and the data volume is small.
After further analysis, I concluded the problem should be considered from the following aspects:
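Before tuning anything, it helps to match the hungry OS process to the query it is running. A hedged sketch using the standard pg_stat_activity view (column names vary by version: on PostgreSQL 9.2+ the columns are pid and query; older releases call them procpid and current_query):

```sql
-- Find the backend whose pid matches the CPU-hungry process shown by top/ps,
-- and see which query it is executing.
SELECT pid, usename, datname, state, query
FROM pg_stat_activity
ORDER BY pid;
```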
1: SQL query tuning
Check whether indexes exist for the data being retrieved; index the columns used in searches wherever possible, including composite (multi-column) indexes;
Create indexes, including expression indexes and partial indexes;
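A sketch of the index types mentioned above (the table and column names are made up for illustration):

```sql
-- Plain index on a frequently searched column
CREATE INDEX idx_users_email ON users (email);

-- Composite (multi-column) index for queries filtering on both columns
CREATE INDEX idx_orders_user_date ON orders (user_id, created_at);

-- Expression index: speeds up WHERE lower(email) = '...'
CREATE INDEX idx_users_email_lower ON users (lower(email));

-- Partial index: only rows matching the predicate are indexed,
-- keeping the index small when most queries target a subset
CREATE INDEX idx_orders_open ON orders (created_at) WHERE status = 'open';
```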
Replace multiple INSERT statements with a single COPY statement;
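For bulk loading, one COPY is far cheaper than many single-row INSERTs. A hypothetical example (the table and file path are illustrative; on servers older than 9.0 the option syntax is `WITH CSV HEADER` instead):

```sql
-- Instead of thousands of statements like
--   INSERT INTO measurements (sensor_id, reading, taken_at) VALUES (...);
-- load the whole file in one statement:
COPY measurements (sensor_id, reading, taken_at)
FROM '/tmp/measurements.csv' WITH (FORMAT csv, HEADER true);
```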
Group multiple SQL statements into a single transaction to reduce the cost of committing each one;
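Wrapping several statements in one transaction pays the commit overhead (including the WAL flush) once instead of once per statement. Sketch:

```sql
BEGIN;
INSERT INTO log_entries (msg) VALUES ('first');
INSERT INTO log_entries (msg) VALUES ('second');
INSERT INTO log_entries (msg) VALUES ('third');
COMMIT;  -- one commit cost instead of three
```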
Use CLUSTER when extracting many records via an index;
Use LIMIT when only part of a query result is needed;
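The two tips above, sketched with illustrative names (the `CLUSTER table USING index` form is 8.1+ syntax; older releases use `CLUSTER index ON table`):

```sql
-- Physically reorder the table to match an index, so index range
-- scans touch fewer heap pages
CLUSTER orders USING idx_orders_user_date;

-- Fetch only the rows actually needed instead of the whole result set
SELECT * FROM orders ORDER BY created_at DESC LIMIT 20;
```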
Use prepared queries (PREPARE);
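A prepared query is parsed and planned once, then executed repeatedly with different parameters:

```sql
-- Plan once
PREPARE get_user (int) AS
    SELECT * FROM users WHERE id = $1;

-- Execute many times, paying only execution cost
EXECUTE get_user(42);
EXECUTE get_user(43);

DEALLOCATE get_user;
```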
Use ANALYZE to maintain accurate optimizer statistics;
Run VACUUM regularly, or enable autovacuum (pg_autovacuum on older releases);
When making bulk data changes, drop the indexes first, then rebuild them after the load.
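The maintenance and bulk-load advice above, sketched with illustrative names:

```sql
-- Reclaim dead rows and refresh planner statistics in one pass
VACUUM ANALYZE orders;

-- For a large load: drop the index, load the data, rebuild the index.
-- Building one index over the full table is cheaper than maintaining
-- it row by row during the load.
DROP INDEX idx_orders_user_date;
COPY orders FROM '/tmp/orders.csv' WITH (FORMAT csv);
CREATE INDEX idx_orders_user_date ON orders (user_id, created_at);
```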
2: Application-side checks
Check whether the application uses a connection pool; if not, adopt one as soon as possible;
Then check that connections are returned to the pool after use.
3: Server parameter configuration
Many settings in postgresql.conf can affect performance:
shared_buffers: the most important parameter; PostgreSQL exchanges data with the kernel/disk through shared_buffers.
It should therefore be large enough to cache more data in shared_buffers; about 10% of actual RAM is usually reasonable, e.g. 50000 (8KB pages, about 400MB).
work_mem: called sort_mem before PostgreSQL 8.0. When PostgreSQL performs a sort,
a result set larger than work_mem is split into several work_mem-sized temporary files,
which obviously slows the sort down. Increasing work_mem therefore speeds up sorting. Typically set to 2%-4% of actual RAM, depending on the size of the result sets you sort, e.g. 81920 (KB, i.e. 80MB).
effective_cache_size: the maximum amount of cache the planner assumes PostgreSQL can use.
On a dedicated Pgsql server this number should be large; with 4GB of RAM it can be set to 3.5GB (437500 8KB pages).
maintenance_work_mem: this memory is used only by commands such as CREATE INDEX and VACUUM, so it is used infrequently, but those commands consume considerable resources.
To let them finish as quickly as possible, give maintenance_work_mem plenty of memory, e.g. 512MB (524288 KB).
max_connections: the main point is to prevent max_connections * work_mem from exceeding actual memory.
For example, if work_mem is set to 2% of actual memory, then in the extreme case 50 concurrent queries all sorting at once would consume all of it, causing swapping and a sharp drop in system performance.
Of course, with 4GB of RAM the odds of 50 such large queries running simultaneously are small; just be clear about the relationship between max_connections and work_mem.
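Putting the numbers above together, a postgresql.conf fragment for a dedicated 4GB server might look like this (a sketch under this article's own assumptions, not a universal recommendation; on these older releases shared_buffers and effective_cache_size are in 8KB pages, work_mem and maintenance_work_mem in KB):

```
shared_buffers = 50000            # ~400MB, ~10% of RAM
work_mem = 81920                  # 80MB per sort operation, ~2% of RAM
effective_cache_size = 437500     # ~3.5GB assumed OS + PG cache
maintenance_work_mem = 524288     # 512MB for CREATE INDEX / VACUUM
max_connections = 100             # keep max_connections * work_mem within RAM
```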
Annotated explanations of the parameters are available at http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html and http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html.
4: Choice of hardware
Because most computer hardware is compatible, people tend to assume all computer hardware is of equal quality.
In fact, ECC RAM, SCSI disks, and quality motherboards are more reliable and perform better than bargain components.
PostgreSQL runs on almost any hardware, but if reliability and performance matter to your system, examine your hardware configuration thoroughly.
For the effects of hardware on performance, see http://candle.pha.pa.us/main/writings/pgsql/hw_performance/index.html and http://www.powerpostgresql.com/PerfList/.
5: Why do I receive a "Sorry, too many clients already" message when I try to connect?
This means you have reached the default limit of 100 concurrent backend processes.
Increase the number of concurrent backends the postmaster allows by raising the max_connections value in postgresql.conf, then restart the postmaster.
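Concretely (the data directory path here is illustrative; your packaging may provide its own restart command):

```sql
-- Check the current limit from any session
SHOW max_connections;

-- Then edit postgresql.conf, e.g.
--   max_connections = 200
-- and restart, e.g.
--   pg_ctl restart -D /path/to/data
```

Remember the max_connections * work_mem caveat from section 3 before raising the limit.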
