Re: Optimizing Database High CPU

From Jeff Janes
Subject Re: Optimizing Database High CPU
Date
Msg-id CAMkU=1zuUcrx9z8_SmPnOJBX4v=wV2q9qoYFjJP0DcjruxO4Hg@mail.gmail.com
In response to Optimizing Database High CPU (Scottix <scottix@gmail.com>)
Responses Re: Optimizing Database High CPU (Michael Lewis <mlewis@entrata.com>)
List pgsql-general
On Wed, Feb 27, 2019 at 2:07 PM Scottix <scottix@gmail.com> wrote:
Hi, we are running a PostgreSQL 9.4.18 database and we are noticing
high CPU usage. Nothing is critical at the moment, but if we were to
scale up more of what we are doing, I feel we are going to run into
issues.

9.4 is old. A lot of improvements have been made since then, including around contention issues, such as replacing some uses of spinlocks with atomic operations.

You would be better off putting time into upgrading rather than into worrying about performance issues on something that will soon be end-of-life anyway.


It is a 2 x 6 core machine, 128GB ram, Raid 10 HDD

The iostat metrics for the HDD look minimal < 10% util
Available memory seems to be good.

The CPU utilization is what is bothering me:
user  5-7%
sys    50-70% - seems high
wa    <0.5%

So trying to troubleshoot possible high cpu:
Number of concurrent connections averages 50 to 100, which seems high,
although we max out at 200.

If those 50-100 connections are all active at once, yes, that is high. They can easily spend more time fighting each other over LWLocks, spinlocks, or cache lines than doing useful work. This can be exacerbated when you have multiple sockets rather than all cores in a single socket. And these problems are likely to present as high sys times.

Perhaps you can put up a connection pooler that allows 100 connections to all think they are connected at once but forces only 12 or so to actually be active at any one time, making the others transparently queue. A minimal sketch of that setup follows.
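For example, with PgBouncer (a minimal sketch, not a recommendation of exact values; the database name "mydb", the ports, the auth_file path, and the pool size of 12 are illustrative placeholders, not anything from your setup):

    ; pgbouncer.ini -- minimal sketch; names, paths, and sizes are placeholders
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb pool_size=12

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling hands a server connection out per transaction
    pool_mode = transaction
    max_client_conn = 200

With pool_mode = transaction, up to 200 clients can stay connected while only 12 server connections do work at any moment; the rest queue transparently.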

 
No long running queries
Streaming replication to backup server
High update tables - we have about 4 tables that have a high volume of updates

Is it a few transactions updating a lot of rows each, or many transactions updating a few rows each?
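You can get a rough feel for that from the statistics views (a sketch; these are the standard views and columns, but you have to interpret the numbers against your own workload):

    -- per-table update volume, and how many of those updates were HOT
    SELECT relname, n_tup_upd, n_tup_hot_upd,
           round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct
    FROM pg_stat_user_tables
    ORDER BY n_tup_upd DESC
    LIMIT 5;

    -- rows updated vs. transactions committed, database-wide
    SELECT tup_updated, xact_commit,
           round(tup_updated::numeric / nullif(xact_commit, 0), 2) AS upd_per_xact
    FROM pg_stat_database
    WHERE datname = current_database();

A high upd_per_xact suggests a few transactions touching many rows each; a value near 1 suggests many small transactions.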
 
The high update rate is what I am thinking is causing the issue, and I
found a suggestion to possibly set fillfactor to a lower default.

I don't think that is promising. I wouldn't expect high sys time if this were the problem. And with a high rate of updates (unless each update essentially touches every row in the table), I would expect the table to reach a steady state of tuple density. Basically, a too-high fillfactor will fix itself naturally over time; it might just take a while. If your system has been running for a while, it has probably already arrived at that steady state. You can use the pg_freespacemap extension to see how the free space is spread around your table's blocks.
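For example (a sketch; 'my_hot_table' is a placeholder for one of your four high-update tables):

    CREATE EXTENSION IF NOT EXISTS pg_freespacemap;

    -- summarize the free space recorded for each block of the table
    SELECT count(*) AS blocks,
           round(avg(avail), 1) AS avg_free_bytes,
           max(avail) AS max_free_bytes
    FROM pg_freespace('my_hot_table');

If avg_free_bytes has settled at a healthy level, lowering fillfactor is unlikely to buy you anything.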
 
Are there any statistics I could run to see if a setting change would help?

I'd probably start with the pg_stat_activity view's "state" column to see how many of your connections are active at once, and its "wait_event_type" and "wait_event" columns to see what they think they are waiting on (though those last two columns aren't present until 9.6).
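For example (a sketch; note that on 9.4 the only wait information available is the boolean "waiting" column, which covers heavyweight locks only):

    -- how many backends are actually doing something at once
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;

    -- on 9.4: active backends blocked on a heavyweight lock
    SELECT waiting, count(*)
    FROM pg_stat_activity
    WHERE state = 'active'
    GROUP BY waiting;

    -- on 9.6+: what the active backends are waiting on
    SELECT wait_event_type, wait_event, count(*)
    FROM pg_stat_activity
    WHERE state = 'active'
    GROUP BY 1, 2
    ORDER BY 3 DESC;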

Cheers,

Jeff
