Thread: CPU utilization vs. IO wait, shared buffers?

CPU utilization vs. IO wait, shared buffers?

From: "Oliver Johnson"
Date: Thursday, 30 October 2008

Hi everybody,

I am running a bake/load test and I am seeing sudden, daily shifts
from CPU utilization to IO wait.  The load harness has been running
for 3 weeks and should be putting a uniform load on the application.
The application processes data on a daily basis and a sawtooth CPU
pattern on the database is expected as more values are added
throughout the day and processing resets with the next day.  Each day,
I see the CPU utilization climb as expected until a shift occurs and
it spends the rest of the day primarily in IO wait.

Looking at pg_statio_user_tables, I can see that during the CPU
intense timeframe, most of the results come from the buffer cache
(hits).  During the IO wait, most of the results are being read in
(misses).  Examples from each timeframe (CPU/IO) are included below.
For each sample, I issued pg_stat_reset(), waited briefly, and then
queried pg_statio_user_tables.

*during CPU-intense timeframe*
db=# select * from pg_statio_user_tables;
 relid | schemaname | relname | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit | toast_blks_read | toast_blks_hit | tidx_blks_read | tidx_blks_hit
-------+------------+---------+----------------+---------------+---------------+--------------+-----------------+----------------+----------------+---------------
 16612 | public     | tablea  |              1 |       1346782 |             1 |        55956 |               0 |              0 |              0 |             0
 16619 | public     | tableb  |              0 |           579 |             0 |         1158 |                 |                |                |

*during IO wait timeframe*
db=# select * from pg_statio_user_tables;
 relid | schemaname | relname | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit | toast_blks_read | toast_blks_hit | tidx_blks_read | tidx_blks_hit
-------+------------+---------+----------------+---------------+---------------+--------------+-----------------+----------------+----------------+---------------
 16612 | public     | tablea  |         244146 |           594 |          4885 |         3703 |               0 |              0 |              0 |             0
 16619 | public     | tableb  |            418 |           589 |           432 |         1613 |                 |                |                |
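
For reference, the hit ratio behind those samples can be computed
directly from the same view.  A minimal query sketch (it assumes
nothing beyond the pg_statio_user_tables columns shown above):

SELECT relname,
       heap_blks_hit,
       heap_blks_read,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS heap_hit_ratio
FROM pg_statio_user_tables
ORDER BY relname;

In the CPU-intense sample, tablea's heap hit ratio is essentially 1.0;
in the IO wait sample it falls to roughly 594 / (594 + 244146), i.e.
well under 1%.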



Another thing to note, we have VACUUM ANALYZE running on an hourly
interval and the switch from CPU to IO wait appears to always coincide
with a vacuum.

What might cause this shift?

I have tried adjusting buffer_cache from 512 MB to 1024 MB, but this
did not appear to have an impact.

I also tried upping the work_mem from 1MB to 10MB, and this did not
appear to have an impact either.

Any ideas?  Thanks for your help!

Oliver


We're running Postgresql 8.2.9

Re: CPU utilization vs. IO wait, shared buffers?

From: Alan Hodgson
Date:

On Thursday 30 October 2008, "Oliver Johnson" <oliverjjohnson@gmail.com>
wrote:
> Another thing to note, we have VACUUM ANALYZE running on an hourly
> interval and the switch from CPU to IO wait appears to always coincide
> with a vacuum.
>
> What might cause this shift?

The extra disk access caused by vacuum? That seems pretty obvious.

Use auto-vacuum. There's no reason to vacuum your entire database every hour
(doing so reads from disk the entirety of every table and index, and
generates some write activity).
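
If you do switch, note that on 8.2 autovacuum is off by default and
needs the row-level statistics collector.  A postgresql.conf sketch
(the values are illustrative starting points, not tuned for your
workload):

autovacuum = on
stats_start_collector = on         # required for autovacuum; on by default
stats_row_level = on               # required for autovacuum; off by default in 8.2
autovacuum_naptime = 60            # seconds between checks for tables needing work
autovacuum_vacuum_cost_delay = 20  # milliseconds; throttles autovacuum I/O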

--
Alan

Re: CPU utilization vs. IO wait, shared buffers?

From: "Scott Marlowe"
Date: Thursday, 30 October 2008

On Thu, Oct 30, 2008 at 3:41 PM, Oliver Johnson
<oliverjjohnson@gmail.com> wrote:

> Another thing to note, we have VACUUM ANALYZE running on an hourly
> interval and the switch from CPU to IO wait appears to always coincide
> with a vacuum.

Why are you not using autovacuum with appropriate wait parameters to
keep it out of your way?  Autovacuum tends to make pretty good
decisions and you can adjust the aggressiveness with which it kicks in
if you need to.

> What might cause this shift?
>
> I have tried adjusting buffer_cache from 512 MB to 1024 MB, but this
> did not appear to have an impact.

Do you mean shared_buffers?  It may well be that larger shared_buffers
aren't going to help if you're dealing with a largely random
transactional load.  That said, 1G of shared_buffers is not that big
nowadays.  I'm assuming by your testing methods that you're on a real
db server with several dozen gigs of RAM...

> I also tried upping the work_mem from 1MB to 10MB, and this did not
> appear to have an impact either.

Look into upping your checkpoint_segments (64 or so is reasonable for
a large production server) and possibly increasing your
checkpoint_completion_target to something closer to 1.0 (0.7 to 0.8)
and see if that helps.
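
For what it's worth, in postgresql.conf that suggestion would look
something like this (ballpark figures only; note that
checkpoint_completion_target was only added in 8.3, so it won't be
available on an 8.2.9 install):

checkpoint_segments = 64               # more WAL between checkpoints
checkpoint_completion_target = 0.8     # 8.3+; spreads checkpoint writes out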

Re: CPU utilization vs. IO wait, shared buffers?

From: "Oliver Johnson"
Date:

On Thu, Oct 30, 2008 at 4:41 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Thu, Oct 30, 2008 at 3:41 PM, Oliver Johnson
> <oliverjjohnson@gmail.com> wrote:
>
>> Another thing to note, we have VACUUM ANALYZE running on an hourly
>> interval and the switch from CPU to IO wait appears to always coincide
>> with a vacuum.
>
> Why are you not using autovacuum with appropriate wait parameters to
> keep it out of your way?  Autovacuum tends to make pretty good
> decisions and you can adjust the aggressiveness with which it kicks in
> if you need to.

Thanks for the quick feedback.  I struggled with autovacuum in a past
life and developed a preference for explicit table-level vacuums.
Also, vacuuming did not originally jump out at me as a culprit, as
there is no significant impact (or indication of duress) during the
early-day vacuums.

You and Alan have brought up some good points, though.  I turned
autovacuum on and increased checkpoint_segments.  I will let it run
overnight and see how things look.

Thanks again.


>
>> What might cause this shift?
>>
>> I have tried adjusting buffer_cache from 512 MB to 1024 MB, but this
>> did not appear to have an impact.
>
> Do you mean shared_buffers?  It may well be that larger shared_buffers
> aren't going to help if you're dealing with a largely random
> transactional load.  That said, 1G of shared_buffers is not that big
> nowadays.  I'm assuming by your testing methods that you're on a real
> db server with several dozen gigs of RAM...
>
>> I also tried upping the work_mem from 1MB to 10MB, and this did not
>> appear to have an impact either.
>
> Look into upping your checkpoint_segments (64 or so is reasonable for
> a large production server) and possibly increasing your
> checkpoint_completion_target to something closer to 1.0 (0.7 to 0.8)
> and see if that helps.
>

Re: CPU utilization vs. IO wait, shared buffers?

From: znmeb@cesmail.net
Date:

Quoting Oliver Johnson <oliverjjohnson@gmail.com>:

> Hi everybody,
>
> I am running a bake/load test and I am seeing sudden, daily shifts
> from CPU utilization to IO wait.  The load harness has been running
> for 3 weeks and should be putting a uniform load on the application.
> The application processes data on a daily basis and a sawtooth CPU
> pattern on the database is expected as more values are added
> throughout the day and processing resets with the next day.  Each day,
> I see the CPU utilization climb as expected until a shift occurs and
> it spends the rest of the day primarily in IO wait.
>
> Looking at pg_statio_user_tables, I can see that during the CPU
> intense timeframe, most of the results come from the buffer cache
> (hits).  During the IO wait, most of the results are being read in
> (misses).  Examples from each timeframe (CPU/IO) are included below.
> For each sample, I issued pg_stat_reset(), waited briefly, and then
> queried pg_statio_user_tables.
>
> *during CPU-intense timeframe*
> db=# select * from pg_statio_user_tables;
>  relid | schemaname | relname | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit | toast_blks_read | toast_blks_hit | tidx_blks_read | tidx_blks_hit
> -------+------------+---------+----------------+---------------+---------------+--------------+-----------------+----------------+----------------+---------------
>  16612 | public     | tablea  |              1 |       1346782 |             1 |        55956 |               0 |              0 |              0 |             0
>  16619 | public     | tableb  |              0 |           579 |             0 |         1158 |                 |                |                |
>
> *during IO wait timeframe*
> db=# select * from pg_statio_user_tables;
>  relid | schemaname | relname | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit | toast_blks_read | toast_blks_hit | tidx_blks_read | tidx_blks_hit
> -------+------------+---------+----------------+---------------+---------------+--------------+-----------------+----------------+----------------+---------------
>  16612 | public     | tablea  |         244146 |           594 |          4885 |         3703 |               0 |              0 |              0 |             0
>  16619 | public     | tableb  |            418 |           589 |           432 |         1613 |                 |                |                |
>
>
>
> Another thing to note, we have VACUUM ANALYZE running on an hourly
> interval and the switch from CPU to IO wait appears to always coincide
> with a vacuum.
>
> What might cause this shift?
>
> I have tried adjusting buffer_cache from 512 MB to 1024 MB, but this
> did not appear to have an impact.
>
> I also tried upping the work_mem from 1MB to 10MB, and this did not
> appear to have an impact either.
>
> Any ideas?  Thanks for your help!
>
> Oliver
>
>
> We're running Postgresql 8.2.9
>

First of all, stop me if you're not running Linux -- that's the only
OS I know. :) Second, if you're not running a fairly recent 2.6 kernel
(2.6.18/RHEL 5 or later), you should probably upgrade, because the
performance stats are better. 2.6.25 is better still.

Next, if you haven't already, install the "sysstat" package. My
recollection is that it does not install by default on most distros.
It should -- go beat up on the distributors. :)
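
For example, the package name is simply "sysstat" on the common
distros:

$ sudo yum install sysstat        # RHEL / CentOS
$ sudo apt-get install sysstat    # Debian / Ubuntu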

Now you have "iostat" installed. That will give you detailed
information on both processor and I/O activity. Use the command

$ iostat -cdmtx 10 999999 | tee iostat.log   # CPU + extended device stats, in MB, timestamped, every 10 seconds

This will sample the processor(s), all the devices, and on 2.6.25 or
later kernels, all the *partitions*. This last is important if you
have things in different filesystems.

What you will probably see is samples where high I/O wait correlates
with high levels of read activity (reads per second and read megabytes
per second) and high device utilization. That means you are reading
data from disk and the processors are waiting for it. What can you do
about it?

1. Add RAM. This will let Linux keep more data in the page cache, so it
has to read from disk less often.
2. Experiment with the four I/O schedulers. You can change them at run
time (as "root"); see the sketch below.
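
A quick sketch of the second point (replace "sda" with whichever device
holds your data directory):

$ cat /sys/block/sda/queue/scheduler       # active scheduler is shown in [brackets]
noop anticipatory deadline [cfq]
$ echo deadline | sudo tee /sys/block/sda/queue/scheduler   # takes effect immediately, lasts until reboot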

I've put a little bit of this online -- it's fairly detailed, and
it's not PostgreSQL-specific, but you can get an indication of the
concepts. By the way, I am working on some scripts that will actually
integrate this type of monitoring and analysis with PostgreSQL. What
they will do is load the raw Linux data into a PostgreSQL database and
provide analysis queries and other tools. But for now, see if this
makes any sense to you:

http://cougar.rubyforge.org/svn/trunk/procmodel/IO-Counters/beamer/handout.pdf