Discussion: long running commits


long running commits

From:
"Vaughn, Adam (IMS)"
Date:
In the past 2 days we have been experiencing problems on our production server with very long running commit statements which seem to cause the entire instance to become blocked and non-responsive. These are the log messages for the 2 commit statements (these occurred inside of a 1 hour block):

duration: 945599.193 ms  execute S_2: COMMIT
duration: 967507.896 ms  execute S_2: COMMIT

Can anyone provide some assistance in how we should diagnose this problem (tools) and some possible solutions for solving it? Thanks in advance.

Some relevant information about our setup:

PostgreSQL 8.3.9 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.3
Sun Fire X4150, single quad core, 32 GB RAM
Suse Enterprise Linux 9 SP #4
Linux version 2.6.5-7.308-smp (geeko@buildhost) (gcc version 3.3.3 (SuSE Linux)) #1 SMP Mon Dec 10 11:36:40 UTC 2007
NetApp FAS3170, RAID-DP on a 13-disk aggregate.  The disks are 450 GB FC 15K RPM.  The filer is busy, but not overly so; several other projects are pushing more reads/writes than this instance.
Dedicated storage network on a non-blocking wire-speed Extreme switch.  Gigabit connectivity for both piranha and the NetApp 3170.  None of the Ethernet links are exceeding 60% utilization.

Our non-default configuration parameters:

bgwriter_delay                  100
bgwriter_lru_maxpages           100
bgwriter_lru_multiplier         2.2
checkpoint_completion_target    0.9
checkpoint_segments             60
checkpoint_timeout              900
checkpoint_warning              300
cpu_index_tuple_cost            0.0025
cpu_operator_cost               0.0008
cpu_tuple_cost                  0.005
random_page_cost                2
shared_buffers                  6GB
temp_buffers                    48 MB
wal_buffers                     16 MB
wal_writer_delay                200ms
work_mem                        48 MB
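
For reference, the effective values of these parameters, together with the checkpoint and lock-wait logging switches 8.3 provides, can be confirmed from psql. A minimal sketch; nothing in it is specific to this installation:

  -- Effective write/checkpoint settings, plus whether checkpoint and
  -- lock-wait logging are enabled (both GUCs exist in 8.3).
  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('shared_buffers', 'checkpoint_segments', 'checkpoint_timeout',
                 'checkpoint_completion_target', 'bgwriter_delay',
                 'bgwriter_lru_maxpages', 'bgwriter_lru_multiplier',
                 'log_checkpoints', 'log_lock_waits')
  ORDER BY name;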


Re: long running commits

From:
"Kevin Grittner"
Date:
"Vaughn, Adam (IMS)" <VaughnA@imsweb.com> wrote:

> In the past 2 days we have been experiencing problems on our
> production server with very long running commit statements which
> seem to cause the entire instance to become blocked and
> non-responsive. These are the log messages for the 2 commit
> statements (these occurred inside of a 1 hour block)
>
> duration: 945599.193 ms  execute S_2: COMMIT
> duration: 967507.896 ms  execute S_2: COMMIT
>
> Can anyone provide some assistance in how we should diagnose this
> problem (tools) and some possible solutions for solving it?

Look for prior posts by Greg Smith on this topic.  (Or better yet,
buy his book.)  I don't believe anyone has a better handle on this
issue than he does.

That said, if you want emergency relief before you hear from him,
and a safer place from which to start tuning toward your optimal
settings based on the process he describes, I would make these
changes:

> bgwriter_delay                  100
> bgwriter_lru_maxpages         100
> bgwriter_lru_multiplier         2.2

> shared_buffers                6GB

bgwriter_delay                  200
bgwriter_lru_maxpages         1000
bgwriter_lru_multiplier         4

shared_buffers                512MB

These settings may not give you quite the same performance between
lockups, but the lockups should be less frequent and severe, and may
disappear entirely.  You may not want to stay here, but it's
probably a relatively safe place from which to start the process of
incremental adjustments to find your best settings.

The overall idea of these settings is to push more of the writes
toward the disk soon enough to prevent a debilitating glut at fsync
time.  We're getting better at dealing with this in each release, so
you may want to consider moving forward from 8.3 to help with it,
too.
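
To see whether that glut is actually building, a minimal sketch against the stock pg_stat_bgwriter view (present in 8.3): if buffers_checkpoint dwarfs buffers_clean and buffers_backend, nearly all dirty buffers are being written by the checkpoint itself, which is the pattern the settings above try to spread out.

  -- Rough split of who writes dirty buffers: checkpoints, the background
  -- writer, or individual backends.
  SELECT checkpoints_timed,
         checkpoints_req,
         buffers_checkpoint,
         buffers_clean,
         buffers_backend,
         round(100.0 * buffers_checkpoint /
               nullif(buffers_checkpoint + buffers_clean + buffers_backend, 0), 1)
           AS pct_written_at_checkpoint
  FROM pg_stat_bgwriter;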

-Kevin

Re: long running commits

From:
"Kevin Grittner"
Date:
> "Vaughn, Adam (IMS)" <VaughnA@imsweb.com> wrote:
>
> I made all of the changes you mentioned except for the
> shared_buffers (which will require a downtime I have set for
> tonight). I do have another question though, why did you pick 512
> MB for the new setting of shared_buffers? Everything I've ever
> read says that 25% of available RAM is a conservative value for
> shared_buffers.

Well, in general 25% may be the best for overall *throughput*, but
it can often lead to latency spikes, so it depends on what you care
about.  The curve of throughput against shared_buffers has gotten
pretty close to horizontal by around 1GB in a lot of my tests.  It
doesn't, after all, reduce the size of your cache; it affects how
much of the cache access requires an OS call versus staying within
PostgreSQL functions.

> Also, we had another one of these instances earlier today. During
> the 23 minute commit a single CPU was at 98% and it looked like
> all writes were backed up waiting for the commit to finalize.

What other symptoms did you notice?  Did the context switch rate
jump?  If I remember correctly you are going over the network to
your persistent storage -- did you see anything anomalous on the
network around that time?
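
(vmstat or sar on the host will show the context-switch rate.) On the database side, a snapshot taken while the next stall is in progress would show how many backends are stuck behind the commit; a minimal sketch using the 8.3-era pg_stat_activity columns (procpid, waiting, current_query):

  -- Run during the next incident: which backends are waiting, and on what.
  SELECT procpid, usename, waiting,
         now() - query_start AS query_age,
         current_query
  FROM pg_stat_activity
  WHERE waiting
     OR now() - query_start > interval '1 minute'
  ORDER BY query_start;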

Greg Smith has really turned the tuning of this into a science, with
incremental adjustments and some specific monitoring techniques.  I
strongly recommend you look for that on the list archives, in
recordings or slides from his presentations on the topic, or his
recent book.

> During the time our writing never got above 25 MB/s (far less than
> we can handle).

At that rate it should have been able to write your 6GB shared
buffers in about 4.1 minutes.  What did the disk writes *average*
during the incident?  6GB in 23 minutes would be 4.5MB per second.
If your average falls anywhere near that number, you have evidence
that the buffers were mostly dirty for some reason and the
checkpoint glutted on writing them.
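
A quick sanity check of those two figures, runnable in psql:

  -- Time to flush 6 GB at the observed 25 MB/s, and the average write rate
  -- implied by 6 GB spread over the whole 23-minute commit.
  SELECT round(6 * 1024 / 25.0 / 60, 1)   AS minutes_to_flush_6gb_at_25mbps,  -- ~4.1
         round(6 * 1024 / (23 * 60.0), 1) AS avg_mbps_for_6gb_in_23min;       -- ~4.5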

> Is it possible that we're missing an index somewhere or there's
> something else going on?

A missing index would probably show high disk reads from sequential
passes over large tables.  A context switch storm or network problems
seem like the two most likely causes to me, but it would pay to
monitor anything you can to help pin down the bottleneck next time
this happens.
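
If a missing index is the suspicion, the per-table statistics make that easy to check; a minimal sketch against pg_stat_user_tables (available in 8.3), looking for large tables that are read mostly by sequential scan:

  -- Tables doing the most sequential-scan reads; a big table near the top
  -- with few index scans is a candidate for a missing index.
  SELECT schemaname, relname, seq_scan, seq_tup_read, idx_scan, n_live_tup
  FROM pg_stat_user_tables
  ORDER BY seq_tup_read DESC
  LIMIT 20;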

-Kevin

Re: long running commits

From:
Robert Treat
Date:
On Wed, Mar 2, 2011 at 2:04 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
>> "Vaughn, Adam (IMS)" <VaughnA@imsweb.com> wrote:
>>
>> I made all of the changes you mentioned except for the
>> shared_buffers (which will require a downtime I have set for
>> tonight). I do have another question though, why did you pick 512
>> MB for the new setting of shared_buffers? Everything I've ever
>> read says that 25% of available RAM is a conservative value for
>> shared_buffers.
>
> Well, in general 25% may be the best for overall *throughput*, but
> it can often lead to latency spikes, so it depends on what you care
> about.  The curve of throughput against shared_buffers has gotten
> pretty close to horizontal by around 1GB in a lot of my tests.

Yeah, it's worth pointing out that either you (the OP) are reading the
wrong stuff, or interpreting it wrong. 25% is usually where people
will tell you to start, and then tune it *down* (change, measure,
asses, repeat). Won't work for every workload, but in general that's
the way to go.

Robert Treat
play: xzilla.net
work: omniti.com
hiring: l42.org/Lg

Re: long running commits

From:
"Joshua D. Drake"
Date:
On Wed, 2011-03-02 at 23:44 -0500, Robert Treat wrote:

> Yeah, it's worth pointing out that either you (the OP) are reading the
> wrong stuff, or interpreting it wrong. 25% is usually where people
> will tell you to start, and then tune it *down* (change, measure,
> asses, repeat).

assess robert... not asses. We have enough of those, myself included :P


>  Won't work for every workload, but in general that's
> the way to go.
>
> Robert Treat
> play: xzilla.net
> work: omniti.com
> hiring: l42.org/Lg
>

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt


Re: long running commits

From:
"Vaughn, Adam (IMS)"
Date:
Thanks for the suggestions. I made all of the changes you mentioned except for the shared_buffers (which will require a downtime I have set for tonight). I do have another question though, why did you pick 512 MB for the new setting of shared_buffers? Everything I've ever read says that 25% of available RAM is a conservative value for shared_buffers.

Also, we had another one of these instances earlier today. During the 23 minute commit a single CPU was at 98% and it looked like all writes were backed up waiting for the commit to finalize. During the time our writing never got above 25 MB/s (far less than we can handle). Is it possible that we're missing an index somewhere or there's something else going on?

Thanks again in advance

-----Original Message-----
From: Kevin Grittner [mailto:Kevin.Grittner@wicourts.gov]
Sent: Friday, February 25, 2011 5:45 PM
To: Vaughn, Adam (IMS); 'pgsql-admin@postgresql.org'
Cc: Depuy, Scott (IMS); Meagher, Kevin (IMS)
Subject: Re: [ADMIN] long running commits

"Vaughn, Adam (IMS)" <VaughnA@imsweb.com> wrote:

> In the past 2 days we have been experiencing problems on our
> production server with very long running commit statements which
> seem to cause the entire instance to become blocked and
> non-responsive. These are the log messages for the 2 commit
> statements (these occurred inside of a 1 hour block)
>
> duration: 945599.193 ms  execute S_2: COMMIT
> duration: 967507.896 ms  execute S_2: COMMIT
>
> Can anyone provide some assistance in how we should diagnose this
> problem (tools) and some possible solutions for solving it?

Look for prior posts by Greg Smith on this topic.  (Or better yet,
buy his book.)  I don't believe anyone has a better handle on this
issue than he does.

That said, if you want emergency relief before you hear from him,
and a safer place from which to start tuning toward your optimal
settings based on the process he describes, I would make these
changes:

> bgwriter_delay                  100
> bgwriter_lru_maxpages         100
> bgwriter_lru_multiplier         2.2

> shared_buffers                6GB

bgwriter_delay                  200
bgwriter_lru_maxpages         1000
bgwriter_lru_multiplier         4

shared_buffers                512MB

These settings may not give you quite the same performance between
lockups, but the lockups should be less frequent and severe, and may
disappear entirely.  You may not want to stay here, but it's
probably a relatively safe place from which to start the process of
incremental adjustments to find your best settings.

The overall idea of these settings is to push more of the writes
toward the disk soon enough to prevent a debilitating glut at fsync
time.  We're getting better at dealing with this in each release, so
you may want to consider moving forward from 8.3 to help with it,
too.

-Kevin


Re: long running commits

From:
Kenneth Marshall
Date:
On Wed, Mar 02, 2011 at 01:10:37PM -0500, Vaughn, Adam (IMS) wrote:
> Thanks for the suggestions. I made all of the changes you mentioned except for the shared_buffers (which will require a downtime I have set for tonight). I do have another question though, why did you pick 512 MB for the new setting of shared_buffers? Everything I've ever read says that 25% of available RAM is a conservative value for shared_buffers.
>
> Also, we had another one of these instances earlier today. During the 23 minute commit a single CPU was at 98% and it looked like all writes were backed up waiting for the commit to finalize. During the time our writing never got above 25 MB/s (far less than we can handle). Is it possible that we're missing an index somewhere or there's something else going on?
>
> Thanks again in advance
>

Hi Adam,

Having recently been benchmarking NFS filesystem performance with
random I/O streams, I think that you do not have the I/O performance
from your filer that you think you do. A 13-disk RAID-DP gives you
at best 11 spindles for random I/O, and a single spindle can give at
best 5 MB/sec write throughput, for an aggregate throughput of 55 MB/sec
if you were the only user of the filer. It is not unreasonable to
assume that you only have 50% of the total throughput available, since
the filer is shared, as you stated in your original message. The changes
that have been suggested should allow you to keep the write needs
below your apparent 27 MB/sec cap and reduce or eliminate the pauses.
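
The arithmetic behind that cap, spelled out (the 5 MB/sec per-spindle figure is an estimate from the benchmarking above, not a measurement of this particular filer):

  -- 13-disk RAID-DP leaves ~11 data spindles; ~5 MB/sec random write each;
  -- roughly half the filer's throughput available to this instance.
  SELECT 11 * 5       AS aggregate_write_mbps,   -- 55 MB/sec with the filer to yourself
         11 * 5 * 0.5 AS effective_write_mbps;   -- ~27 MB/sec when sharing it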

Cheers,
Ken