Re: Controlling Load Distributed Checkpoints

From: Heikki Linnakangas
Subject: Re: Controlling Load Distributed Checkpoints
Date:
Msg-id: 46691869.9070300@enterprisedb.com
In reply to: Re: Controlling Load Distributed Checkpoints  (Greg Smith <gsmith@gregsmith.com>)
Responses: Re: Controlling Load Distributed Checkpoints  (Andrew Sullivan <ajs@crankycanuck.ca>)
List: pgsql-hackers
Greg Smith wrote:
> On Thu, 7 Jun 2007, Heikki Linnakangas wrote:
> 
>> So there's two extreme ways you can use LDC:
>> 1. Finish the checkpoint as soon as possible, without disturbing other 
>> activity too much
>> 2. Disturb other activity as little as possible, as long as the 
>> checkpoint finishes in a reasonable time.
>> Are both interesting use cases, or is it enough to cater for just one 
>> of them? I think 2 is easier to tune.
> 
> The motivation for the (1) case is that you've got a system that's 
> dirtying the buffer cache very fast in normal use, where even the 
> background writer is hard pressed to keep the buffer pool clean.  The 
> checkpoint is the most powerful and efficient way to clean up many dirty 
> buffers out of such a buffer cache in a short period of time so that 
> you're back to having room to work in again.  In that situation, since 
> there are many buffers to write out, you'll also be suffering greatly 
> from fsync pauses.  Being able to synchronize writes a little better 
> with the underlying OS to smooth those out is a huge help.

ISTM the bgwriter just isn't working hard enough in that scenario. 
Assuming we get the lru autotuning patch in 8.3, do you think there's 
still merit in using the checkpoints that way?

> I'm completely biased because of the workloads I've been dealing with 
> recently, but I consider (2) so much easier to tune for that it's barely 
> worth worrying about.  If your system is so underloaded that you can let 
> the checkpoints take their own sweet time, I'd ask if you have enough 
> going on that you're suffering very much from checkpoint performance 
> issues anyway.  I'm used to being in a situation where if you don't push 
> out checkpoint data as fast as physically possible, you end up fighting 
> with the client backends for write bandwidth once the LRU point moves 
> past where the checkpoint has written out to already.  I'm not sure how 
> much always running the LRU background writer will improve that situation.

I'd think it eliminates the problem. Assuming we keep the LRU cleaning 
running as usual, I don't see how writing faster during checkpoints 
could ever be beneficial for concurrent activity. The more you write, 
the less bandwidth there is available for others.

Doing the checkpoint as quickly as possible might be slightly better for 
average throughput, but that's a different matter.

> On every system I've ever played with Postgres write performance on, I 
> discovered that the memory-based parameters like dirty_background_ratio 
> were really driving write behavior, and I almost ignore the expire 
> timeout now.  Plotting the "Dirty:" value in /proc/meminfo as you're 
> running tests is extremely informative for figuring out what Linux is 
> really doing underneath the database writes.

Interesting. I haven't touched any of the kernel parameters yet in my 
tests. It seems we need to try different parameters and see how the 
dynamics change. But we must also keep in mind that the average DBA 
doesn't change any settings, and might not even be able or allowed to. 
That means the defaults should work reasonably well without tweaking the 
OS settings.
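Greg's suggestion of watching the "Dirty:" value while tests run is easy to automate. A minimal sketch, assuming the standard Linux /proc/meminfo format; the parser is exercised against an embedded sample (with made-up numbers) so it runs anywhere, but on a real box you would read the live file in a loop:

```python
import re

def parse_dirty_kb(meminfo_text):
    """Extract the 'Dirty:' value (in kB) from /proc/meminfo-style text."""
    m = re.search(r"^Dirty:\s+(\d+)\s+kB", meminfo_text, re.MULTILINE)
    return int(m.group(1)) if m else None

# Sample snippet in /proc/meminfo format (values are illustrative only)
sample = """MemTotal:        4048564 kB
MemFree:          120320 kB
Dirty:             82044 kB
Writeback:           512 kB
"""

if __name__ == "__main__":
    # On Linux, replace 'sample' with open('/proc/meminfo').read(),
    # sample once per second, and plot the series during a benchmark run.
    print(parse_dirty_kb(sample))  # -> 82044
```

Plotting that series against checkpoint start/end times makes it obvious whether the kernel, not PostgreSQL, is the one actually scheduling the writes.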

> The influence of the congestion code is why I made the comment about 
> watching how long writes are taking to gauge how fast you can dump data 
> onto the disks.  When you're suffering from one of the congestion 
> mechanisms, the initial writes start blocking, even before the fsync. 
> That behavior is almost undocumented outside of the relevant kernel 
> source code.

Yeah, that's controlled by dirty_ratio, if I've understood the 
parameters correctly. If we spread out the writes enough, we shouldn't 
hit that limit or congestion. That's the point of the patch.
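The two limits involved are simple percentages of memory, so it's worth seeing the actual byte counts they translate to. A hedged sketch, assuming the usual semantics: vm.dirty_background_ratio is where background writeback kicks in, vm.dirty_ratio is where writers start blocking; the 10%/40% figures below are illustrative, not a claim about any particular kernel's defaults:

```python
def dirty_thresholds_kb(total_mem_kb, background_ratio, dirty_ratio):
    """Return (background_kb, blocking_kb): the amount of dirty data that
    triggers background writeback vs. synchronous throttling of writers."""
    background_kb = total_mem_kb * background_ratio // 100
    blocking_kb = total_mem_kb * dirty_ratio // 100
    return background_kb, blocking_kb

# Illustrative: a 4 GB machine with a 10% background / 40% blocking split
bg, block = dirty_thresholds_kb(4 * 1024 * 1024, 10, 40)
print(bg, block)  # -> 419430 1677721
```

The point of spreading the checkpoint writes is to keep the dirty total comfortably below the blocking threshold, so backends never get throttled by the kernel mid-transaction.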

Do you have time / resources to do testing? You've clearly spent a lot 
of time on this, and I'd be very interested to see some actual numbers 
from your tests with various settings.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com

