Re: Load distributed checkpoint V4

From: Heikki Linnakangas
Subject: Re: Load distributed checkpoint V4
Date:
Msg-id: 462C8432.5020101@enterprisedb.com
In reply to: Re: Load distributed checkpoint V4  (ITAGAKI Takahiro <itagaki.takahiro@oss.ntt.co.jp>)
List: pgsql-hackers
ITAGAKI Takahiro wrote:
> Heikki Linnakangas <hlinnaka@iki.fi> wrote:
>> We might want to call GetCheckpointProgress something
>> else, though. It doesn't return the amount of progress made, but rather
>> the amount of progress we should've made by that point; if we fall
>> behind it, we're in danger of not completing the checkpoint in time.
> 
> GetCheckpointProgress might be a bad name; it returns the progress we
> should have made by now, not the progress actually made. How about
> GetCheckpointTargetProgress?

Better. A bit long though. Not that I have any better suggestions ;-)
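
For concreteness, here's a minimal sketch of the quantity in question:
the fraction of the checkpoint work that should be finished at a given
moment, measured against a time budget. The names and GUC values below
are hypothetical placeholders, not the patch's actual code, and the
real logic also factors in WAL segment consumption, not just time:

#include <time.h>

/* Hypothetical stand-ins for the patch's GUCs and checkpoint state. */
static time_t checkpoint_start_time;       /* when the checkpoint began */
static double checkpoint_timeout = 300.0;  /* checkpoint interval, in seconds */
static double completion_target = 0.5;     /* fraction of the interval to spread work over */

/*
 * Fraction of the checkpoint work we *should* have completed by 'now',
 * judged purely by elapsed time against the time budget.
 */
static double
GetCheckpointTargetProgress(time_t now)
{
    double elapsed = difftime(now, checkpoint_start_time);
    double budget = checkpoint_timeout * completion_target;

    if (budget <= 0.0 || elapsed >= budget)
        return 1.0;     /* budget used up: we should already be done */
    return elapsed / budget;
}

Comparing actual progress against this target tells the checkpointer
whether it can keep sleeping or needs to hurry up.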

>> In the sync phase, we sleep between each fsync until enough 
>> time/segments have passed, assuming that the time to fsync is 
>> proportional to the file length. I'm not sure that's a very good 
>> assumption. We might have one huge file with only very little changed
>> data, for example a logging table that is just occasionally appended to.
>> If we begin by fsyncing that, it'll take a very short time to finish, 
>> and we'll then sleep for a long time. If we then have another large file 
>> to fsync, but that one has all pages dirty, we risk running out of time 
>> because of the unnecessarily long sleep. The segmentation of relations 
>> limits the risk of that, though, by limiting the max. file size, and I 
>> don't really have any better suggestions.
> 
> It is difficult to estimate fsync costs. We need additional statistics to
> do it. For example, if we recorded the number of write() calls for each
> segment, we could use that as an estimate of the number of dirty pages in
> the segment. We don't have per-file write statistics now, but if we had
> that information, we could use it to control checkpoints more cleverly.

It's probably not worth it to be too clever with that. Even if we 
recorded the number of writes we made, we still wouldn't know how many 
of them haven't been flushed to disk yet.

I guess we're fine if we just avoid excessive waiting, per the
discussion in the next paragraph, and use a reasonable safety margin in 
the default values.

>> Should we try doing something similar for the sync phase? If there are
>> only 2 small files to fsync, there's no point sleeping for 5 minutes
>> between them just to use up the checkpoint_sync_percent budget.
> 
> Hmmm... if we add a new parameter like kernel_write_throughput [kB/s] and
> clamp the maximum sleep to size-of-segment / kernel_write_throughput (*1),
> we can avoid unnecessary sleeping in the fsync phase. Do we want to have
> such a new parameter? I think we have too many GUC variables already.

How about using the same parameter that controls the minimum write speed 
of the write-phase (the patch used bgwriter_all_maxpages, but I 
suggested renaming it)?
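
As a rough sketch of that clamped pacing (hypothetical names throughout,
not the patch as submitted): after each fsync we sleep to spread the
remaining files over the remaining sync budget, but never longer than
writing the next segment would take at the assumed minimum write rate,
so a couple of small files no longer cost a five minute wait:

#include <unistd.h>

static void
sleep_between_fsyncs(double budget_remaining_secs,
                     int files_remaining,
                     long next_segment_bytes,
                     double min_write_rate_bytes_per_sec)
{
    double sleep_secs;
    double cap_secs;

    if (files_remaining <= 0)
        return;

    /* Even share of the remaining sync budget per remaining file. */
    sleep_secs = budget_remaining_secs / files_remaining;

    /*
     * Clamp: never sleep longer than the next segment would take to
     * write at the assumed minimum rate.
     */
    cap_secs = next_segment_bytes / min_write_rate_bytes_per_sec;
    if (sleep_secs > cap_secs)
        sleep_secs = cap_secs;

    if (sleep_secs > 0.0)
        usleep((useconds_t) (sleep_secs * 1000000.0));
}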

> I don't want to add new parameters any more if possible...

Agreed.

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com

