Re: CLOG extension

From Simon Riggs
Subject Re: CLOG extension
Date
Msg-id CA+U5nMKa-jXoz=OiAMLRfgA5Sv6EJ6eALw-NY7pRdjYnGhMGvg@mail.gmail.com
In response to Re: CLOG extension  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: CLOG extension  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On Thu, May 3, 2012 at 7:50 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, May 3, 2012 at 1:27 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
>> Why not switch to 1 WAL record per file, rather than 1 per page. (32
>> pages, IIRC).
>>
>> We can then have the whole new file written as zeroes by a background
>> process, which needn't do that while holding the XidGenLock.
>
> I thought about doing a single record covering a larger number of
> pages, but that would be an even bigger hit if it were ever to occur
> in the foreground path, so you'd want to be very sure that the
> background process was going to absorb all the work.  And if the
> background process is going to absorb all the work, then I'm not sure
> it matters very much whether we emit one xlog record or 32.  After all
> it's pretty low volume compared to all the other xlog traffic.  Maybe
> there's some room for optimization here, but it doesn't seem like the
> first thing to pursue.
>
> Doing it in a background process, though, may make sense.  What I'm a
> little worried about is that - on a busy system - we've only got about
> 2 seconds to complete each CLOG extension, and we must do an fsync in
> order to get there.  And the fsync can easily take a good chunk of (or
> even more than) that two seconds.  So it's possible that saddling the
> bgwriter with this responsibility would be putting too many eggs in
> one basket.  We might find that under the high-load scenarios where
> this is supposed to help, bgwriter is already too busy doing other
> things, and it doesn't get around to extending CLOG quickly enough.
> Or, conversely, we might find that it does get around to extending
> CLOG quickly enough, but consequently fails to carry out its regular
> duties.  We could of course add a NEW background process just for this
> purpose, but it'd be nicer if we didn't have to go that far.

Your two paragraphs have roughly opposite arguments...

Doing it every 32 pages would give you about 30 seconds to complete
the fsync at current maximum rates, if you kicked it off halfway
through the previous file. So there is utility in doing it in larger
chunks.
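
For reference, here is the rough arithmetic behind the "2 seconds per
page" and "~30 seconds per half segment" figures, written out as a tiny
standalone C program. The ~16k transactions/second "busy system" rate
is my assumption for illustration, not a number taken from the thread:

    /* Back-of-envelope check of the CLOG fill rates discussed above.
     * ASSUMPTION: ~16,000 xacts/sec as the sustained peak rate. */
    #include <stdio.h>

    #define BLCKSZ                 8192   /* bytes per CLOG page */
    #define CLOG_XACTS_PER_BYTE    4      /* 2 status bits per transaction */
    #define SLRU_PAGES_PER_SEGMENT 32     /* pages per CLOG segment file */

    int
    main(void)
    {
        double xacts_per_sec  = 16000.0;  /* assumed peak transaction rate */
        double xacts_per_page = (double) BLCKSZ * CLOG_XACTS_PER_BYTE;   /* 32768 */
        double xacts_per_seg  = xacts_per_page * SLRU_PAGES_PER_SEGMENT; /* ~1.05M */

        printf("time to fill one page:       %.1f s\n",
               xacts_per_page / xacts_per_sec);          /* ~2 s */
        printf("time to fill half a segment: %.1f s\n",
               xacts_per_seg / 2.0 / xacts_per_sec);     /* ~33 s */
        return 0;
    }

In other words, one 8kB page covers 32768 transactions (2 status bits
each), so a whole 32-page segment covers about a million; starting on
the next segment when the current one is half full leaves an order of
magnitude more slack than the per-page scheme.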

If it is too slow, we would just wait for sync like we do now.

I think we need another background process since we have both cleaning
and pre-allocating tasks to perform.
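
To make the pre-allocation side concrete, here is a minimal sketch of
what such a background process would do, in plain POSIX C rather than
the actual slru.c machinery (the pre_zero_segment() helper and the
pg_clog/0042.new path are made-up names for illustration): write the
whole 32-page segment as zeroes and absorb the fsync up front, outside
XidGenLock.

    /* Illustrative sketch only: pre-zero one 32-page (256kB) SLRU-style
     * segment file from a background process so the foreground XID
     * assignment path never waits for the zeroing or the fsync. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BLCKSZ                 8192  /* PostgreSQL's default page size */
    #define SLRU_PAGES_PER_SEGMENT 32    /* one CLOG segment = 32 pages */

    static int
    pre_zero_segment(const char *path)   /* hypothetical helper name */
    {
        char    zeroes[BLCKSZ];
        int     fd;
        int     i;

        memset(zeroes, 0, sizeof(zeroes));

        fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
        if (fd < 0)
            return -1;

        /* Write all 32 zero pages up front ... */
        for (i = 0; i < SLRU_PAGES_PER_SEGMENT; i++)
        {
            if (write(fd, zeroes, BLCKSZ) != BLCKSZ)
            {
                close(fd);
                return -1;
            }
        }

        /* ... and take the fsync hit here, in the background, not while
         * anyone is holding XidGenLock. */
        if (fsync(fd) != 0)
        {
            close(fd);
            return -1;
        }
        return close(fd);
    }

    int
    main(void)
    {
        /* Made-up segment name; real CLOG segments live under pg_clog/
         * and are named by segment number. */
        if (pre_zero_segment("pg_clog/0042.new") != 0)
        {
            perror("pre_zero_segment");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }

In this sketch the foreground path would only have to start using the
already-zeroed, already-synced file, which is cheap compared to doing
the zeroing and the fsync while holding the lock.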

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

