Re: Reducing relation locking overhead

From: Gregory Maxwell
Subject: Re: Reducing relation locking overhead
Date:
Msg-id: e692861c0512021301m69ebb3e2sceff8ad174c37dc0@mail.gmail.com
In reply to: Re: Reducing relation locking overhead  (Greg Stark <gsstark@mit.edu>)
Responses: Re: Reducing relation locking overhead  (Alvaro Herrera <alvherre@commandprompt.com>)
List: pgsql-hackers
On 02 Dec 2005 15:25:58 -0500, Greg Stark <gsstark@mit.edu> wrote:
> I suspect this comes out of a very different storage model from Postgres's.
>
> Postgres would have no trouble building an index of the existing data using
> only shared locks. The problem is that any newly inserted (or updated) records
> could be missing from such an index.
>
> To do it you would then have to gather up all those newly inserted records.
> And of course while you're doing that new records could be inserted. And so
> on. There's no guarantee it would ever finish, though I suppose you could
> detect the situation if the size of the new batch wasn't converging to 0 and
> throw an error.

After you're mostly caught up, change the locking behavior to block
further updates while the final catch-up happens. This could be driven
by a heuristic that says: make up to N attempts to catch up without
blocking, and after that just take a lock and finish the job.
Presumably the final catch-up would be short compared to the rest of
the work.
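
For illustration, here's a minimal, self-contained sketch of that
loop in Python. The Table class and its lock are toy stand-ins for
the real table and update blocking, not PostgreSQL internals:

    import threading

    # Toy stand-in: a "table" whose write lock models blocking updates.
    class Table:
        def __init__(self):
            self.rows = []
            self.write_lock = threading.Lock()

        def insert(self, row):
            with self.write_lock:
                self.rows.append(row)

    def concurrent_index_build(table, max_passes=3):
        index, seen = [], 0
        # Non-blocking passes: index what exists, then catch up on
        # rows inserted while we were working.
        for _ in range(max_passes):
            new_rows = table.rows[seen:]
            if not new_rows:
                return index          # converged without ever blocking
            index.extend(new_rows)
            seen += len(new_rows)
        # Not converging: block further inserts for the final catch-up.
        with table.write_lock:
            index.extend(table.rows[seen:])
        return index

Each pass should be shorter than the last as long as inserts arrive
more slowly than the catch-up can index them, which is what makes the
final locked pass cheap.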

Are there environments which could not tolerate even this minimal hit?
Probably, which leaves the choice of telling them 'don't reindex then'
or providing a knob which would tell it to never block (it would just
try N times and then give up, failing the reindex).
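
Under the same toy model above, honoring such a hypothetical
never-block knob just replaces the final locking pass with a failure:

    def concurrent_index_build_nonblocking(table, max_passes=3):
        index, seen = [], 0
        for _ in range(max_passes):
            new_rows = table.rows[seen:]
            if not new_rows:
                return index          # converged; nobody was blocked
            index.extend(new_rows)
            seen += len(new_rows)
        # Knob says never block: abandon the build and fail the reindex.
        raise RuntimeError("catch-up did not converge after %d passes"
                           % max_passes)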

