Re: Speed up transaction completion faster after many relations are accessed in a transaction

From: David Rowley
Subject: Re: Speed up transaction completion faster after many relations are accessed in a transaction
Date:
Msg-id: CAKJS1f8wHjmu_tALMZpqOktKFsAsKneS6aOvkVRV2ezcvYBCSw@mail.gmail.com
In reply to: Re: Speed up transaction completion faster after many relations are accessed in a transaction  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Speed up transaction completion faster after many relations are accessed in a transaction  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
Re: Speed up transaction completion faster after many relations are accessed in a transaction  (Alvaro Herrera <alvherre@2ndquadrant.com>)
Re: Speed up transaction completion faster after many relations are accessed in a transaction  (David Rowley <dgrowleyml@gmail.com>)
List: pgsql-hackers
On Thu, 25 Jul 2019 at 05:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> On the whole, I don't especially like this approach, because of the
> confusion between peak lock count and end-of-xact lock count.  That
> seems way too likely to cause problems.

Thanks for having a look at this.  I've not addressed the other points
you raised because of the issue you mention above.  The only way I can
think of so far to resolve that would be to add something to track
peak lock usage.  The best idea I have for doing that, short of adding
something to dynahash.c, is to check how many locks are held each time
we obtain a lock and, if that count is higher than the previous
maximum, update the maximum locks held (probably in a global
variable).  That seems pretty horrible to me, and it adds overhead
each time we obtain a lock, which is a performance-critical path.
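
To be concrete about why that seems unappealing, here's a rough sketch
of the idea.  The max_locks_held variable and the TrackPeakLocalLocks
name are invented for illustration only; the call would have to go
somewhere in the LockAcquire() path in lock.c, where LockMethodLocalHash
is visible, and that call is exactly the per-acquisition overhead I'd
rather avoid.

/*
 * Illustrative sketch only, not a proposed patch.  Track the high-water
 * mark of the backend-local lock table each time a lock is acquired.
 * "max_locks_held" and "TrackPeakLocalLocks" are made-up names.
 */
static long max_locks_held = 0;     /* peak entries in LockMethodLocalHash */

static inline void
TrackPeakLocalLocks(void)
{
    long        nlocks = hash_get_num_entries(LockMethodLocalHash);

    if (nlocks > max_locks_held)
        max_locks_held = nlocks;
}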

I've not yet tested what Andres mentioned about using simplehash
instead of dynahash.  From a quick scan of simplehash, it looks like
SH_START_ITERATE would suffer from the same problem as dynahash's
hash_seq_search(), albeit perhaps with more efficient memory access:
it still has to skip over empty buckets, which could be costly in a
bloated table.
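
To illustrate the concern with a simplified, self-contained example
(this is a toy open-addressing table, not PostgreSQL code): a full
scan has to walk every bucket, so its cost is tied to the table size
rather than to the number of live entries.

/* Toy illustration: scanning a sparse table costs O(nbuckets), not
 * O(live entries), because empty buckets must still be visited and
 * skipped one by one. */
#include <stdbool.h>
#include <stddef.h>

typedef struct Bucket
{
    bool        in_use;
    int         key;            /* real entries would carry more payload */
} Bucket;

static size_t
scan_all_entries(const Bucket *buckets, size_t nbuckets)
{
    size_t      found = 0;

    for (size_t i = 0; i < nbuckets; i++)
    {
        if (buckets[i].in_use)
            found++;            /* "process" the live entry */
    }
    /* The loop ran nbuckets iterations no matter how few entries were live. */
    return found;
}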

For now, I'm out of ideas. If anyone else feels like suggesting
something or picking this up, feel free.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


