Re: Speed up transaction completion faster after many relations are accessed in a transaction

From	Tom Lane
Subject	Re: Speed up transaction completion faster after many relations are accessed in a transaction
Date
Msg-id	18820.1554646821@sss.pgh.pa.us
In response to	Re: Speed up transaction completion faster after many relations are accessed in a transaction  (David Rowley <david.rowley@2ndquadrant.com>)
Responses	Re: Speed up transaction completion faster after many relations are accessed in a transaction  (David Rowley <david.rowley@2ndquadrant.com>)
List	pgsql-hackers
David Rowley <david.rowley@2ndquadrant.com> writes:
> On Sat, 6 Apr 2019 at 16:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> My own thought about how to improve this situation was just to destroy
>> and recreate LockMethodLocalHash at transaction end (or start)
>> if its size exceeded $some-value.  Leaving it permanently bloated seems
>> like possibly a bad idea, even if we get rid of all the hash_seq_searches
>> on it.

> Which I thought was an okay idea.  I think the one advantage that
> would have over making hash_seq_search() faster for large and mostly
> empty tables is that over-sized hash tables are just not very cache
> efficient, and if we don't need it to be that large then we should
> probably consider making it smaller again.

> I've had a go at implementing this and using Amit's benchmark the
> performance looks pretty good. I can't detect any slowdown for the
> general case.

I like the concept ... but the particular implementation, not so much.
It seems way overcomplicated.  In the first place, why should we
add code to copy entries?  Just don't do it except when the table
is empty.  In the second, I think we could probably have a far
cheaper test for how big the table is --- maybe we'd need to
expose some function in dynahash.c, but the right way here is just
to see how many buckets there are.  I don't like adding statistics
counting for this, because it's got basically nothing to do with
what the actual problem is.  (If you acquire and release one lock,
and do that over and over, you don't have a bloat problem no
matter how many times you do it.)

LockMethodLocalHash is special in that it predictably goes to empty
at the end of every transaction, so that de-bloating at that point
is a workable strategy.  I think we'd probably need something more
robust if we were trying to fix this generally for all hash tables.
But if we're going to go with the one-off hack approach, we should
certainly try to keep that hack as simple as possible.

            regards, tom lane



Browse pgsql-hackers by date:

Previous
From: David Fetter
Date:
Message: Re: [PATCH] Implement uuid_version()
Next
From: David Rowley
Date:
Message: Re: Speed up transaction completion faster after many relations are accessed in a transaction