Re: Speed up transaction completion faster after many relations are accessed in a transaction

From: David Rowley
Subject: Re: Speed up transaction completion faster after many relations are accessed in a transaction
Date:
Msg-id: CAKJS1f-rCUtQNpbNRf-pVWC+iuyc6EU28jDKMOboNV+hO2We3w@mail.gmail.com
In response to: RE: Speed up transaction completion faster after many relations are accessed in a transaction  ("Tsunakawa, Takayuki" <tsunakawa.takay@jp.fujitsu.com>)
Responses: RE: Speed up transaction completion faster after many relations are accessed in a transaction  ("Tsunakawa, Takayuki" <tsunakawa.takay@jp.fujitsu.com>)
List: pgsql-hackers
On Mon, 22 Jul 2019 at 12:48, Tsunakawa, Takayuki
<tsunakawa.takay@jp.fujitsu.com> wrote:
>
> From: David Rowley [mailto:david.rowley@2ndquadrant.com]
> > I went back to the drawing board on this and I've added some code that counts
> > the number of times we've seen the table to be oversized and just shrinks
> > the table back down on the 1000th time.  6.93% / 1000 is not all that much.
>
> I'm afraid this kind of hidden behavior would appear mysterious to users.  They may wonder "Why is the same query
> fast at first in the session (5 or 6 times of execution), then gets slower for a while, and gets faster again?  Is there
> something to tune?  Am I missing something wrong with my app (e.g. how to use prepared statements)?"  So I prefer v5.

I personally don't think that's true.  The only way you'll notice the
LockReleaseAll() overhead is to execute very fast queries with a
bloated lock table.  It's pretty hard to notice that a single 0.1ms
query is slow.  You'd need to execute thousands of them before you
could measure it, and once you've done that, the lock shrink code
will have run and the query will be performing optimally again.

I voiced my concerns with v5, and I wasn't really willing to push it
with a known performance regression of 7% in a fairly valid case.  v6
does not suffer from that.

> > Of course, not all the extra overhead might be from rebuilding the table,
> > so here's a test with the updated patch.
>
> Where else does the extra overhead come from?

hash_get_num_entries(LockMethodLocalHash) == 0 &&
hash_get_max_bucket(LockMethodLocalHash) >
LOCKMETHODLOCALHASH_SHRINK_THRESHOLD)

that's executed every time, not every 1000 times.
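To illustrate the amortization idea being discussed, here is a minimal, self-contained sketch of counting oversized occurrences and only rebuilding on every Nth one. The names (`should_shrink`, `SHRINK_EVERY_N`) and threshold values are hypothetical stand-ins for the patch's actual `hash_get_max_bucket()` check and constants, not the real PostgreSQL code:

```c
#include <stdbool.h>

/* Illustrative constants, not the patch's actual values. */
#define SHRINK_THRESHOLD 64     /* stand-in for the max-bucket threshold */
#define SHRINK_EVERY_N   1000   /* shrink only on the 1000th oversized hit */

static int oversized_count = 0;

/*
 * Hypothetical gate: the oversized condition is still evaluated on every
 * call (as in the patch), but the expensive table rebuild is only signalled
 * once per SHRINK_EVERY_N consecutive oversized observations, amortizing
 * its cost across many cheap transactions.
 */
static bool
should_shrink(long num_entries, long max_bucket)
{
    if (num_entries == 0 && max_bucket > SHRINK_THRESHOLD)
    {
        if (++oversized_count >= SHRINK_EVERY_N)
        {
            oversized_count = 0;
            return true;        /* caller rebuilds the hash table now */
        }
    }
    return false;               /* skip the rebuild this time */
}
```

The cheap comparison runs every time; only the rebuild is deferred, which is why its per-call cost still shows up in profiles even though the shrink itself is rare.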

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


