Re: autovac issue with large number of tables

From: Tom Lane
Subject: Re: autovac issue with large number of tables
Date:
Msg-id: 17808.1597168008@sss.pgh.pa.us
In reply to: Re: autovac issue with large number of tables  (Jim Nasby <nasbyj@amazon.com>)
Responses: Re: autovac issue with large number of tables  (Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>)
List: pgsql-hackers
Jim Nasby <nasbyj@amazon.com> writes:
> Without reading the 100+ emails or the 260k patch, I'm guessing that it 
> won't help because the problem I observed was spending most of its time in
>    42.62% postgres          [.] hash_search_with_hash_value
> I don't see how moving things to shared memory would help that at all.

So I'm a bit mystified as to why that would show up as the primary cost.
It looks to me like we force a re-read of the pgstats data each time
through table_recheck_autovac(), and it seems like the costs associated
with that would swamp everything else in the case you're worried about.
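For reference, the shape of the current code is roughly this (a
from-memory paraphrase of v13's autovacuum.c, not the verbatim source):

    /* paraphrased: what every table_recheck_autovac() call does today */
    static autovac_table *
    table_recheck_autovac(Oid relid, ...)
    {
        PgStat_StatTabEntry *tabentry;

        /*
         * In a worker this throttles nothing; it just calls
         * pgstat_clear_snapshot(), discarding the cached stats ...
         */
        autovac_refresh_stats();

        /*
         * ... so this fetch re-reads and re-parses the entire stats
         * file, rebuilding the whole hash table for one lookup.
         */
        tabentry = pgstat_fetch_stat_tabentry(relid);
        ...
    }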

I suspect that the bulk of the hash_search_with_hash_value costs are
HASH_ENTER calls caused by repopulating the pgstats hash table, rather
than the single read probe that table_recheck_autovac itself will do.
It's still surprising that that would dominate the other costs of reading
the data, but maybe those costs just aren't as well localized in the code.
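That theory fits the arithmetic, too: the stats-file reader does one
HASH_ENTER per table record, so a worker that re-reads the file before
each of its K rechecks in a database of N tables pays on the order of
K * N insertions, versus the K find-probes the rechecks themselves
need.  Paraphrasing the reader loop (pgstat_read_db_statsfile() in
pgstat.c, again from memory):

    /*
     * paraphrased: one HASH_ENTER per table record in the stats file,
     * repeated every time the snapshot is rebuilt
     */
    case 'T':
        if (fread(&tabbuf, 1, sizeof(PgStat_StatTabEntry), fpin)
            != sizeof(PgStat_StatTabEntry))
            goto done;
        tabentry = (PgStat_StatTabEntry *) hash_search(tabhash,
                                                       &tabbuf.tableid,
                                                       HASH_ENTER,
                                                       &found);
        /*
         * hash_search() is a thin wrapper around
         * hash_search_with_hash_value(), which is why the cost lands
         * on that symbol in the profile.
         */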

So I think Kasahara-san's point is that the shared memory stats collector
might wipe out those costs, depending on how it's implemented.  (I've not
looked at that patch in a long time either, so I don't know how much it'd
cut the reader-side costs.  But maybe it'd be substantial.)

In the meantime, though, do we want to do something else to alleviate
the issue?  I realize you only described your patch as a PoC, but I
can't say I like it much:

* Giving up after we've wasted 1000 pgstats re-reads seems like locking
the barn door only after the horse is well across the state line.

* I'm not convinced that the business with skipping N entries at a time
buys anything.  You'd have to make pretty strong assumptions about the
workers all processing tables at about the same rate to believe it will
help.  In the worst case, it might lead to all the workers ignoring the
same table(s).

I think the real issue here is autovac_refresh_stats's insistence that it
shouldn't throttle pgstats re-reads in workers.  I see the point about not
wanting to repeat vacuum work on the basis of stale data, but still ...
I wonder if we could have table_recheck_autovac do two probes of the stats
data.  First probe the existing stats data, and if it shows the table to
be already vacuumed, return immediately.  If not, *then* force a stats
re-read, and check a second time.
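In sketch form (a hypothetical restructuring only, not a patch; the
helper names are the existing ones in autovacuum.c):

    /* hypothetical two-probe version of table_recheck_autovac() */

    /* Probe 1: use whatever stats snapshot we already have. */
    tabentry = pgstat_fetch_stat_tabentry(relid);
    relation_needs_vacanalyze(relid, avopts, classForm, tabentry,
                              effective_multixact_freeze_max_age,
                              &dovacuum, &doanalyze, &wraparound);
    if (!dovacuum && !doanalyze)
        return NULL;    /* another worker already did it; we never
                         * paid for a stats-file re-read */

    /* Probe 2: only now force a fresh read, and check again. */
    autovac_refresh_stats();
    tabentry = pgstat_fetch_stat_tabentry(relid);
    /* ... re-run relation_needs_vacanalyze() on the fresh data ... */

If I'm reasoning correctly, the first probe can only err in the
direction of doing the expensive re-read unnecessarily, since the
cached snapshot is at least as new as the one that put the table on
the worker's to-do list in the first place.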

BTW, can you provide a test script that reproduces the problem you're
looking at?  The rest of us are kind of guessing at what's happening.

            regards, tom lane


