Re: Custom explain options

From: Konstantin Knizhnik
Subject: Re: Custom explain options
Msg-id: de6001a6-3f58-4ee8-abef-b046908b04a4@garret.ru
In reply to: Re: Custom explain options (Tomas Vondra <tomas.vondra@enterprisedb.com>)
Responses: Re: Custom explain options
List: pgsql-hackers


On 15/01/2024 5:08 pm, Tomas Vondra wrote:

My patch does not care about prefetching internal index pages. Yes, it's
a limitation, but my assumption is the internal pages are maybe 0.1% of
the index, and typically very hot / cached. Yes, if the index is not
used very often, this may be untrue. But I consider it a possible future
improvement, for some other patch. FWIW there's a prefetching patch for
inserts into indexes (which only prefetches just the index leaf pages).

We have to prefetch pages at the height-1 level (parents of leaf pages) for IOS, because otherwise the prefetch pipeline is broken at each transition to the next leaf page.
When we start with a new leaf page, we have to fill the prefetch ring from scratch, which certainly has a negative impact on performance.
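What I have in mind is roughly the following (just a sketch; both helpers are hypothetical stand-ins for the real nbtree internals - prefetch_block() would be something like PrefetchBuffer()):

typedef unsigned int BlockNumber;

/* Hypothetical: extract the block numbers of the leaf pages that follow
 * curr_leaf from its parent (height-1) page, instead of walking the
 * leaf right-links. */
extern int parent_next_leaves(BlockNumber parent, BlockNumber curr_leaf,
                              BlockNumber *out, int max);
extern void prefetch_block(BlockNumber blkno);

static void
keep_ios_pipeline_full(BlockNumber parent, BlockNumber curr_leaf,
                       int ring_slack)
{
    BlockNumber next[64];
    int         n;

    if (ring_slack > 64)
        ring_slack = 64;
    n = parent_next_leaves(parent, curr_leaf, next, ring_slack);

    /* Issue prefetches for the upcoming leaf pages while we are still
     * returning tuples from curr_leaf, so the prefetch ring does not
     * have to be refilled from scratch at the leaf boundary. */
    for (int i = 0; i < n; i++)
        prefetch_block(next[i]);
}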


Not sure I understand what this is about. The patch simply calls the
index AM function index_getnext_tid() enough times to fill the prefetch
queue. It does not prefetch the next index leaf page, it however does
prefetch the heap pages. It does not "stall" at the boundary of the
index leaf page, or something.

Ok, now I fully understand your approach. It looks really elegant and works for all indexes.
There is still an issue with IOS and seqscan.
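If I read the patch correctly, the fill loop is essentially the following (index_getnext_tid() is the real AM API; enqueue_and_prefetch() is just a hypothetical placeholder):

/* Fill the prefetch queue by advancing the index scan; only the heap
 * blocks of the returned TIDs are prefetched, index leaf pages are
 * read synchronously inside index_getnext_tid(). */
while (queue_len < prefetch_target)
{
    ItemPointer tid = index_getnext_tid(scan, ForwardScanDirection);

    if (tid == NULL)
        break;              /* end of scan */

    /* hypothetical helper: remember the TID and prefetch its heap block */
    enqueue_and_prefetch(ItemPointerGetBlockNumber(tid));
    queue_len++;
}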




Another challenge is how far we should prefetch (as far as I
understand, both your and our approaches use a dynamically extended
prefetch window).

By dynamic extension of prefetch window you mean the incremental growth
of the prefetch distance from 0 to effective_io_concurrency? 

Yes

I don't
think there's a better solution.

I tried one more solution: propagate information about the expected number of fetched rows to the AM. Based on this information it is possible to choose a proper prefetch distance.
Certainly it is not quite precise: we can scan a large number of rows but filter out only a few of them. This is why this approach was not committed in Neon.
But I still think that using statistics to determine the prefetch window is not such a bad idea. Maybe it just needs more thought.
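The distance computation itself is trivial; the interesting (and uncommitted) part is propagating the estimate down to the AM. Roughly:

/* Sketch: incremental growth of the prefetch distance, capped both by
 * effective_io_concurrency and by the number of rows the planner
 * expects the scan to fetch.  With heavy filtering plan_rows can be
 * badly off -- that is the known weakness of this approach. */
static int
next_prefetch_distance(int current, int effective_io_concurrency,
                       double plan_rows)
{
    int cap = effective_io_concurrency;
    int next;

    if (plan_rows < (double) cap)
        cap = (int) plan_rows;

    next = (current == 0) ? 1 : current * 2;    /* 0, 1, 2, 4, ... */
    return (next > cap) ? cap : next;
}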



There might be additional information that we could consider (e.g.
expected number of rows for the plan, earlier executions of the scan,
...) but each of these has a failure mode.

I wrote the reply above before reading the next fragment :)
So I have already tried it.

I haven't tried with pgvector, but I don't see why my patch would not
work for all index AMs that can return TID.


Yes, I agree. But it will be efficient only if getting the next TID is cheap, i.e. it is located on the same leaf page.



I have also tried to implement an alternative approach to prefetch, based
on access statistics.
It comes from the use case of a seqscan of a table with large toasted records,
where for each record we have to extract its TOAST data.
That is done using a standard index scan, but unfortunately index prefetch
doesn't help much here: there is usually just one TOAST segment, so
prefetch has no chance to do anything useful. But as long as the heap
records are accessed sequentially, there is a good chance that the TOAST table
will also be accessed mostly sequentially. So we can simply count the number
of sequential requests to each relation, and if the ratio of seq/rand
accesses is above some threshold, we can prefetch the next pages of this
relation (see the sketch below). This is a really universal approach, but ...
it works mostly for the TOAST table.
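In code the heuristic is very simple (the struct, the threshold and prefetch_block() are illustrative, not the actual Neon implementation):

#include <stdbool.h>

typedef unsigned int BlockNumber;

typedef struct RelAccessStats
{
    BlockNumber last_block;     /* previously requested block */
    long        seq_count;      /* accesses to last_block + 1 */
    long        rand_count;     /* all other accesses */
} RelAccessStats;

#define SEQ_RATIO_THRESHOLD 0.9

extern void prefetch_block(BlockNumber blkno);  /* hypothetical */

static void
note_access_and_maybe_prefetch(RelAccessStats *st, BlockNumber blkno,
                               int distance)
{
    double total;

    if (blkno == st->last_block + 1)
        st->seq_count++;
    else
        st->rand_count++;
    st->last_block = blkno;

    /* If the relation is read mostly sequentially (the TOAST case
     * above), prefetch the blocks that are likely to follow. */
    total = (double) (st->seq_count + st->rand_count);
    if (total > 0 && st->seq_count / total > SEQ_RATIO_THRESHOLD)
    {
        for (int i = 1; i <= distance; i++)
            prefetch_block(blkno + i);
    }
}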

Are you talking about what works / doesn't work in neon, or about
postgres in general?

I'm not sure what you mean by "one TOAST segment" and I'd also guess
that if both tables are accessed mostly sequentially, the read-ahead
will do most of the work (in postgres).

Yes, I agree: in the case of vanilla Postgres the OS will do read-ahead. But not in Neon.
By one TOAST segment I mean one TOAST record, i.e. about 2kb.


It's probably true that as we do a separate index scan for each TOAST-ed
value, that can't really ramp up the prefetch distance fast enough.
Maybe we could have a mode where we start with the full distance?

Sorry, I do not understand. Especially in this case a large prefetch window is undesired.
Most records fit in 2kb, so we need to fetch only one head (TOAST) record per TOAST index search.


This is exactly the difference. In Neon such an approach doesn't work.
Each backend maintains its own prefetch ring. And if a prefetched page
is not actually read, then the whole pipeline is lost.
I.e. a backend prefetched pages 1,5,10. Then it needs to read page 2. So it
has to consume the responses for 1,5,10 and issue another request for page 2.
Instead of improving speed we are just doing extra work.
So each backend should prefetch only those pages which it is actually
going to read (see the sketch below).
This is why the prefetch approach used in Postgres, for example for parallel
bitmap heap scan, doesn't work for Neon.
If you do posix_fadvise, then the prefetched page is placed in the OS cache and
can be used by any parallel worker.
But in Neon each parallel worker should be given its own range of pages
to scan, and prefetch only those pages.
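To illustrate the failure mode with a model (the request/response helpers are hypothetical; this is a model of the behaviour, not the actual Neon code):

typedef unsigned int BlockNumber;

#define RING_SIZE 32

typedef struct PrefetchRing
{
    BlockNumber pending[RING_SIZE];     /* outstanding requests, FIFO */
    int         head, tail;
} PrefetchRing;

extern void send_getpage_request(BlockNumber blkno);   /* hypothetical */
extern void recv_getpage_response(void);               /* hypothetical */

static void
issue_prefetch(PrefetchRing *ring, BlockNumber blkno)
{
    ring->pending[ring->tail] = blkno;
    ring->tail = (ring->tail + 1) % RING_SIZE;
    send_getpage_request(blkno);
}

static void
read_page(PrefetchRing *ring, BlockNumber blkno)
{
    /* Responses arrive strictly in request order on the per-backend
     * pipe, so a prefetch pays off only if this page is the next
     * expected response. */
    if (ring->head != ring->tail && ring->pending[ring->head] == blkno)
    {
        recv_getpage_response();
        ring->head = (ring->head + 1) % RING_SIZE;
        return;
    }

    /* Miss (the pages 1,5,10 vs page 2 example above): drain and throw
     * away every outstanding response, then pay a full round trip. */
    while (ring->head != ring->tail)
    {
        recv_getpage_response();
        ring->head = (ring->head + 1) % RING_SIZE;
    }
    send_getpage_request(blkno);
    recv_getpage_response();
}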

I still don't quite see/understand the difference. I mean, even in
postgres each backend does its own prefetches, using its own prefetch
ring. But I'm not entirely sure about the neon architecture differences.

I am not speaking about your approach. It will work with Neon as well.
I am describing why the implementation of prefetch for bitmap heap scan doesn't work for Neon:
it issues prefetch requests for pages which are never accessed by this parallel worker.

Does this mean neon can do prefetching from the executor in principle?

Could you perhaps describe a situation where the bitmap scan prefetching
(as implemented in Postgres) does not work for neon?


I am speaking about the prefetch implementation in nodeBitmapHeapScan. The prefetch iterator is not synced with the normal iterator, i.e. they can return different pages.
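Schematically, the pattern in the parallel case is (a simplified model of nodeBitmapHeapScan.c; the shared-iterator wrapper and the types are reduced to the bare minimum and are hypothetical):

typedef unsigned int BlockNumber;
typedef struct { BlockNumber blockno; } TBMIterateResult;   /* simplified */
typedef struct SharedTBMIterator SharedTBMIterator;         /* opaque, hypothetical */

extern TBMIterateResult *shared_iterate(SharedTBMIterator *it); /* hypothetical */
extern void prefetch_heap_page(BlockNumber blkno);              /* hypothetical */
extern void read_heap_page(BlockNumber blkno);                  /* hypothetical */

static void
worker_step(SharedTBMIterator *main_it, SharedTBMIterator *prefetch_it)
{
    /* The prefetch iterator runs ahead of the main iterator... */
    TBMIterateResult *p = shared_iterate(prefetch_it);
    TBMIterateResult *r;

    if (p)
        prefetch_heap_page(p->blockno);

    /* ...but both iterators are shared between workers, so the page a
     * worker actually reads comes from an independent draw: this worker
     * may prefetch a page that a different worker will read.  Harmless
     * with posix_fadvise into the OS cache, pure waste with a
     * per-backend prefetch pipe. */
    r = shared_iterate(main_it);
    if (r)
        read_heap_page(r->blockno);
}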


Well, maybe you could try rewriting it now, so that you can give
some feedback to the patch. I'd appreciate that.

I will try.


Best regards,
Konstantin
