Re: Add estimated hit ratio to Memoize in EXPLAIN to explain cost adjustment
From | Andrei Lepikhov
Subject | Re: Add estimated hit ratio to Memoize in EXPLAIN to explain cost adjustment
Date |
Msg-id | 44229379-3901-4cb0-8812-b354ef70d5e2@gmail.com
In reply to | Re: Add estimated hit ratio to Memoize in EXPLAIN to explain cost adjustment (David Rowley <dgrowleyml@gmail.com>)
Responses | Re: Add estimated hit ratio to Memoize in EXPLAIN to explain cost adjustment
List | pgsql-hackers
On 20/3/2025 11:37, David Rowley wrote:
> I'm also slightly concerned about making struct Memoize bigger. I had
> issues with a performance regression [1] for 908a96861 when increasing
> the WindowAgg struct size last year and the only way I found to make
> it go away was to shuffle the fields around so that the struct size
> didn't increase. I think we'll need to see a benchmark of a query that
> hits Memoize quite hard with a small cache size to see if the
> performance decreases as a result of adding the ndistinct field. It's
> unfortunate that we'll not have the luxury of squeezing this double
> into padding if we do see a slowdown.

I quite frequently need the number of distinct values (or groups) predicted during the creation of the Memoize node to understand why caching is or is not employed. But I have been thinking about an alternative: now that EXPLAIN is extensible (thanks to Robert), we could save optimisation-stage data (I have the same need for IncrementalSort, for example) and attach it to the Plan node on demand. So the direction I want to go is a Plan::extlist field plus a create_plan hook, which would allow copying best_path data into the final plan; a rough sketch of the idea follows below. That way we add the new information without touching the core code. Still, I would give +1 to the current approach if it can be done in a shorter time.

-- 
regards, Andrei Lepikhov
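P.S. A rough, purely hypothetical C sketch of the Plan::extlist / create_plan-hook idea mentioned above. Neither create_plan_extension_hook nor a Plan->extlist field exists in PostgreSQL core today; both are assumptions used only for illustration. MemoizePath and its est_entries field do exist, but the ndistinct estimate discussed in this thread would presumably first have to be stored in MemoizePath, since today it is only computed inside the costing code (cost_memoize_rescan()).

```c
/*
 * Hypothetical sketch: create_plan_extension_hook and Plan->extlist do not
 * exist in core; they stand in for "copy optimisation-stage estimates from
 * the chosen Path into the final Plan so an EXPLAIN extension can print
 * them without enlarging struct Memoize".
 */
#include "postgres.h"

#include "nodes/pathnodes.h"
#include "nodes/plannodes.h"
#include "nodes/value.h"

/* Hypothetical hook type, called after create_plan() builds each Plan node */
typedef void (*create_plan_extension_hook_type) (PlannerInfo *root,
												 Path *best_path,
												 Plan *plan);
extern create_plan_extension_hook_type create_plan_extension_hook;

static void
memoize_plan_extension(PlannerInfo *root, Path *best_path, Plan *plan)
{
	if (IsA(best_path, MemoizePath))
	{
		MemoizePath *mpath = (MemoizePath *) best_path;

		/*
		 * Stash a planner-time estimate (est_entries is used here only
		 * because it already exists in MemoizePath) on the hypothetical
		 * Plan->extlist for a later EXPLAIN extension to report.
		 */
		plan->extlist = lappend(plan->extlist,
								makeInteger((int) mpath->est_entries));
	}
}
```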