[GENERAL] Re: How does Postgres estimate the memory needed for sorting/aggregating
From | Thomas Kellerer |
---|---|
Subject | [GENERAL] Re: How does Postgres estimate the memory needed for sorting/aggregating |
Date | |
Msg-id | o6b757$5q5$1@blaine.gmane.org |
In reply to | Re: [GENERAL] How does Postgres estimate the memory needed for sorting/aggregating (Tomas Vondra <tomas.vondra@2ndquadrant.com>) |
List | pgsql-general |
Tomas Vondra wrote on 25.01.2017 at 22:46:
>> I guess this is based on the column statistics stored in pg_stats, but I
>> am not sure:
>
> It is based on the average length of values in that column, yes.

Thanks for confirming that.
I assume this is taken from pg_stats.avg_width ?

> I'm not sure what you mean by 'dynamically resize'. The above
> decision is pretty much how the planner decides whether to use hash
> aggregate or group aggregate. If we estimate that the hash aggregate
> will fit into work_mem, the planner will consider both possibilities.
> If the estimate says the hash aggregate would not fit into work_mem,
> we'll only consider group aggregate, because that can work with very
> little memory.
>
> At execution time we'll only use as much memory as actually needed.
> The trouble is that if we under-estimated the amount of memory,
> there's no way back.

The "under-estimation" is what I am referring to with "dynamically resize".
What happens if the planner assumes 100 kB but in reality it needs 100 MB?

Thomas
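For anyone wanting to check the width estimate on their own tables: the per-column average the planner works from is exposed in the pg_stats view. A minimal sketch, assuming a hypothetical `orders` table with a `customer_id` column:

```sql
-- Sketch: inspect the per-column average width the planner uses.
-- 'orders' and 'customer_id' are made-up names for illustration.
SELECT tablename, attname, avg_width, n_distinct
FROM pg_stats
WHERE tablename = 'orders'
  AND attname = 'customer_id';
```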
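The work_mem-driven choice Tomas describes can be observed directly by varying work_mem and re-planning the same GROUP BY. A sketch on a made-up test table; the exact tipping point depends on the statistics and the Postgres version:

```sql
-- Sketch: watch the planner switch between HashAggregate and
-- GroupAggregate as work_mem changes. Table contents are made up.
CREATE TABLE orders AS
SELECT (random() * 100000)::int AS customer_id,
       random() * 100           AS amount
FROM generate_series(1, 1000000);
ANALYZE orders;

SET work_mem = '64kB';   -- estimated hash table exceeds work_mem
EXPLAIN SELECT customer_id, count(*) FROM orders GROUP BY customer_id;
-- expect a GroupAggregate over sorted input

SET work_mem = '256MB';  -- estimate fits comfortably in work_mem
EXPLAIN SELECT customer_id, count(*) FROM orders GROUP BY customer_id;
-- expect a HashAggregate
```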