Re: BUG #19363: PostgreSQL shared memory exhaustion during query execution involving views and parallel workers
| From | Tom Lane |
|---|---|
| Subject | Re: BUG #19363: PostgreSQL shared memory exhaustion during query execution involving views and parallel workers |
| Date | |
| Msg-id | 1712093.1766854937@sss.pgh.pa.us |
| In reply to | Re: BUG #19363: PostgreSQL shared memory exhaustion during query execution involving views and parallel workers (Richard Guo <guofenglinux@gmail.com>) |
| Responses | Re: BUG #19363: PostgreSQL shared memory exhaustion during query execution involving views and parallel workers |
| List | pgsql-bugs |
Richard Guo <guofenglinux@gmail.com> writes:
> The executor needs to build a hash table for 9.7 billion rows. That
> requires a lot of memory. In practice, the executor splits the work
> into 8192 batches, but that still results in about 1.2 million rows
> per batch. To manage that many rows, the executor allocates 4194304
> buckets. This means the executor needs to allocate 32 MB just for
> the bucket array, assuming you are on a 64-bit system. I guess your
> available shared memory was less than 32 MB at that moment.
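Those numbers check out arithmetically. Here is a minimal standalone C sketch of the back-of-the-envelope math (not PostgreSQL code; the constants are simply the figures quoted above):

```c
/* Back-of-the-envelope check of the numbers above; not PostgreSQL code. */
#include <stdio.h>

int main(void)
{
    long long rows     = 9700000000LL;   /* ~9.7 billion rows to hash */
    long long nbatch   = 8192;           /* batches chosen by the executor */
    long long nbuckets = 4194304;        /* buckets per batch (2^22) */

    /* Each bucket is one 8-byte pointer on a 64-bit system. */
    long long bucket_bytes = nbuckets * 8;

    printf("rows per batch: %lld\n", rows / nbatch);         /* ~1.18 million */
    printf("bucket array:   %lld MB\n", bucket_bytes >> 20); /* 32 MB */
    return 0;
}
```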
On my machine, the query does complete, but it's ridiculously slow:
about 15 sec to compute an empty result. AFAICT nearly all of that
is being spent in hash table setup.
What I'm wondering is why this plan was accepted at all. Does PHJ
not consider work_mem or a similar limit on hash table size?
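For comparison, the serial hash path does clamp the bucket count against the memory budget. Below is a loose sketch of that kind of clamp, modeled on (but not copied from) ExecChooseHashTableSize in src/backend/executor/nodeHash.c; the variable names and the 4MB budget are illustrative assumptions, not the actual implementation:

```c
/* Hedged sketch of bucket-count clamping against a memory budget,
 * loosely modeled on PostgreSQL's ExecChooseHashTableSize.
 * Names and constants here are illustrative only. */
#include <stdio.h>

int main(void)
{
    long long hash_table_bytes = 4LL * 1024 * 1024; /* e.g. work_mem = 4MB */
    long long nbuckets = 4194304;                   /* desired bucket count */

    /* Cap the bucket array so the pointers alone cannot exceed the budget. */
    long long max_pointers = hash_table_bytes / (long long) sizeof(void *);
    if (nbuckets > max_pointers)
        nbuckets = max_pointers;

    printf("clamped nbuckets: %lld\n", nbuckets); /* 524288 under a 4MB budget */
    return 0;
}
```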
regards, tom lane