Re: BUG #15225: [XX000] ERROR: invalid DSA memory alloc request size 1073741824 / Where: parallel worker

From: Thomas Munro
Subject: Re: BUG #15225: [XX000] ERROR: invalid DSA memory alloc request size 1073741824 / Where: parallel worker
Date:
Msg-id: CAEepm=08PiGBm2NjhugAd-wfPk=z7OMVNtGyS2ghbOYQZORxAg@mail.gmail.com
In response to: Re: BUG #15225: [XX000] ERROR: invalid DSA memory alloc request size 1073741824 / Where: parallel worker  (Frits Jalvingh <jal@etc.to>)
Responses: Re: BUG #15225: [XX000] ERROR: invalid DSA memory alloc request size 1073741824 / Where: parallel worker  (Thomas Munro <thomas.munro@enterprisedb.com>)
List: pgsql-bugs
On Thu, Jun 7, 2018 at 4:19 AM, Frits Jalvingh <jal@etc.to> wrote:
>> work_mem has always been like that.
> I know. My observation is that this behavior hurts more on newer
> Postgres versions: because of the increased parallelism (and the fact
> that every node apparently gets the OK to grab work_mem whenever it
> sees fit), newer versions make much worse use of memory than older
> ones, because you have to decrease the parameter. That 2GB value I had
> worked fine on 10 and helped a lot with speeding up my workload. Now,
> for the same workload, I have to set it to 512MB, so all queries that
> do just one sort are slower, and memory is used less well. It means
> that, all in all, the system might perform worse despite parallelism,
> because you have to prevent queries from aborting.

It's a problem all right, and many people think we should address it.
It's not exactly obvious how, though...  Here's a recent thread on the
topic:

https://www.postgresql.org/message-id/flat/CAH2-WzmNwV%3DLfDRXPsmCqgmm91mp%3D2b4FvXNF%3DcCvMrb8YFLfQ%40mail.gmail.com
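
To make the multiplication concrete (illustrative numbers of mine, not
from that thread): work_mem is a per-node, per-process budget, so a
plan with 4 sort/hash nodes running on a leader plus 4 parallel workers
is licensed to use up to 4 x 5 x 2GB = 40GB with your old setting.
That's why a value that behaved fine with less parallelism can suddenly
need lowering.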

> ERROR: invalid DSA memory alloc request size 1073741824

The problem is that I failed to constrain nbuckets to fit in
MaxAllocSize when increasing it due to the load factor, and for now
Parallel Hash doesn't use DSA_ALLOC_HUGE, so any attempt to allocate
1GB or more is assumed to be crazy and rejected by dsa.c, just like in
the equivalent non-parallel code.  (The 1073741824 in the error message
is exactly 1GB, one byte over that limit.)  Here is a proposed fix that
constrains it.
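
To show the shape of the constraint (a simplified, self-contained
sketch, not the attached patch; MAX_ALLOC_SIZE and NTUP_PER_BUCKET play
the same roles as MaxAllocSize in memutils.h and NTUP_PER_BUCKET in
nodeHash.c, everything else is made up for illustration):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Stand-ins for the real constants: MaxAllocSize is 0x3fffffff (one
 * byte under 1GB) in memutils.h, NTUP_PER_BUCKET is the target load
 * factor, and dsa_pointer_atomic is 8 bytes on a typical 64-bit build.
 */
#define MAX_ALLOC_SIZE ((size_t) 0x3fffffff)
#define NTUP_PER_BUCKET 1
#define BUCKET_WIDTH sizeof(uint64_t)

/*
 * Choose a power-of-two bucket count for ntuples, clamped so that the
 * bucket array can never exceed MAX_ALLOC_SIZE.  The clamp is the part
 * that was missing when the load factor forced the bucket array to
 * grow.
 */
static size_t
choose_nbuckets(double ntuples)
{
    double dbuckets = ceil(ntuples / NTUP_PER_BUCKET);
    size_t max_buckets = MAX_ALLOC_SIZE / BUCKET_WIDTH;
    size_t nbuckets = 1;

    if (dbuckets > (double) max_buckets)
        dbuckets = (double) max_buckets;

    /* Round down to a power of two so the clamp is never exceeded. */
    while (nbuckets * 2 <= (size_t) dbuckets)
        nbuckets *= 2;
    return nbuckets;
}

int
main(void)
{
    /* Enough tuples that the unclamped request would have hit 1GB+. */
    printf("%zu buckets\n", choose_nbuckets(300000000.0));  /* 67108864 */
    return 0;
}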

Longer term, I don't see why we should limit large memory systems to
under 1GB of hash buckets, but that doesn't seem like a change that
belongs in a pgsql-bugs thread.  I'll put it on a list of hash join
ideas/improvements for discussion on the pgsql-hackers list.  In your
case I doubt the limit really hurts, since work_mem was clearly set too
high for the system, and "64 million buckets ought to be enough" for a
smaller setting on your machine.  Clearly a 1TB box could make good use
of more buckets than that, though.
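
For anyone wondering where "64 million" comes from: at most
MaxAllocSize / sizeof(dsa_pointer_atomic) = 1073741823 / 8 = 134217727
bucket slots fit under the limit, and the largest power-of-two bucket
count below that is 2^26 = 67108864, i.e. the ~64 million buckets the
sketch above prints.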

-- 
Thomas Munro
http://www.enterprisedb.com

Attachments
