work_mem / maintenance_work_mem maximums

From: Stephen Frost
Subject: work_mem / maintenance_work_mem maximums
Date:
Msg-id: 20100920165111.GP26232@tamriel.snowman.net
Responses: Re: work_mem / maintenance_work_mem maximums  (Bruce Momjian <bruce@momjian.us>)
List: pgsql-hackers
Greetings,
 After watching a database import go abysmally slow on a pretty beefy box with tons of RAM, I got annoyed and went to hunt down why in the world PG wasn't using but a bit of memory.  Turns out to be a well known and long-standing issue:
 
 http://www.mail-archive.com/pgsql-hackers@postgresql.org/msg101139.html
 Now, we could start by fixing guc.c to correctly have the max value for these be MaxAllocSize/1024, for starters, then at least our users would know when they set a higher value it's not going to be used.  That, in my mind, is a pretty clear bug fix.  Of course, that doesn't help us poor data-warehousing bastards with 64G+ machines.
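 (For illustration, a minimal sketch of what the guc.c side of that fix might look like; the WORK_MEM_MAX_KB name is invented here, and the exact shape of the ConfigureNamesInt[] entries varies between versions, so treat this as a sketch rather than a patch:)

    /* Sketch only: clamp the declared maximum for work_mem so that values
     * the palloc()-based sort/hash code can never actually use are rejected
     * at SET time instead of being silently accepted and ignored. */
    #include "utils/memutils.h"        /* for MaxAllocSize */

    #define WORK_MEM_MAX_KB  (MaxAllocSize / 1024)   /* hypothetical name; ~1 GB expressed in kB */

    /* ...and in guc.c's ConfigureNamesInt[] entries for work_mem and
     * maintenance_work_mem, the max field would become WORK_MEM_MAX_KB
     * instead of MAX_KILOBYTES. */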
 
 Sooo..  I don't know much about what the limit is or why it's there, but based on the comments, I'm wondering if we could just move the limit to a more 'sane' place than the-function-we-use-to-allocate.  If we need a hard limit due to TOAST, let's put it there, but I'm hopeful we could work out a way to get rid of this limit in repalloc and that we can let sorts and the like (uh, index creation) use what memory the user has decided it should be able to.
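 (For reference, the allocation ceiling being discussed is the one every palloc()/repalloc() request is checked against; in src/include/utils/memutils.h it looks roughly like this:)

    /* Per-allocation ceiling enforced by the memory-context code; requests
     * above it fail with "invalid memory alloc request size". */
    #define MaxAllocSize    ((Size) 0x3fffffff)   /* 1 gigabyte - 1 */

    #define AllocSizeIsValid(size)  ((Size) (size) <= MaxAllocSize)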
 
     Thanks,
    Stephen
