Re: Deleting millions of rows

From Tom Lane
Subject Re: Deleting millions of rows
Date
Msg-id 2239.1233608323@sss.pgh.pa.us
In reply to Re: Deleting millions of rows  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: Deleting millions of rows  (Robert Haas <robertmhaas@gmail.com>)
Re: Deleting millions of rows  (Andrew Lazarus <andrew@pillette.com>)
List pgsql-performance
Robert Haas <robertmhaas@gmail.com> writes:
> On Mon, Feb 2, 2009 at 3:01 PM, Brian Cox <brian.cox@ca.com> wrote:
>>> How much memory do you have in your machine? What is work_mem set to?
>>
>> 32G; work_mem=64M

> Hmm.  Well then I'm not sure why you're running out of memory,

It's the pending trigger list.  He's got two trigger events per row,
which at 40 bytes apiece would approach 4GB of memory.  Apparently
it's a 32-bit build of Postgres, so he's running out of process address
space.

There's a TODO item to spill that list to disk when it gets too large,
but the reason nobody's done it yet is that actually executing that many
FK check trigger events would take longer than you want to wait anyway.

TRUNCATE is the best solution if you want to get rid of the whole table
contents.  If you're deleting very many but not all rows, people tend
to drop the FK constraints and re-establish them afterwards.  Retail
checking is just too slow.
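For the drop-and-recreate approach, a minimal sketch of the pattern (table, column, and constraint names here are hypothetical; check your actual constraint name with \d in psql):

```sql
-- Assumes child.parent_id REFERENCES parent(id), constraint named child_parent_id_fkey.
BEGIN;

ALTER TABLE child DROP CONSTRAINT child_parent_id_fkey;

-- Delete the child rows first so re-adding the FK finds no orphans,
-- then the parent rows; no per-row FK triggers fire for either delete.
DELETE FROM child  WHERE parent_id IN (SELECT id FROM parent WHERE created < '2008-01-01');
DELETE FROM parent WHERE created < '2008-01-01';

-- Re-adding the constraint validates all remaining rows in one pass,
-- rather than queuing one trigger event per deleted row.
ALTER TABLE child
    ADD CONSTRAINT child_parent_id_fkey
    FOREIGN KEY (parent_id) REFERENCES parent (id);

COMMIT;
```

If the whole table is going away, `TRUNCATE parent, child;` avoids the issue entirely, since TRUNCATE fires no per-row triggers.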

            regards, tom lane
