Re: Deleting millions of rows

From: Brian Cox
Subject: Re: Deleting millions of rows
Date:
Msg-id: 49876D28.5080403@ca.com
In reply to: Deleting millions of rows  (Brian Cox <brian.cox@ca.com>)
Responses: Re: Deleting millions of rows  (Scott Marlowe <scott.marlowe@gmail.com>)
List: pgsql-performance
Tom Lane [tgl@sss.pgh.pa.us] wrote:
> It's the pending trigger list.  He's got two trigger events per row,
> which at 40 bytes apiece would approach 4GB of memory.  Apparently
> it's a 32-bit build of Postgres, so he's running out of process address
> space.
Yes, this is a 32-bit Postgres build running on 32-bit Linux. I assume
the two trigger events per row come from the two "on delete cascade"
FKs. Thanks for explaining this bit of a mystery.
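For the archives, here's a minimal sketch of the shape of schema that
produces two pending trigger events per deleted row (table and column
names are hypothetical, not our actual schema):

    CREATE TABLE parent (
        id bigint PRIMARY KEY
    );

    -- Each ON DELETE CASCADE FK queues one AFTER-trigger event per
    -- deleted parent row; two FKs means two events per row, all held
    -- in backend memory until the delete finishes.
    CREATE TABLE child_a (
        parent_id bigint REFERENCES parent (id) ON DELETE CASCADE
    );
    CREATE TABLE child_b (
        parent_id bigint REFERENCES parent (id) ON DELETE CASCADE
    );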

> TRUNCATE is the best solution if you want to get rid of the whole table
> contents.  If you're deleting very many but not all rows, people tend
> to drop the FK constraints and re-establish them afterwards.  Retail
> checking is just too slow.
Thanks also to you (and several others) for reminding me of TRUNCATE.
This will definitely work for what I was trying to do: reset this table
for more testing.
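Concretely, using the hypothetical tables from the sketch above, the
test reset becomes a single statement (TRUNCATE fires no per-row
triggers; referencing tables must be truncated along with the parent):

    TRUNCATE TABLE parent, child_a, child_b;

And for a partial delete, the drop-and-re-establish pattern Tom
describes would look roughly like this (constraint name assumes
PostgreSQL's default naming; the WHERE clause is illustrative):

    ALTER TABLE child_a DROP CONSTRAINT child_a_parent_id_fkey;

    -- Bulk delete with no per-row FK trigger events queued.
    DELETE FROM parent WHERE id < 1000000;

    -- Remove the now-orphaned child rows ourselves, since the
    -- cascade no longer does it for us.
    DELETE FROM child_a a
    WHERE NOT EXISTS (SELECT 1 FROM parent p WHERE p.id = a.parent_id);

    -- Re-adding the constraint validates the remaining rows.
    ALTER TABLE child_a ADD CONSTRAINT child_a_parent_id_fkey
        FOREIGN KEY (parent_id) REFERENCES parent (id) ON DELETE CASCADE;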

In production, the table on which I ran DELETE FROM grows constantly,
with old data removed periodically in batches (up to a few hundred
thousand rows, out of several million, at a time). I'm assuming that
auto-vacuum/analyze will let Postgres maintain reasonable performance
for INSERTs and SELECTs on it; do you think that's a reasonable
assumption?
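In case it helps anyone searching the archives later: on newer
releases the autovacuum thresholds can be tuned per table, so the
batch deletes get cleaned up sooner. The values below are illustrative
guesses, not tested settings:

    -- Vacuum/analyze after ~5% of rows change instead of the default
    -- 20%/10%, so periodic bulk deletes don't leave dead rows and
    -- stale statistics sitting around between autovacuum passes.
    ALTER TABLE parent SET (
        autovacuum_vacuum_scale_factor = 0.05,
        autovacuum_analyze_scale_factor = 0.05
    );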

Thanks,
Brian

