Re: Deleting millions of rows

From: Dave Dutcher
Subject: Re: Deleting millions of rows
Date:
Msg-id: 2A08C6D6D6D547DC89EDCED8E1636BBE@tridecap.com
In reply to: Deleting millions of rows  (Brian Cox <brian.cox@ca.com>)
List: pgsql-performance
> -----Original Message-----
> From: Brian Cox
> Subject: [PERFORM] Deleting millions of rows
>
> I'm using 8.3.5. Table ts_defects has 48M rows. Through psql:
> delete from ts_defects;
> Result: out of memory/Can't allocate size: 32 I then did 10
> or so deletes to get rid of the rows. Afterwards, inserts
> into or queries on this table performed significantly slower.
> I tried a vacuum analyze, but this didn't help. To fix this,
> I dumped and restored the database.
>
> 1) why can't postgres delete all rows in a table if it has
> millions of rows?
> 2) is there any other way to restore performance other than
> restoring the database?
>
> Thanks,
> Brian

If you are deleting every row in a table, then the TRUNCATE command is the
way to go.  TRUNCATE is very fast and leaves no dead rows behind.  The
problem with a normal DELETE is that the rows are not actually removed from
the file on disk.  Once the table is vacuumed, the dead space is marked as
available for reuse, but a plain VACUUM doesn't return the space to the
operating system.  A VACUUM FULL or CLUSTER will actually reclaim the dead
space, but they can take a while to run on a large table.  (I've heard
CLUSTER is supposed to be faster than VACUUM FULL.)  Another option, if you
only want to delete most of the rows, is to create a new table with the same
definition as the old one, select the rows you want to keep into it, drop
the old table, and then rename the new table to the old table's name.
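As a rough sketch of both approaches (the WHERE condition and column name below are made up for illustration; indexes, constraints, and permissions would need to be recreated on the new table):

```sql
-- Fast path: remove every row; near-instant and leaves no dead tuples.
TRUNCATE TABLE ts_defects;

-- Copy-and-swap path: keep only the rows you want, then replace the table.
-- 'status' is a hypothetical column; adjust the filter to your schema.
BEGIN;
CREATE TABLE ts_defects_new AS
    SELECT * FROM ts_defects WHERE status <> 'closed';
DROP TABLE ts_defects;
ALTER TABLE ts_defects_new RENAME TO ts_defects;
COMMIT;
```

Note that both TRUNCATE and DROP TABLE take an exclusive lock, so this is best done during a quiet period.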


Dave

