Re: Very long deletion time on a 200 GB database

From: David Kerr
Subject: Re: Very long deletion time on a 200 GB database
Date:
Msg-id: 4F4D0980.9010104@mr-paradox.net
In reply to: Re: Very long deletion time on a 200 GB database  ("Reuven M. Lerner" <reuven@lerner.co.il>)
Responses: Re: Very long deletion time on a 200 GB database  (Gavin Flower <GavinFlower@archidevsys.co.nz>)
List: pgsql-performance
On 02/27/2012 12:08 AM, Reuven M. Lerner wrote:
> Hi, everyone. I wanted to thank you again for your help on the huge
> delete problem that I was experiencing.
>
> After a lot of trial and error, we finally came to the conclusion that
> deleting this much data in the time frame that they need, on
> underpowered hardware that is shared with an application, with each test
> iteration taking 5-9 hours to run (but needing to run in 2-3), is just
> not going to happen. We tried many of the options that people helpfully
> suggested here, but none of them gave us the performance that we needed.
>
> (One of the developers kept asking me how it can possibly take so long
> to delete 200 GB, when he can delete files of that size in much less
> time. I had to explain to him that deleting rows from a database is a
> far more complicated task, and can't really be compared to deleting a
> few files.)
>
> In the end, it was agreed that we could execute the deletes over time,
> deleting items in the background, or in parallel with the application's
> work. After all, if the disk is filling up at the rate of 2 GB/day, then
> so long as we delete 4 GB/day (which is pretty easy to do), we should be
> fine. Adding RAM or another disk are simply out of the question, which
> is really a shame for a database of this size.
>

Howdy,

I'm coming a little late to the thread, but I didn't see anyone propose
some tricks I've used in the past to overcome the slow delete problem.

First, if you can drop your FKs, delete, and then re-create your FKs,
you'll find that you can delete an amazing amount of data very quickly.
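For instance, a rough sketch of that pattern; the table, column, and
constraint names below are all made up, and the exact FK definition will
depend on your schema:

    -- Hypothetical schema: "orders" has an FK pointing at "customers",
    -- and we want to bulk-delete from "customers".
    BEGIN;
    ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;
    DELETE FROM customers WHERE expired;   -- whatever your criteria are
    ALTER TABLE orders
        ADD CONSTRAINT orders_customer_id_fkey
        FOREIGN KEY (customer_id) REFERENCES customers (id);
    COMMIT;

The win is that the delete no longer fires a per-row check against the
referencing table; re-adding the constraint validates everything in a
single pass instead.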

Second, if you can't do that, you can try a function that loops and
deletes a small batch at a time; this gets around the problem of
deleting more data than you can fit into memory. It's still slow, just
not as slow.
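Something like this, say (a minimal sketch in PL/pgSQL; the table name,
the "expired" flag, and the batch size are all assumptions you'd replace
with your own logic):

    CREATE OR REPLACE FUNCTION delete_in_batches(batch_size integer)
    RETURNS bigint AS $$
    DECLARE
        total   bigint := 0;  -- running count of rows deleted
        deleted bigint;
    BEGIN
        LOOP
            -- Delete one small batch of matching rows.
            DELETE FROM big_table
            WHERE id IN (SELECT id FROM big_table
                         WHERE expired
                         LIMIT batch_size);
            GET DIAGNOSTICS deleted = ROW_COUNT;
            EXIT WHEN deleted = 0;  -- nothing left to delete
            total := total + deleted;
        END LOOP;
        RETURN total;
    END;
    $$ LANGUAGE plpgsql;

If you drive the same loop from an external script and commit between
batches instead, vacuum gets a chance to reclaim the dead rows as you
go.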

Third, don't delete. Instead:

    CREATE TABLE new_table AS
        SELECT * FROM old_table
        WHERE <records are not the ones you want to delete>;
    DROP TABLE old_table;
    ALTER TABLE new_table RENAME TO old_table;
    -- then re-create indexes and constraints

Fourth, I think some folks mentioned this, but just for completeness:
partition the table, and make sure your partition key is such that you
can simply drop an entire partition.
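A minimal sketch of what that can look like with inheritance-based
partitioning (all the names here are made up for illustration):

    -- Parent table plus one child table per month.
    CREATE TABLE measurements (
        logdate date NOT NULL,
        payload text
    );

    CREATE TABLE measurements_2012_01 (
        CHECK (logdate >= DATE '2012-01-01'
           AND logdate <  DATE '2012-02-01')
    ) INHERITS (measurements);

    -- Retiring a month of data is then a cheap catalog operation,
    -- no matter how much data the partition holds:
    DROP TABLE measurements_2012_01;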

Hope that helps and wasn't redundant.

Dave
