Re: Very long deletion time on a 200 GB database

From: Gavin Flower
Subject: Re: Very long deletion time on a 200 GB database
Date:
Msg-id: 4F4DB831.90906@archidevsys.co.nz
In response to: Re: Very long deletion time on a 200 GB database (David Kerr <dmk@mr-paradox.net>)
List: pgsql-performance
On 29/02/12 06:06, David Kerr wrote:
> On 02/27/2012 12:08 AM, Reuven M. Lerner wrote:
>> Hi, everyone. I wanted to thank you again for your help on the huge
>> delete problem that I was experiencing.
>>
>> After a lot of trial and error, we finally came to the conclusion that
>> deleting this much data in the time frame that they need, on
>> underpowered hardware that is shared with an application, with each test
>> iteration taking 5-9 hours to run (but needing to run in 2-3), is just
>> not going to happen. We tried many of the options that people helpfully
>> suggested here, but none of them gave us the performance that we needed.
>>
>> (One of the developers kept asking me how it can possibly take so long
>> to delete 200 GB, when he can delete files of that size in much less
>> time. I had to explain to him that deleting rows from a database is a
>> far more complicated task and can't really be compared to deleting a
>> few files.)
>>
>> In the end, it was agreed that we could execute the deletes over time,
>> deleting items in the background, or in parallel with the application's
>> work. After all, if the disk is filling up at the rate of 2 GB/day, then
>> so long as we delete 4 GB/day (which is pretty easy to do), we should be
>> fine. Adding RAM or another disk is simply out of the question, which
>> is really a shame for a database of this size.
>>
>
> Howdy,
>
> I'm coming a little late to the thread, but I didn't see anyone propose
> some tricks I've used in the past to overcome the slow delete problem.
>
> First - if you can drop your FKs, delete, then re-create your FKs, you'll
> find that you can delete an amazing amount of data very quickly.
>
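A rough, untested sketch of that first trick, with made-up table and
constraint names ("parent", "child", "child_parent_id_fkey"):

    BEGIN;
    -- drop the FK that makes every deleted "parent" row trigger a lookup
    ALTER TABLE child DROP CONSTRAINT child_parent_id_fkey;

    -- remove the referencing rows first so the FK re-validates cleanly
    DELETE FROM child
     WHERE parent_id IN (SELECT id FROM parent
                          WHERE created_at < now() - interval '90 days');
    DELETE FROM parent
     WHERE created_at < now() - interval '90 days';

    -- re-adding the constraint re-checks the whole child table once
    ALTER TABLE child
      ADD CONSTRAINT child_parent_id_fkey
      FOREIGN KEY (parent_id) REFERENCES parent (id);
    COMMIT;
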
> second - if you can't do that - you can try a function that loops and
> deletes a small amount at a time; this gets around the problem of
> deleting more data than you can fit into memory. It's still slow, but
> just not as slow.
>
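Something along these lines for the looping delete - untested, and the
table ("big_table"), key ("id"), and cutoff condition are placeholders:

    CREATE OR REPLACE FUNCTION delete_in_batches(batch_size integer)
    RETURNS bigint AS $$
    DECLARE
        batch integer;
        total bigint := 0;
    BEGIN
        LOOP
            -- delete at most batch_size matching rows per iteration
            DELETE FROM big_table
             WHERE id IN (SELECT id
                            FROM big_table
                           WHERE expired_at < now() - interval '30 days'
                           LIMIT batch_size);
            GET DIAGNOSTICS batch = ROW_COUNT;
            total := total + batch;
            EXIT WHEN batch = 0;
        END LOOP;
        RETURN total;
    END;
    $$ LANGUAGE plpgsql;

    -- e.g. SELECT delete_in_batches(10000);

One caveat: the whole function still runs inside a single transaction, so
if you want each batch committed separately (letting vacuum reclaim space
as you go), drive the same loop from a client-side script instead.
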
> third - don't delete, instead,
> create new_table as select * from old_table where <records are not the
> ones you want to delete>
> rename new_table to old_table;
> create indexes and constraints
> drop old_table;
>
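Roughly, for the copy-and-swap approach (untested; the table name, column
names, and the "keep" condition are made up). Note the old table has to be
renamed out of the way before the new one can take its name:

    BEGIN;
    CREATE TABLE old_table_new AS
        SELECT *
          FROM old_table
         WHERE expired_at >= now() - interval '30 days';   -- rows to keep

    ALTER TABLE old_table     RENAME TO old_table_to_drop;
    ALTER TABLE old_table_new RENAME TO old_table;

    -- re-create indexes, constraints, defaults, grants, etc. here
    CREATE INDEX old_table_expired_at_idx ON old_table (expired_at);
    COMMIT;

    DROP TABLE old_table_to_drop;
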
> fourth - I think some folks mentioned this, but just for completeness,
> partition the table and make sure that your partition key is such that
> you can just drop an entire partition.
>
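For the partitioning route, a bare-bones untested sketch using the
inheritance-style partitioning available in the PostgreSQL versions of this
era (a made-up "log" table partitioned by a "logdate" column):

    -- one child table per month, constrained to its date range
    CREATE TABLE log_2012_01 (
        CHECK (logdate >= DATE '2012-01-01' AND logdate < DATE '2012-02-01')
    ) INHERITS (log);
    -- inserts are routed to the right child by a trigger or the application

    -- purging a month of data is then just a quick metadata operation:
    DROP TABLE log_2012_01;
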
> Hope that helps and wasn't redundant.
>
> Dave
>
Hi,

I think your first and third points are very obvious - but only after I
had read them!  :-)

Your fourth point is not bad either!

Brilliant simplicity; I hope I can remember them if I run into a similar
situation.


Thanks,
Gavin



