Re: Massive table (500M rows) update nightmare

From: Ludwik Dylag
Subject: Re: Massive table (500M rows) update nightmare
Msg-id: 2fe468a21001070838n40f1cbb5y19b34c5c4748a62d@mail.gmail.com
In reply to: Re: Massive table (500M rows) update nightmare (Leo Mannhart <leo.mannhart@beecom.ch>)
List: pgsql-performance
I would suggest:
1. turn off autovacuum
1a. optionally tune the db for better performance for this kind of operation (can't help you there)
2. restart database
3. drop all indexes
4. update
5. vacuum full table
6. create indexes
7. turn on autovacuum
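
A minimal sketch of those steps as SQL/psql commands, assuming a hypothetical table big_table with a single index big_table_col_idx and a placeholder SET clause; adjust the names, the UPDATE itself, and any tuning to your own schema and configuration:

    -- 1./2. in postgresql.conf set: autovacuum = off, then restart the server
    --       (e.g. pg_ctl restart -D $PGDATA)

    -- 3. drop the indexes so the update does not have to maintain them
    DROP INDEX big_table_col_idx;

    -- 4. run the whole update in one go
    UPDATE big_table SET some_col = some_col * 2;   -- placeholder update

    -- 5. compact the table and reclaim the dead tuples the update left behind
    VACUUM FULL big_table;

    -- 6. rebuild the indexes
    CREATE INDEX big_table_col_idx ON big_table (some_col);

    -- 7. set autovacuum = on again in postgresql.conf and restart/reload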

Ludwik


2010/1/7 Leo Mannhart <leo.mannhart@beecom.ch>
Kevin Grittner wrote:
> Leo Mannhart <leo.mannhart@beecom.ch> wrote:
>
>> You could also try to just update the whole table in one go; it is
>> probably faster than you expect.
>
> That would, of course, bloat the table and indexes horribly.  One
> advantage of the incremental approach is that there is a chance for
> autovacuum or scheduled vacuums to make space available for re-use
> by subsequent updates.
>
> -Kevin
>

ouch...
thanks for correcting this.
... and forgive an old man coming from Oracle ;)

Leo
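
For what it's worth, a rough sketch of the incremental approach Kevin describes, assuming a hypothetical integer primary key id on big_table; the batch boundaries and the SET clause are placeholders. Committing between batches gives autovacuum (or a manual VACUUM) a chance to mark the dead tuples reusable before the next batch writes:

    -- batch 1
    UPDATE big_table SET some_col = some_col * 2
     WHERE id BETWEEN 1 AND 1000000;
    -- COMMIT here, optionally VACUUM big_table, then continue

    -- batch 2
    UPDATE big_table SET some_col = some_col * 2
     WHERE id BETWEEN 1000001 AND 2000000;
    -- ... and so on until the whole table is covered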




--
Ludwik Dyląg
