Re: How to best use 32 15k.7 300GB drives?

From	Віталій Тимчишин
Subject	Re: How to best use 32 15k.7 300GB drives?
Date
Msg-id	AANLkTikWy+U5qK88EhefVo1c-jRTe4gs-01X528XtFKn@mail.gmail.com
In reply to	Re: How to best use 32 15k.7 300GB drives?  (Scott Carey <scott@richrelevance.com>)
Responses	Re: How to best use 32 15k.7 300GB drives?
List	pgsql-performance


2011/1/28 Scott Carey <scott@richrelevance.com>


On 1/28/11 9:28 AM, "Stephen Frost" <sfrost@snowman.net> wrote:

>* Scott Marlowe (scott.marlowe@gmail.com) wrote:
>> There's nothing wrong with whole table updates as part of an import
>> process, you just have to know to "clean up" after you're done, and
>> regular vacuum can't fix this issue, only vacuum full or reindex or
>> cluster.
>
>Just to share my experiences- I've found that creating a new table and
>inserting into it is actually faster than doing full-table updates, if
>that's an option for you.

I wonder if postgres could automatically optimize that: if it thought it was
going to update more than X% of a table, and HOT was not going to help, it
could just create a new table file for XIDs equal to or higher than the one
making the change, and leave the old one for old XIDs. Then regular VACUUM
could toss out the old one once no transactions could see it any more.


I was wondering whether a table file could be deleted once it contains not a single live row, and whether vacuum could do this. In that case, vacuuming a table that was recently fully updated could be almost as good as CLUSTER: any scan would skip such removed files very quickly, and almost no disk space would be wasted.
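For reference, the create-and-insert approach Stephen describes upthread can be sketched roughly like this (a minimal illustration only; the table name, columns, and the transformation are all made up, and any indexes, constraints, and privileges on the original table would have to be recreated by hand):

```sql
-- Instead of rewriting every row in place, which leaves a dead tuple
-- behind for each updated row:
--   UPDATE big_table SET amount = amount * 1.1;

-- Build a fresh, bloat-free table from the transformed rows,
-- then swap it in under the old name.
BEGIN;
CREATE TABLE big_table_new AS
    SELECT id, amount * 1.1 AS amount
    FROM big_table;
-- Recreate indexes, constraints, and grants on big_table_new here.
DROP TABLE big_table;
ALTER TABLE big_table_new RENAME TO big_table;
COMMIT;
```

The swap holds an exclusive lock only briefly at the end, and the new table starts out with no dead tuples at all, which is why this can beat a full-table UPDATE followed by VACUUM FULL or CLUSTER.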

--
Best regards,
 Vitalii Tymchyshyn

In the pgsql-performance list, by date sent:

Previous
From: yazan suleiman
Date:
Message: Re: postgres 9 query performance
Next
From: Robert Haas
Date:
Message: Re: pgbench - tps for Postgresql-9.0.2 is more than tps for Postgresql-8.4.1