Re: random observations while testing with a 1,8B row table

From: Tom Lane
Subject: Re: random observations while testing with a 1,8B row table
Date:
Msg-id: 8043.1142020450@sss.pgh.pa.us
In reply to: Re: random observations while testing with a 1,8B row table  (Stefan Kaltenbrunner <stefan@kaltenbrunner.cc>)
Responses: Re: random observations while testing with a 1,8B row table  (Steve Atkins <steve@blighty.com>)
Re: random observations while testing with a 1,8B row table  (Stefan Kaltenbrunner <stefan@kaltenbrunner.cc>)
List: pgsql-hackers
Stefan Kaltenbrunner <stefan@kaltenbrunner.cc> writes:
>>> 3. vacuuming this table - it turned out that VACUUM FULL is completely
>>> unusable on a table of this size (which I actually expected beforehand),
>>> not only due to the locking involved but rather due to a gigantic memory
>>> requirement and unbelievable slowness.

> sure, that was mostly meant as an experiment; if I had to do this on a
> production database I would most likely use CLUSTER to get the desired
> effect (which in my case was purely getting back the disk space wasted by
> dead tuples)
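
[Editor's note: a minimal sketch of that approach, not part of the original
message. The table and index names (bigtable, bigtable_pkey) are hypothetical,
and CLUSTER syntax differs across PostgreSQL versions. CLUSTER rewrites the
heap in index order into a new file, so the bloated old file is dropped and
its disk space returned to the operating system.]

    -- Rewrite the table in index order; the space held by dead tuples
    -- is reclaimed when the old heap file is dropped.
    CLUSTER bigtable USING bigtable_pkey;
    ANALYZE bigtable;   -- refresh planner statistics after the rewrite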

Yeah, the VACUUM FULL algorithm is really designed for situations where
just a fraction of the rows have to be moved to re-compact the table.
It might be interesting to teach it to abandon that plan and go to a
CLUSTER-like table rewrite once the percentage of dead space is seen to
reach some suitable level.  CLUSTER has its own disadvantages though
(2X peak disk space usage, doesn't work on core catalogs, etc.).
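
[Editor's note: a rough, hypothetical illustration of such a dead-space
check, not taken from the original message. It relies on the n_live_tup and
n_dead_tup columns of pg_stat_user_tables, which assume a reasonably recent
PostgreSQL release, and statistics-collector counts are only estimates.]

    -- Estimate the dead-tuple fraction per table from the statistics views.
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           round(100.0 * n_dead_tup
                 / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct
    FROM pg_stat_user_tables
    ORDER BY dead_pct DESC NULLS LAST;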
        regards, tom lane

