How to avoid vacuuming a huge logging table

From: Mark Stosberg
Subject: How to avoid vacuuming a huge logging table
Date:
Msg-id: eri1gd$29bq$1@news.hub.org
Replies: Re: How to avoid vacuuming a huge logging table  ("Greg Sabino Mullane" <greg@turnstep.com>)
         Re: How to avoid vacuuming a huge logging table  ("Greg Sabino Mullane" <greg@turnstep.com>)
List: pgsql-performance
Our application has a table that is only logged to, and infrequently
used for reporting. There are generally no deletes or updates.

Recently, the sheer size (an estimated 36 million rows) caused a serious
problem because it prevented a "vacuum analyze" on the whole database
from finishing in a timely manner.
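
(For the curious: an estimate like that can be pulled from the planner's
statistics without a full count. The table name activity_log below is a
stand-in for our real one.)

    -- Planner's row estimate and on-disk size for the log table.
    SELECT reltuples::bigint AS estimated_rows,
           pg_size_pretty(pg_relation_size('activity_log')) AS table_size
    FROM pg_class
    WHERE relname = 'activity_log';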

As I understand it, a table with this usage pattern wouldn't need to be
vacuumed anyway.

I'm looking for general advice from people who have faced the same
issue, and I'm considering a number of alternatives:

1. Once a month, we could delete and archive old rows, for possible
re-import later if we need to report on them. It would seem this would
need to be done as proper insert statements for re-importing. (Maybe
there is a solution for that with table partitioning?) A rough sketch
follows after this list.

2. We could find a way to exclude the table from vacuuming, and let it
grow even larger. Putting the table in its own database would
accomplish that, but it would be nice to avoid the overhead of a second
database connection. (A per-table alternative is sketched below.)

3. Take a really different approach: log in CSV format to text files
instead, and only import the date ranges we need "on demand" if a report
is requested on the data. (See the import sketch below.)
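
For option 1, the monthly step might look something like the sketch
below. The names activity_log and logged_at and the archive path are
made up for illustration. Note that COPY writes files on the server
(psql's \copy is the client-side equivalent), and COPY from a SELECT
requires PostgreSQL 8.2:

    BEGIN;
    -- Archive everything older than the cutoff to CSV...
    COPY (SELECT * FROM activity_log WHERE logged_at < '2007-01-01')
        TO '/archive/activity_log_2006.csv' WITH CSV;
    -- ...then remove those rows from the live table.
    DELETE FROM activity_log WHERE logged_at < '2007-01-01';
    COMMIT;

    -- Later, if a report needs the archived range:
    -- COPY activity_log FROM '/archive/activity_log_2006.csv' WITH CSV;

COPYing the rows back in would stand in for the "proper insert
statements" mentioned above.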
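
For option 2, instead of a second database, per-table vacuums could be
scripted so the log table is simply skipped (again, activity_log is a
placeholder):

    -- Emit one VACUUM ANALYZE per user table, skipping the log table;
    -- pipe the output back through psql to run it.
    SELECT 'VACUUM ANALYZE ' || schemaname || '.' || relname || ';'
    FROM pg_stat_user_tables
    WHERE relname <> 'activity_log';

Something like "psql -At -f skip_log_vacuum.sql mydb | psql mydb" would
run the generated commands.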
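
And for option 3, pulling a requested date range back in from the text
logs could be as simple as the following (the column layout and file
path are invented for illustration):

    -- Scratch table matching the CSV layout; it vanishes at session end.
    CREATE TEMP TABLE log_import (
        logged_at  timestamptz,
        message    text
    );
    COPY log_import FROM '/logs/app-2007-02.csv' WITH CSV;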

Thanks for any tips.

    Mark
