poor VACUUM performance on large tables

From: Jan Peterson
Subject: poor VACUUM performance on large tables
Msg-id: 72e966b00509032316166ff0cf@mail.gmail.com
Replies: Re: poor VACUUM performance on large tables
         Re: poor VACUUM performance on large tables
List: pgsql-performance
Hello,

We have been experiencing poor performance of VACUUM in our production
database.  Relevant details of our implementation are as follows:

1.  We have a database that grows to about 100GB.
2.  The database is a mixture of large and small tables.
3.  Bulk data (stored primarily in pg_largeobject, but also in various
TOAST tables) comprises about 45% of our data.
4.  Some of our small tables are very active, with several hundred
updates per hour.
5.  We have a "rolling delete" function that purges older data on a
periodic basis to keep our maximum database size at or near 100GB (a
rough sketch of this is included below).
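
For context, the rolling delete is conceptually similar to the sketch
below; the table and column names are purely illustrative, not our
actual schema.

    -- Illustrative only: purge rows older than the retention window and
    -- unlink the large objects they reference.
    BEGIN;

    SELECT lo_unlink(blob_oid)
      FROM measurements
     WHERE recorded_at < now() - interval '30 days';

    DELETE FROM measurements
     WHERE recorded_at < now() - interval '30 days';

    COMMIT;
    -- The space is only reclaimed once pg_largeobject and the table are
    -- subsequently VACUUMed, which is where our trouble starts.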

Everything works great until our rolling delete kicks in.  Of course,
we are doing periodic VACUUMS on all tables, with frequent VACUUMs on
the more active tables.  The problem arises when we start deleting the
bulk data and have to VACUUM pg_largeobject and our other larger
tables.  We have seen VACUUM run for several hours (even tens of
hours).  During this VACUUM process, our smaller tables accumulate
dead rows at a very rapid rate (we assume because of the transactional
nature of the VACUUM).  Statistics are also skewed during this
process, and we have observed the planner choosing sequential scans on
tables where it is obvious that an index scan would be more efficient.
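
(For what it's worth, re-running ANALYZE on the busy small tables while
the long VACUUM is in progress keeps their planner statistics reasonably
current, though it does nothing for the dead rows themselves.  Table
names below are illustrative.)

    -- Cheap compared to VACUUM; this refreshes only the statistics the
    -- planner uses, so it can be run repeatedly while the long VACUUM
    -- grinds on.  It does not reclaim any dead rows.
    ANALYZE active_sessions;
    ANALYZE job_queue;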

We're looking for ways to improve the performance of VACUUM.  We are
already experimenting with Hannu Krosing's patch for VACUUM, but it's
not really helping: we are still faced with doing a database-wide
VACUUM about once every three weeks or so as we approach the
transaction ID wraparound point, and that VACUUM has been measured at
28 hours in an active environment.
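
For reference, the distance to wraparound can be checked with a query
along these lines (age(datfrozenxid) on recent PostgreSQL releases; the
hard limit is roughly two billion transactions):

    -- How many transactions each database is away from the wraparound
    -- horizon; the larger the age, the closer the forced database-wide
    -- VACUUM.
    SELECT datname, age(datfrozenxid) AS xid_age
      FROM pg_database
     ORDER BY xid_age DESC;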

Another thing we're trying is partitioning tables: rotating the table
that updates go to and using a view to combine the sub-tables for
querying (sketched below).  Unfortunately, we are unable to partition the
pg_largeobject table, and that table alone can take up 40+% of our
database storage.  We're also looking at somehow storing our large
objects externally (as files in the local file system) and
implementing a mechanism similar to Oracle's bfile functionality.  Of
course, we can't afford to give up the transactional security of being
able to roll back if a particular update doesn't succeed.
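
Roughly, the rotation scheme looks like the following; the table and
column names are illustrative only.

    -- Writes go to the newest sub-table; older sub-tables become
    -- read-only and can eventually be dropped wholesale instead of
    -- being DELETEd from and then VACUUMed.
    CREATE TABLE events_2005_08 (id integer, recorded_at timestamp, payload text);
    CREATE TABLE events_2005_09 (id integer, recorded_at timestamp, payload text);

    -- A view glues the sub-tables back together for querying.
    CREATE VIEW events AS
        SELECT * FROM events_2005_08
        UNION ALL
        SELECT * FROM events_2005_09;

    -- Retiring old data is then a cheap DROP TABLE (plus recreating the
    -- view) rather than a huge DELETE followed by a long VACUUM.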

Does anyone have any suggestions to offer on good ways to proceed
given our constraints?  Thanks in advance for any help you can
provide.

        -jan-
--
Jan L. Peterson
<jan.l.peterson@gmail.com>
