Do we need vacuuming when tables are regularly dropped?

From: Peter Kovacs
Subject: Do we need vacuuming when tables are regularly dropped?
Date:
Msg-id: b6e8f2e80809290237p7685dbcfm9d5c886ba225b956@mail.gmail.com
Replies: Re: Do we need vacuuming when tables are regularly dropped?  ("Peter Kovacs" <maxottovonstirlitz@gmail.com>)
Re: Do we need vacuuming when tables are regularly dropped?  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-admin
Hi,

We have a number of automated performance tests (to test our own code)
involving PostgreSQL. Test cases are supposed to drop and recreate
tables each time they run.

The problem is that some of the tests show a linear performance
degradation over time. (We have data going back three months.) We
have established that some element(s) of our test
environment must be the culprit for the degradation. As rebooting the
test machine didn't revert speeds to baselines recorded three months
ago, we have turned our attention to the database as the only element
of the environment which is persistent across reboots. Recreating the
entire PGSQL cluster did cause speeds to revert to baselines.

I understand that vacuuming solves performance problems related to
"holes" in data files created as a result of tables being updated. Do
I understand correctly that if tables are dropped and recreated at the
beginning of each test case, holes in data files are reclaimed, so
there is no need for vacuuming from a performance perspective?
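For reference, this is the pattern our test cases are supposed to follow (sketch; the table name is hypothetical):

```sql
-- Hypothetical test fixture. Unlike DELETE or UPDATE, which leave dead
-- tuples behind for VACUUM to reclaim, DROP TABLE removes the table's
-- data files outright, so no "holes" should accumulate across runs.
CREATE TABLE test_scratch (id int, payload text);
-- ... populate the table and run the test case ...
DROP TABLE test_scratch;
```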

I will double check whether the problematic test cases do indeed
always drop their tables, but assuming they do, are there any factors
in the database (apart from table updates) that can cause a linear
slow-down with repetitive tasks?
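In case it helps narrow things down: as I understand it, every DROP/CREATE cycle also inserts and deletes rows in the system catalogs themselves, so one thing I plan to watch across test runs is whether the catalogs grow even though the user tables are gone. A sketch of the query:

```sql
-- Sizes of catalog tables touched by every DROP/CREATE cycle.
-- If repeated DDL is the culprit, these should grow steadily across
-- runs unless they are vacuumed.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_type', 'pg_index')
ORDER BY pg_total_relation_size(oid) DESC;
```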

Thanks
Peter
