Re: [HACKERS] More detail on settings for pgavd?

From: Josh Berkus
Subject: Re: [HACKERS] More detail on settings for pgavd?
Msg-id: 200311200918.30838.josh@agliodbs.com
In reply to: More detail on settings for pgavd? (Josh Berkus <josh@agliodbs.com>)
Responses: Re: [HACKERS] More detail on settings for pgavd? (Chester Kustarz <chester@arbor.net>)
           Re: [HACKERS] More detail on settings for pgavd? ("Matthew T. O'Connor" <matthew@zeut.net>)
List: pgsql-performance
Matthew,

> For small tables, you don't need to vacuum too often.  In the testing I
> did, a small table of ~100 rows didn't really show significant performance
> degradation until it had close to 1000 updates.

This is accounted for by using the "threshold" value.   That way small tables
get vacuumed less often. However, the way large tables work is very different
and I think your strategy shows a lack of testing on large active tables.

> For large tables,
> vacuum is so expensive, that you don't want to do it very often, and
> scanning the whole table when there is only 5% wasted space is not very
> helpful.

5% is probably too low, you're right ... in my experience, performance
degradation starts to set in at 10-15% updates to, for example, a 1.1 million
row table, particularly since users tend to request the most recently updated
rows.   As long as we have the I/O issues that Background Writer and ARC are
intended to solve, though, I can see being less aggressive on the defaults;
perhaps 20% or 25%.   If you wait until 110% of a 1.1 million row table is
updated, though, that vacuum will take an hour or more.
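
To put numbers on that (a quick back-of-the-envelope sketch in Python; the
percentages are just the figures from the paragraph above):

    # Dead-tuple counts at which vacuum would trigger on a 1.1 million row
    # table, for the update percentages discussed above.
    rows = 1_100_000
    for pct in (0.10, 0.15, 0.20, 0.25, 1.10):
        print(f"{pct:>4.0%} of table -> vacuum after ~{int(rows * pct):>9,} updates")

    #  10% of table -> vacuum after ~  110,000 updates
    #  25% of table -> vacuum after ~  275,000 updates
    # 110% of table -> vacuum after ~1,210,000 updates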

Additionally, you are not thinking of this in terms of an overall database
maintenance strategy.   Lazy Vacuum needs to stay below the threshold of the
Free Space Map (max_fsm_pages) to prevent creeping bloat from setting in to
your databases.   With proper configuration of pg_avd, vacuum_mem and FSM
values, it should be possible to never run a VACUUM FULL again, and as of 7.4
never run a REINDEX again either.

But this means running vacuum frequently enough that your max_fsm_pages
threshold is never reached, which for a large database is going to have to
be more frequent than 110% updates, because setting max_fsm_pages to
20,000,000 will eat your RAM.
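
The arithmetic behind that "eat your RAM" comment, if I remember the 7.4
docs right (they list about 6 bytes of shared memory per max_fsm_pages
slot; treat that figure as an assumption and check your version):

    # Rough shared-memory cost of a very large Free Space Map.
    BYTES_PER_FSM_PAGE = 6          # assumed per-slot cost from the 7.4 docs
    max_fsm_pages = 20_000_000
    print(f"{max_fsm_pages * BYTES_PER_FSM_PAGE / 1024**2:.0f} MB")
    # -> ~114 MB of shared memory just for the FSM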

> Yes, I set the defaults a little high perhaps so as to err on the
> side of caution.  I didn't want people to say pg_autovacuum kills the
> performance of my server.  A small table will get vacuumed, just not
> until it has reached the threshold.  So a table with 100 rows will get
> vacuumed after 1200 updates / deletes.

Ok, I can see that for small tables.
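
Working backwards from Matthew's numbers, the trigger looks like
base_threshold + scale_factor * reltuples with defaults of 1000 and 2.
A small sketch (the formula and defaults are my inference from the
100-row example above, not gospel):

    def vacuum_trigger(reltuples, base_threshold=1000, scale_factor=2.0):
        # updates/deletes at which pg_autovacuum fires, per the formula above
        return base_threshold + scale_factor * reltuples

    print(vacuum_trigger(100))        # 1200.0 -- matches the 100-row example
    print(vacuum_trigger(1_100_000))  # 2201000.0 -- over 2 million updates
                                      # before a large table is ever vacuumed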

> In my testing it showed that
> there were no major performance problems until you reached several
> thousand updates / deletes.

Sure.  But several thousand updates can be only 2% of a very large table.

> HUH?  analyze is very very cheap compared to vacuum.  Why not do it more
> often?

Because nothing is cheap if it's not needed.

Analyze is needed only as often as the *aggregate distribution* of data in the
tables changes.   Depending on the application, this could be frequently, but
far more often (in my experience running multiple databases for several
clients) the data distribution of very large tables changes very slowly over
time.

One client's database, for example, which I have running VACUUM from cron
scripts, runs on this schedule for the main tables:
VACUUM only: twice per hour
VACUUM ANALYZE: twice per day
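
In crontab form, that schedule is something like the following (a
hypothetical sketch -- the database name, user, and exact minutes are
made up):

    # m   h     dom mon dow  command
    0,30  *     *   *   *    psql -U postgres -c "VACUUM" clientdb
    15    6,18  *   *   *    psql -U postgres -c "VACUUM ANALYZE" clientdb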

On the other hand, I've another client's database where most activity involves
updates to entire classes of records.   They run ANALYZE at the end of every
transaction.

So if you're going to have a separate ANALYZE schedule at all, it should be
slightly less frequent than VACUUM for large tables.   Either that, or drop
the idea, and simplify pg_avd by running VACUUM ANALYZE all the time instead
of having 2 separate schedules.

BUT .... now I see how you arrived at the logic you did.  If you're testing
only on small tables, and not vacuuming them until they reach 110% updates,
then you *would* need to analyze more frequently.     This is because of your
threshold value ... you'd want to analyze the small table as soon as even 30%
of its rows changed.

So the answer is to dramatically lower the threshold for the small tables.
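
The mismatch is easy to see if the default analyze trigger really is
base 500 plus a scale factor of 1 (again, my assumption about the
defaults):

    rows = 100
    default_trigger = 500 + 1.0 * rows   # 600 changes -- 600% of the table
    desired_trigger = 0.30 * rows        # ~30 changes, the 30% churn point
    print(default_trigger, desired_trigger)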

> What I think I am hearing is that people would like very much to be able
> to tweak the settings of pg_autovacuum for individual tables / databases
> etc.

Not from me you're not.   Though that would be nice, too.

So, my suggested defaults based on our conversation above:

Vacuum threshold: 1000 records
Vacuum scale factor:  0.2
Analyze threshold:  50 records
Analyze scale factor: 0.3
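
Run through the same base + scale_factor * reltuples formula, those
defaults behave sensibly at both ends of the size range:

    def trigger(reltuples, base, scale):
        return base + scale * reltuples

    for rows in (100, 1_100_000):
        v = trigger(rows, 1000, 0.2)   # suggested vacuum settings
        a = trigger(rows, 50, 0.3)     # suggested analyze settings
        print(f"{rows:>9,} rows: vacuum after {v:>9,.0f}, analyze after {a:>9,.0f}")

    #       100 rows: vacuum after     1,020, analyze after        80
    # 1,100,000 rows: vacuum after   221,000, analyze after   330,050

Small tables get analyzed long before they're vacuumed, and large tables
get analyzed slightly less often than they're vacuumed, which is the
behavior argued for above.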

--
Josh Berkus
Aglio Database Solutions
San Francisco
