Re: Why do we let autovacuum give up?

From: Robert Haas
Subject: Re: Why do we let autovacuum give up?
Date:
Msg-id: CA+TgmobDhARHasd5yWebe+VYEf2sSbJZaHocjdoGR+Brs7EGrQ@mail.gmail.com
In reply to: Re: Why do we let autovacuum give up?  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Thu, Jan 23, 2014 at 7:45 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Andres Freund <andres@2ndquadrant.com> writes:
>> On 2014-01-23 19:29:23 -0500, Tom Lane wrote:
>>> I concur with the other reports that the main problem in this test case is
>>> just that the default cost delay settings throttle autovacuum so hard that
>>> it has no chance of keeping up.  If I reduce autovacuum_vacuum_cost_delay
>>> from the default 20ms to 2ms, it seems to keep up quite nicely, on my
>>> machine anyway.  Probably other combinations of changes would do it too.
>
>>> Perhaps we need to back off the default cost delay settings a bit?
>>> We've certainly heard more than enough reports of table bloat in
>>> heavily-updated tables.  A system that wasn't hitting the updates as hard
>>> as it could might not need this, but on the other hand it probably
>>> wouldn't miss the I/O cycles from a more aggressive autovacuum, either.
>
>> Yes, I think adjusting the default makes sense, most setups that have
>> enough activity that costing plays a role have to greatly increase the
>> values. I'd rather increase the cost limit than reduce cost delay so
>> drastically though, but that's admittedly just gut feeling.
>
> Well, I didn't experiment with intermediate values, I was just trying
> to test the theory that autovac could keep up given less-extreme
> throttling.  I'm not taking any position on just where we need to set
> the values, only that what we've got is probably too extreme.
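
For reference, the less-extreme throttling described above amounts to a
one-line postgresql.conf change; the 2ms figure is just the value from
Tom's experiment, not a tuned recommendation:

    # postgresql.conf -- the setting Tom reduced in his test
    autovacuum_vacuum_cost_delay = 2ms   # default: 20ms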

So, Greg Smith proposed what I think is a very useful methodology for
assessing settings in this area: figure out what it works out to in
MB/s.  If we assume we're going to read and dirty every page we
vacuum (a cost of vacuum_cost_page_miss = 10 plus vacuum_cost_page_dirty
= 20 per page, against autovacuum_vacuum_cost_limit = 200), and that the
work itself takes negligible time so that the rate is dominated by the
sleeps, the default settings work out to 200/(10 + 20) = ~6.67 pages
every 20ms, or 2.67MB/s.  Obviously, the rate will
be 3x higher if the pages don't need to be dirtied, and higher still
if they're all in cache, but considering the way the visibility map
works, it seems like a good bet that we WILL need to dirty most of the
pages that we look at - either they've got dead tuples and need
clean-up, or they don't and need to be marked all-visible.
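
To make the arithmetic concrete, here's a quick back-of-the-envelope
sketch in Python; the cost parameters are the defaults cited above, and
the 8kB block size is the standard build default, so this is just an
illustration of the calculation, not a measurement:

    # Estimated autovacuum throughput under default cost-based throttling,
    # assuming every page is both read from disk and dirtied.
    cost_limit = 200        # autovacuum_vacuum_cost_limit (default)
    cost_delay = 0.020      # autovacuum_vacuum_cost_delay, in seconds (20ms)
    page_miss = 10          # vacuum_cost_page_miss (default)
    page_dirty = 20         # vacuum_cost_page_dirty (default)
    page_kb = 8             # standard 8kB block size

    pages_per_sleep = cost_limit / (page_miss + page_dirty)  # ~6.67 pages
    kb_per_sec = pages_per_sleep * page_kb / cost_delay      # ~2667 kB/s
    print("%.2f MB/s" % (kb_per_sec / 1000))                 # -> 2.67 MB/s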

A corollary of this is that if you're dirtying heap pages faster than
a few megabytes per second, autovacuum, at least with default
settings, is not going to keep up.  And if you assume that each write
transaction dirties at least one heap page, any volume of write
transactions in excess of a few hundred per second will meet that
criterion.  And that is really not that much; a single core can do over
1000 tps with synchronous_commit=off, or if there's a BBWC that can
absorb it.
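
Continuing the sketch above, the break-even write rate under that
one-dirty-page-per-transaction assumption works out to:

    # Write-transaction rate at which default-settings autovacuum
    # (~2.67 MB/s, from above) can no longer keep up, assuming each
    # write transaction dirties at least one 8kB heap page.
    vacuum_kb_per_sec = 2.67 * 1000
    page_kb = 8
    print("~%d write transactions/sec" % (vacuum_kb_per_sec / page_kb))  # ~333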

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


