Re: Autovacuum breakage from a734fd5d1

From Robert Haas
Subject Re: Autovacuum breakage from a734fd5d1
Date
Msg-id CA+TgmoYO5ZmXhtMHJFZx85gMrZVEQ0SkYASjUvtygd8XdRpk+A@mail.gmail.com
In reply to Re: Autovacuum breakage from a734fd5d1  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Mon, Nov 28, 2016 at 12:18 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> I don't believe we should be so scared of the possibility of a serious
>> bug that can't be found through any of the ways we normally test that
>> we aren't willing to fix problems we can readily foresee.  I grant
>> that there are some situations where fixing a problem might involve
>> enough risk that we shouldn't attempt it, but this is (or was) pretty
>> straightforward code patterned after existing logic, and I really see
>> no reason to believe that anything that was wrong with it couldn't
>> have been debugged easily enough.
>
> I'm astonished that you think that.  A problem here would be almost
> impossible to diagnose/reproduce, I should think, given that it would
> require at least two different failures (first a backend not cleaning up
> after itself, and then something going wrong in autovac's drop attempt).
> If you had reason to believe there was something broken there, you could
> certainly hack the system enough to force it through that code sequence,
> but that's basically not something that would ever happen in routine
> testing.  So my judgment is that the odds of bugs being introduced here
> and then making it to the field outweighs the potential benefit over the
> long run.  We have enough hard-to-test code already, we do not need to add
> more for hypothetical corner cases.

The easiest way to test this would be to just hack the system catalogs
to be invalid in some way.  I think that frying relnatts or
relpersistence would cause a relcache build failure, which is
basically the kind of thing I'm worried about here.  I've seen plenty
of cases where a system was basically working and the user was
basically happy despite some minor catalog corruption ... until that
"minor" catalog corruption broke an entire subsystem.

A good example is relnamespace.  We've still not eliminated all of the
cases where a DROP SCHEMA concurrent with a CREATE <something> can
result in an object that lives in a nonexistent schema.  So you end up
with this orphaned object that nobody really cares about (after all,
they dropped the schema on purpose) but it doesn't really matter
because everything still runs just fine.  And then, as recently
happened with an actual EnterpriseDB customer, somebody tries to run
pg_dump.  As it turns out, pg_dump fails outright in this situation.
And now suddenly the customer is calling support.  The bad catalog
entries themselves aren't really an issue, but when some other system
like pg_dump or autovacuum chokes on them and *fails completely*
instead of *failing only on the problematic objects* it amplifies the
problem from something that affects only an object that nobody really
cares about into a problem that has a major impact on the whole
system.
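
For anyone who wants to check whether they're sitting on that kind of time
bomb, a rough sketch of a query that shows relations whose relnamespace no
longer points at a live schema:

    -- Relations orphaned by a DROP SCHEMA racing against CREATE:
    -- their relnamespace no longer matches any pg_namespace row.
    SELECT c.oid, c.relname, c.relnamespace
    FROM pg_class c
    LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.oid IS NULL;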

> I did think of another argument on your side of this, which is that
> if you imagine that there's a persistent failure to drop some temp table,
> that would effectively disable autovacuum in that database.  Which
> would be bad.  But we could address that at very minimal risk just by
> moving the drop loop to after the vacuuming loop.  I note that the case
> I'm afraid of, a bug in the error-catching logic, could also lead to
> autovacuum becoming entirely disabled if we keep the drops first.

I agree that moving the DROP loop after the other loop has some
appeal, but I see a couple of problems.  One is that, by itself, it
doesn't prevent the cascading-failure problem I mentioned above.  If
the system is close to wraparound and the pg_class scan finds the temp
table that is holding back xmin after the table that can't be dropped
because the catalog is corrupt, then you're back in the situation
where a busted table keeps the system from doing the right thing on an
un-busted table.  The second is that dropping a table doesn't call
vac_update_datfrozenxid(); if we drop a table before running vacuum
operations, the results of the drop will be reflected in any
subsequent datfrozenxid update that may occur.  If we drop it
afterwards, it won't.
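
To make the first problem concrete, the usual culprit is an orphaned temp
table sitting at the front of the datfrozenxid horizon; a sketch of a query
that lists temp tables by relfrozenxid age, oldest first:

    -- Temp tables ordered by how far back they hold the freeze horizon.
    SELECT c.oid, c.relname, n.nspname, age(c.relfrozenxid) AS xid_age
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relpersistence = 't' AND c.relkind = 'r'
    ORDER BY age(c.relfrozenxid) DESC;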

Perhaps neither of those things is totally horrible, but they're not
especially good, either.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


