Re: Prevent concurrent DROP SCHEMA when certain objects are being initially created in the namespace

From: Andres Freund
Subject: Re: Prevent concurrent DROP SCHEMA when certain objects are being initially created in the namespace
Date:
Msg-id F0AA0013-6869-40EF-857E-11276C3FDF69@anarazel.de
In response to: Re: Prevent concurrent DROP SCHEMA when certain objects are being initially created in the namespace  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Prevent concurrent DROP SCHEMA when certain objects are being initially created in the namespace  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers

On September 4, 2018 9:11:25 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>Michael Paquier <michael@paquier.xyz> writes:
>> On Tue, Sep 04, 2018 at 03:09:21PM -0700, Jimmy Yih wrote:
>>> When an empty namespace is being initially populated with certain
>>> objects, it is possible for a DROP SCHEMA operation to come in and
>>> delete the namespace without using CASCADE.
>
>> It seems to me that we are missing some dependency tracking in some
>> of those cases.
>
>No, I think Jimmy is right: it's a race condition.  The pg_depend entry
>would produce the right result, except that it's not committed yet so
>the DROP SCHEMA doesn't see it.
>
>The bigger question is whether we want to do anything about this.
>Historically we've not bothered with locking on database objects that
>don't represent storage (ie, relations and databases).  If we're going
>to take this seriously, then we should for example also acquire lock on
>any function that's referenced in a view definition, to ensure it
>doesn't go away before the view is committed and its dependencies
>become visible.  Likewise for operators, opclasses, collations, text
>search objects, you name it.  And worse, we'd really need this sort of
>locking even in vanilla DML queries, since objects could easily go away
>before the query is done.
>
>I think that line of thought leads to an enormous increase in locking
>overhead, for which we'd get little if any gain in usability.  So my
>inclination is to make an engineering judgment that we won't fix this.

Haven't we already significantly started down this road, to avoid a lot of the "tuple concurrently updated" type
errors? Would expanding this a bit further really be that noticeable?
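
For reference, a minimal two-session sketch of the race being discussed (the schema and function names are illustrative, not taken from the original report, and it assumes a function is one of the affected object types):

    -- setup, committed beforehand
    CREATE SCHEMA racetest;

    -- Session 1: start populating the schema, but don't commit yet
    BEGIN;
    CREATE FUNCTION racetest.f() RETURNS int LANGUAGE sql AS 'SELECT 1';

    -- Session 2: the pg_depend entry from session 1 is not yet committed,
    -- so the dependency scan sees an empty schema and the DROP succeeds
    -- without CASCADE
    DROP SCHEMA racetest;

    -- Session 1: commits fine, leaving a pg_proc row whose pronamespace
    -- points at a schema that no longer exists
    COMMIT;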

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

