Re: BUG #14237: Terrible performance after accidentally running 'drop index' for index still being created
| From | Tom Lane |
|---|---|
| Subject | Re: BUG #14237: Terrible performance after accidentally running 'drop index' for index still being created |
| Date | |
| Msg-id | 24589.1468256920@sss.pgh.pa.us |
| In reply to | Re: BUG #14237: Terrible performance after accidentally running 'drop index' for index still being created (David Waller <dwaller@yammer-inc.com>) |
| Responses | Re: BUG #14237: Terrible performance after accidentally running 'drop index' for index still being created |
| List | pgsql-bugs |
David Waller <dwaller@yammer-inc.com> writes:
> Thank you for the detailed explanation. This all seems very sensible, and
> reasonable behaviour from Postgres. Yet... it still 'allowed' me to shoot myself
> painfully in the foot. User error, I agree, yet people make mistakes - could
> Postgres behave more gracefully?
Well, there are always tradeoffs. You could choose to run with a
non-infinite setting of lock_timeout, which would have caused the DROP to
fail after waiting a second or two (or whatever you set the timeout to
be). That would move the denial of service over to the problematic DDL,
which might be a good tradeoff for your environment. But not everybody is
going to think that query failure is a "more graceful" solution.
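As a sketch of the suggestion above, `lock_timeout` can be set for just the session issuing the DDL, so only that statement fails rather than piling up behind long-running work (the index name here is hypothetical):

```sql
-- Give up on the DROP if its lock isn't granted within 2 seconds,
-- instead of blocking all later queries on the table behind it.
SET lock_timeout = '2s';
DROP INDEX some_index;   -- raises "canceling statement due to lock timeout" if blocked
RESET lock_timeout;
```

Using `SET LOCAL lock_timeout = '2s'` inside a transaction limits the setting to that transaction only.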
> For example, would it be at all feasible for Postgres to handle DDL statements
> differently from regular requests? In this example it was pointless for DROP
> INDEX to take any locks while there was already another DDL statement (CREATE
> INDEX) running. Could it have been added to a queue of DDL statements against
> that table and not attempted to take a lock until CREATE INDEX completed and
> DROP INDEX then reached the head of the queue?
This is handwaving: the DROP already was in a lock queue. I really doubt
there are any easy fixes that won't create as many problems as they solve.
regards, tom lane