Re: Delete query takes exorbitant amount of time

From: Simon Riggs
Subject: Re: Delete query takes exorbitant amount of time
Date:
Msg-id 1112109602.11750.972.camel@localhost.localdomain
In reply to: Re: Delete query takes exorbitant amount of time  (Stephan Szabo <sszabo@megazone.bigpanda.com>)
List: pgsql-performance
On Tue, 2005-03-29 at 05:50 -0800, Stephan Szabo wrote:
> On Tue, 29 Mar 2005, Simon Riggs wrote:
>
> > On Tue, 2005-03-29 at 01:48 -0700, Karim A Nassar wrote:
> > > > Each value has 1/13th of the table, which is too many rows per value to
> > > > make an IndexScan an efficient way of deleting rows from the table.
> > >
> > > But, the original question was that the delete that was taking a long time
> > > was on a different table. I tried to delete 150 rows from a table with 750
> > > rows, which is FK referenced from this large table. If I understand
> > > correctly, Tom suggested that the length of time was due to a sequential
> > > scan being done on the large table for each value being deleted from the
> > > small one.
> >
> > > For this FK check, there only need be one referring id to invalidate the
> > > delete. ISTM that for any delete with a FK reference, the index could
> > > always be used to search for a single value in the referring table
> > > (excepting very small tables). Why then must a sequential scan be
> > > performed in this case, and/or in general?
> >
> > My understanding was that you were doing a DELETE on the smaller table
> > and that this was doing a DELETE on the measurement table because you
> > had the FK defined as ON DELETE CASCADE. You are right - only a single
> > row is sufficient to RESTRICT the DELETE. But for an ON UPDATE/DELETE
> > action of CASCADE then you will want to touch all rows referenced, so a
> > SeqScan is a perfectly valid consequence of such actions.
> > I think now that you are using the default action, rather than
> > specifically requesting CASCADE?
> >
> > Stephan, Tom:
> > The SQL generated for RI checking by the RI triggers currently applies a
> > limit at execution time, not at prepare time. i.e. there is no LIMIT
> > clause in the SQL.
> >
> > We know whether the check will be limit 1 or limit 0 at prepare time, so
> > why not add a LIMIT clause to the SQL so it changes the plan, not just
> > the number of rows returned when the check query executes?
>
> Because IIRC, FOR UPDATE and LIMIT at least historically didn't play
> nicely together, so you could sometimes get a result where if the first
> row was locked, the FOR UPDATE would wait on it, but if it was deleted by
> the other transaction you could get 0 rows back in the trigger.
>
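
To make the quoted scenario concrete, a minimal sketch follows. Only the
"measurement" table name comes from the thread; every other name is a
hypothetical stand-in, and the check query shown is only an approximation
of what the RI triggers generate.

-- Hypothetical schema: a small referenced table and a large referencing one.
CREATE TABLE sensor_type (
    id integer PRIMARY KEY
);

CREATE TABLE measurement (
    id        serial PRIMARY KEY,
    sensor_id integer NOT NULL REFERENCES sensor_type (id),  -- default action: NO ACTION
    value     numeric
);

-- For each row deleted from sensor_type, the RI trigger runs a check against
-- measurement roughly of this shape:
--     SELECT 1 FROM ONLY measurement x WHERE sensor_id = $1 FOR UPDATE OF x;
-- With no index on the referencing column, every such check is a sequential
-- scan of the large measurement table.

-- An index on the referencing column lets the check use an index scan instead:
CREATE INDEX measurement_sensor_id_idx ON measurement (sensor_id);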

Well, sorry to ask more...

...but surely we only need the FOR UPDATE clause if we are performing a
CASCADE action, whereas we only want the LIMIT 1 clause if we are NOT
performing a CASCADE action? That way the two clauses are mutually
exclusive, and the problem you outline should never (need to) occur.

The current code doesn't seem to vary the check query according to the
requested FK action...
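
For illustration, a sketch (reusing the hypothetical names from the schema
above) of the two mutually exclusive forms being suggested; the real RI
check queries are built internally by the RI triggers and may differ in
detail.

-- NO ACTION / RESTRICT: one referencing row is enough to decide the outcome,
-- so the check could carry LIMIT 1 and drop FOR UPDATE:
PREPARE ri_check_noaction (integer) AS
    SELECT 1 FROM ONLY measurement x WHERE sensor_id = $1 LIMIT 1;

-- CASCADE: every referencing row must be touched anyway, so the locking form
-- without any LIMIT would be kept for that case:
PREPARE ri_check_cascade (integer) AS
    SELECT 1 FROM ONLY measurement x WHERE sensor_id = $1 FOR UPDATE OF x;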

Best Regards, Simon Riggs

