Re: Optimising Foreign Key checks

From: Greg Stark
Subject: Re: Optimising Foreign Key checks
Date:
Msg-id: CAM-w4HPsgKt9eUag8yB44sPnSwS2rUtyN_pF-212e5QxP-2j0A@mail.gmail.com
In reply to: Optimising Foreign Key checks (Simon Riggs <simon@2ndQuadrant.com>)
Responses: Re: Optimising Foreign Key checks (Hannu Krosing <hannu@2ndQuadrant.com>)
List: pgsql-hackers
On Sat, Jun 1, 2013 at 9:41 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
> COMMIT;
> The inserts into order_line repeatedly execute checks against the same
> ordid. Deferring and then de-duplicating the checks would optimise the
> transaction.
>
> Proposal: De-duplicate multiple checks against same value. This would
> be implemented by keeping a hash of rows that we had already either
> inserted and/or locked as the transaction progresses, so we can use
> the hash to avoid queuing up after triggers.


FWIW, the reason we don't do that now is that the rows might later be
deleted within the same transaction (or even within the same statement,
I think). If they are, the trigger needs to be skipped for that row but
still needs to fire for the other rows, so you need some kind of
book-keeping to track which is which. The easiest approach was just to
do the check independently for each row. I think there's a comment
about this in the code.
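
[Concretely, reusing the assumed schema sketched above, the case that
makes naive per-value de-duplication unsafe looks something like:

    BEGIN;
    -- No parent row with ordid = 2 exists.
    INSERT INTO order_line VALUES (2, 1);  -- queues a check for ordid 2
    INSERT INTO order_line VALUES (2, 2);  -- dedup would queue nothing new
    DELETE FROM order_line WHERE ordid = 2 AND lineid = 1;
    COMMIT;
    -- Today the check queued for the deleted row is skipped, the check
    -- queued for (2, 2) fires, and the COMMIT fails as it should.  If
    -- both rows shared a single check tied to the first row, skipping
    -- it would let the orphaned (2, 2) slip through.
]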

I think you're right that this should be optimized: in the vast
majority of cases you don't end up deleting rows, and we're currently
doing lots of redundant checks. But you need to make sure you don't
break the unusual case entirely.

-- 
greg


