Re: Deleting millions of rows

From: Robert Haas
Subject: Re: Deleting millions of rows
Date:
Msg-id: 603c8f070902031608l4e934f37rb031401811c8f02a@mail.gmail.com
In response to: Re: Deleting millions of rows  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Deleting millions of rows  (Gregory Stark <stark@enterprisedb.com>)
List: pgsql-performance
On Tue, Feb 3, 2009 at 4:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
>>> Robert Haas wrote:
>>> Have you ever given any thought to whether it would be possible to
>>> implement referential integrity constraints with statement-level
>>> triggers instead of row-level triggers?
>
>> Well, one reason we haven't discussed this is because our per-statement
>> triggers are too primitive yet -- we don't have access to the list of
>> acted-upon tuples.  As soon as we have that we can start discussing this
>> optimization.
>
> I think the point is that at some number of tuples it's better to forget
> about per-row tests at all, and instead perform the same whole-table
> join that would be used to validate the FK from scratch.  The mechanism
> we lack is not one to pass the row list to a statement trigger, but one
> to smoothly segue from growing a list of per-row entries to dropping
> that list and queueing one instance of a statement trigger instead.

That's good if you're deleting most or all of the parent table, but
what if you're deleting 100,000 values from a 10,000,000 row table?
In that case maybe I'm better off inserting all of the deleted keys
into a side table and doing a merge or hash join between the side
table and the child table...
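As a rough sketch of that idea (table and column names here are hypothetical, and the writable CTE requires PostgreSQL 9.1 or later): collect the deleted keys in a temporary side table, then validate the foreign key with one set-based join against the child table instead of 100,000 per-row trigger probes.

```sql
BEGIN;

-- Side table holding just the keys removed by this statement.
CREATE TEMP TABLE deleted_keys (id integer PRIMARY KEY) ON COMMIT DROP;

-- Delete from the parent and capture the affected keys in one pass.
WITH gone AS (
    DELETE FROM parent
    WHERE id IN (SELECT id FROM keys_to_delete)
    RETURNING id
)
INSERT INTO deleted_keys SELECT id FROM gone;

-- A single merge/hash join replaces the per-row lookups: any match
-- means a child row still references a deleted parent, i.e. an FK
-- violation (for NO ACTION/RESTRICT semantics).
SELECT c.parent_id
FROM child c
JOIN deleted_keys d ON c.parent_id = d.id
LIMIT 1;

COMMIT;
```

With 100,000 keys against a 10,000,000-row child table, the planner can pick a hash or merge join over the child table's FK index, which is exactly the plan a from-scratch revalidation would be too expensive to afford and per-row triggers can never produce.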

...Robert

In the pgsql-performance list, by send date:

Previous
From: Andrew Lazarus
Date:
Subject: Re: Deleting millions of rows
Next
From: Rohan Pethkar
Date:
Subject: Getting error while running DBT2 test for PostgreSQL