Re: Reducing the memory footprint of large sets of pending triggers

From: Gregory Stark
Subject: Re: Reducing the memory footprint of large sets of pending triggers
Date:
Msg-id: 878wscrde4.fsf@oxford.xeocode.com
In response to: Re: Reducing the memory footprint of large sets of pending triggers  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Tom Lane <tgl@sss.pgh.pa.us> writes:

> Simon Riggs <simon@2ndQuadrant.com> writes:
>> A much better objective would be to remove duplicate trigger calls, so
>> there isn't any build up of trigger data in the first place. That would
>> apply only to immutable functions. RI checks certainly fall into that
>> category.
>
> They're hardly "duplicates": each event is for a different tuple.
>
> For RI checks, once you get past a certain percentage of the table it'd
> be better to throw away all the per-tuple events and do a full-table
> verification a la RI_Initial_Check().  I've got no idea about a sane
> way to make that happen, though.

One idea I had was to accumulate the data in something like a tuplestore and
then perform the RI check as a join between a materialize node and the target
table. Then we could use any join type, whether a hash join, nested loop, or merge
join, depending on how many rows there are on each side and how many of the
values are distinct.
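A minimal sketch of that idea, using Python's sqlite3 in place of PostgreSQL internals (the table names, the `pending` table standing in for a tuplestore, and the helper functions are all hypothetical, not anything from the PostgreSQL source): pending FK keys are accumulated in a store, then verified in one set-oriented join instead of one lookup per modified row.

```python
# Sketch (not PostgreSQL code) of batching RI checks: accumulate pending
# foreign-key values, then verify them all with a single join against the
# referenced table, letting the planner pick the join strategy.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pk (id INTEGER PRIMARY KEY);   -- referenced table
    CREATE TABLE pending (fk INTEGER);          -- stand-in for a tuplestore
    INSERT INTO pk VALUES (1), (2), (3);
""")

def queue_pending(conn, fk_values):
    """Accumulate pending FK values instead of checking each row at once."""
    conn.executemany("INSERT INTO pending VALUES (?)",
                     [(v,) for v in fk_values])

def verify_batch(conn):
    """One anti-join finds every pending key with no match in the PK table."""
    return [row[0] for row in conn.execute("""
        SELECT DISTINCT p.fk
          FROM pending p
          LEFT JOIN pk ON pk.id = p.fk
         WHERE pk.id IS NULL
    """)]

queue_pending(conn, [1, 2, 2, 99])
print(verify_batch(conn))   # [99] -> this key would raise an FK violation
```

In a real implementation the per-tuple events would feed a materialize node rather than a table, but the shape of the check is the same: one join over the whole pending set.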

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!


In the pgsql-hackers list, by send date:

Previous
From: Simon Riggs
Date:
Subject: Re: Reducing the memory footprint of large sets of pending triggers
Next
From: Michael Meskes
Date:
Subject: Email/lists setup