Re: Reducing the memory footprint of large sets of pending triggers
| From | Tom Lane |
|---|---|
| Subject | Re: Reducing the memory footprint of large sets of pending triggers |
| Date | |
| Msg-id | 27581.1224938919@sss.pgh.pa.us |
| In response to | Re: Reducing the memory footprint of large sets of pending triggers (Simon Riggs <simon@2ndQuadrant.com>) |
| Responses | Re: Reducing the memory footprint of large sets of pending triggers; Re: Reducing the memory footprint of large sets of pending triggers |
| List | pgsql-hackers |
Simon Riggs <simon@2ndQuadrant.com> writes:
> A much better objective would be to remove duplicate trigger calls, so
> there isn't any build up of trigger data in the first place. That would
> apply only to immutable functions. RI checks certainly fall into that
> category.
They're hardly "duplicates": each event is for a different tuple.
For RI checks, once you get past a certain percentage of the table it'd
be better to throw away all the per-tuple events and do a full-table
verification a la RI_Initial_Check(). I've got no idea about a sane
way to make that happen, though.
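[Editor's note: a minimal sketch of the kind of whole-table validation query RI_Initial_Check() builds, shown here for a hypothetical single-column constraint fktable(fkcol) REFERENCES pktable(pkcol); the table and column names are illustrative, not from the thread. Any row returned is an RI violation.]

```sql
-- Hypothetical single-column FK: fktable(fkcol) REFERENCES pktable(pkcol).
-- A returned row is an fk value with no matching pk row, i.e. a violation.
SELECT fk.fkcol
  FROM ONLY fktable fk
  LEFT OUTER JOIN ONLY pktable pk ON pk.pkcol = fk.fkcol
 WHERE pk.pkcol IS NULL
   AND fk.fkcol IS NOT NULL;
```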
regards, tom lane