Re: BUG #4204: COPY to table with FK has memory leak

From: Simon Riggs
Subject: Re: BUG #4204: COPY to table with FK has memory leak
Date:
Msg-id: 1212039704.4489.724.camel@ebony.site
In reply to: Re: BUG #4204: COPY to table with FK has memory leak (Gregory Stark <stark@enterprisedb.com>)
List: pgsql-hackers
On Wed, 2008-05-28 at 18:17 -0400, Gregory Stark wrote:
> "Simon Riggs" <simon@2ndquadrant.com> writes:
> 
> > AFAICS we must aggregate the trigger checks. We would need a special
> > property of triggers that allowed them to be aggregated when two similar
> > checks arrived. We can then use hash aggregation to accumulate them. We
> > might conceivably need to spill to disk also, since the aggregation may
> > not always be effective. But in most cases the tables against which FK
> > checks are made are significantly smaller than the tables being loaded.
> > Once we have hash aggregated them, that is then the first part of a hash
> > join to the target table.
> 
> Well we can't aggregate them as they're created because later modifications
> could delete or update the original records. The SQL spec requires that FK
> checks be effective at the end of the command. 

Well, that's what we need to do. We just need to find a way...

Currently, we store trigger entries by htid. I guess we need to
aggregate them on the actual values looked up.

The SQL spec also says that the contents of the FK check table should be
taken as at the start of the command, so we should be safe to aggregate
the values prior to the check.
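As a conceptual sketch only (hypothetical names, not actual PostgreSQL trigger-queue code), hash-aggregating the queued FK checks on the distinct referenced key values, rather than queuing one event per inserted tuple, might look like this:

```python
# Hypothetical sketch: deduplicate queued FK checks by key value
# instead of queuing one trigger event per inserted tuple.
# (Conceptual illustration only -- not PostgreSQL internals.)

def queue_fk_checks_per_row(inserted_rows, fk_column):
    # Naive approach: one deferred check entry per inserted row.
    # This per-row queue is where the memory blow-up in a bulk
    # COPY into an FK-constrained table comes from.
    return [row[fk_column] for row in inserted_rows]

def aggregate_fk_checks(inserted_rows, fk_column):
    # Hash aggregation: keep only the distinct FK key values.
    # Since the check table is taken as of the start of the
    # command, each referenced key needs checking only once at
    # end of statement, so duplicates are redundant.
    return set(row[fk_column] for row in inserted_rows)

rows = [{"customer_id": i % 100} for i in range(100_000)]
per_row = queue_fk_checks_per_row(rows, "customer_id")
aggregated = aggregate_fk_checks(rows, "customer_id")
# per_row holds 100,000 entries; aggregated holds only 100,
# which can then drive one hash join against the PK table.
```

The aggregated set is then the build side of the hash join against the target table that the earlier message describes.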

As already suggested in work on Read Only Tables, we could optimise them
away to being constraint checks.

-- 
Simon Riggs           www.2ndQuadrant.com
PostgreSQL Training, Services and Support


