Re: extreme memory use when loading in a lot of data

From: Tom Lane
Subject: Re: extreme memory use when loading in a lot of data
Date:
Msg-id: 16582.1085177715@sss.pgh.pa.us
In reply to: extreme memory use when loading in a lot of data  (Vivek Khera <khera@kcilink.com>)
List: pgsql-general
Vivek Khera <khera@kcilink.com> writes:
> Since this is historical data, I'm actually merging a couple of dumps
> that span the time range.  I've dealt with eliminating any conflicting
> data (ie, clashing unique keys) but I'm not 100% sure that the foreign
> key constraints are all met.  Thus, when loading the data from the
> second dump, I am leaving the FK triggers on.

I think you'd be better off to drop the FK constraint, import, and
re-add the constraint.  The out-of-memory problem is probably due to the
list of deferred trigger firings (one per tuple, or more if you have
multiple FKs to check).  Even if you had enough memory, you'd not have
enough patience for all those retail FK checks to occur after the COPY
finishes.
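
For illustration, a minimal sketch of that sequence, with hypothetical
table, constraint, and file names (the real ones would differ):

    BEGIN;
    -- drop the FK so COPY doesn't queue one deferred trigger per row
    ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;
    COPY orders FROM '/path/to/second_dump.copy';
    -- re-add the constraint; the check then runs once over the whole table
    ALTER TABLE orders
        ADD CONSTRAINT orders_customer_id_fkey
        FOREIGN KEY (customer_id) REFERENCES customers (id);
    COMMIT;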

At least in 7.4, adding an FK constraint on an existing table should
produce a better plan than the retail checks involved in adding rows to
a table with an existing FK constraint.

            regards, tom lane
