Re: Bulkloading using COPY - ignore duplicates?

From: Tom Lane
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date:
Msg-id: 22968.1001943396@sss.pgh.pa.us
In response to: Bulkloading using COPY - ignore duplicates?  (Lee Kindness <lkindness@csl.co.uk>)
Responses: Re: Bulkloading using COPY - ignore duplicates?  (Lee Kindness <lkindness@csl.co.uk>)
           Re: Bulkloading using COPY - ignore duplicates?  (Peter Eisentraut <peter_e@gmx.net>)
List: pgsql-hackers
Lee Kindness <lkindness@csl.co.uk> writes:
> Would this seem a reasonable thing to do? Does anyone rely on COPY
> FROM causing an ERROR on duplicate input?

Yes.  This change will not be acceptable unless it's made an optional
(and not default, IMHO, though perhaps that's negotiable) feature of
COPY.
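
For concreteness, here is roughly what the behaviour under discussion looks
like today (the table and file names are invented for the sketch):

    CREATE TABLE measurements (id integer PRIMARY KEY, val float8);
    -- If the data file repeats an id, the unique index raises an error
    -- and the whole COPY is rolled back; no rows are loaded.
    COPY measurements FROM '/tmp/measurements.dat';

The proposal is to let the loader opt into silently skipping such rows
instead of aborting the load.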

The implementation might be rather messy, too.  I don't much care for the
notion of a routine as low-level as bt_check_unique knowing whether or not
the context is COPY.  We might have to do some restructuring.

> Would:
>  WITH ON_DUPLICATE = CONTINUE|TERMINATE (or similar)
> need to be added to the COPY command (I hope not)?

It occurs to me that skip-the-insert might be a useful option for
INSERTs that detect a unique-key conflict, not only for COPY.  (Cf.
the regular discussions we see on whether to do INSERT first or
UPDATE first when the key might already exist.)  Maybe a SET variable
that applies to all forms of insertion would be appropriate.
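
To sketch the idea (the variable name below is purely hypothetical,
invented here only for illustration):

    -- Hypothetical session setting: turn unique-key conflicts on insertion
    -- into silent skips rather than errors.
    SET duplicate_key_action = 'ignore';
    -- If a row with id = 1 already exists, this would simply do nothing
    -- instead of raising a unique-constraint error.
    INSERT INTO measurements VALUES (1, 2.5);
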
        regards, tom lane

