Re: Bulkloading using COPY - ignore duplicates?

From: Daniel Kalchev
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date:
Msg-id: 200201040736.JAA29349@dcave.digsys.bg
In reply to: Re: Bulkloading using COPY - ignore duplicates?  (Bruce Momjian <pgman@candle.pha.pa.us>)
List: pgsql-hackers
>>> Bruce Momjian said:
> Mikheev, Vadim wrote:
> > > > Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > > > > Seems nested transactions are not required if we load
> > > > > each COPY line in its own transaction, like we do with
> > > > > INSERT from pg_dump.
> > > >
> > > > I don't think that's an acceptable answer.  Consider
> > >
> > > Oh, very good point.  "Requires nested transactions" added to TODO.
> >
> > Also add performance issue with per-line-commit...
> > Also-II - there is more common name for required feature - savepoints.
>
> OK, updated TODO to prefer savepoints term.
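The per-line-commit fallback mentioned above can be sketched in a few lines. This is only an illustration, not the PostgreSQL implementation: it uses Python's stdlib sqlite3 (with a hypothetical table `t`) to show the pattern of committing each row in its own transaction and skipping unique-key violations, which is exactly the approach whose performance cost Vadim is flagging.

```python
# Hedged sketch of the per-row fallback: load each row in its own
# transaction, skip duplicate keys. sqlite3 stands in for PostgreSQL;
# table name "t" and the row shape are assumptions for illustration.
import sqlite3

def bulkload_ignore_duplicates(conn, rows):
    """Insert rows one at a time, skipping duplicate-key rows."""
    loaded = skipped = 0
    cur = conn.cursor()
    for row in rows:
        try:
            cur.execute("INSERT INTO t (id, val) VALUES (?, ?)", row)
            conn.commit()          # per-row commit: safe but slow
            loaded += 1
        except sqlite3.IntegrityError:
            conn.rollback()        # duplicate key: skip and continue
            skipped += 1
    return loaded, skipped

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
print(bulkload_ignore_duplicates(conn, [(1, "a"), (2, "b"), (1, "dup")]))
# → (2, 1): two rows loaded, one duplicate skipped
```

The one-commit-per-row pattern is what makes this slow for large loads; savepoints (sub-transactions inside one big transaction) would give the same skip-on-error behavior without paying a full commit per line.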
 

Now, how about the same functionality for

INSERT into table1 SELECT * from table2 ... WITH ERRORS;

This should allow the insert to complete even if table1 has unique indexes and
we try to insert duplicate rows. It could save a LOT of time in bulkloading
scripts by avoiding single-row INSERTs.
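The proposed WITH ERRORS clause never landed in that form, but its intended semantics can be illustrated with SQLite's INSERT OR IGNORE (PostgreSQL itself later gained INSERT ... ON CONFLICT DO NOTHING in 9.5): one set-based statement that completes even when some source rows collide with a unique index. The table names match the example above; everything else here is illustrative.

```python
# Minimal sketch of the desired "complete despite duplicates" semantics,
# using SQLite's INSERT OR IGNORE as a stand-in for the proposed
# INSERT ... SELECT ... WITH ERRORS.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER PRIMARY KEY);
    CREATE TABLE table2 (id INTEGER);
    INSERT INTO table1 VALUES (1);
    INSERT INTO table2 VALUES (1), (2), (3);
""")
# One set-based statement; the duplicate id=1 is silently skipped
# instead of aborting the whole insert.
conn.execute("INSERT OR IGNORE INTO table1 SELECT id FROM table2")
print(sorted(r[0] for r in conn.execute("SELECT id FROM table1")))
# → [1, 2, 3]
```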

Guess all this will be available in 7.3?

Daniel



In pgsql-hackers by date:

Previous
From: Bruce Momjian
Message: Re: RC1 time?
Next
From: Oleg Bartunov
Message: Re: RC1 time?