Re: Bulkloading using COPY - ignore duplicates?
From | Vadim Mikheev
---|---
Subject | Re: Bulkloading using COPY - ignore duplicates?
Date |
Msg-id | 000001c194f4$37c84f50$ed2db841@home
In reply to | Re: Bulkloading using COPY - ignore duplicates? (Daniel Kalchev <daniel@digsys.bg>)
Responses | Re: Bulkloading using COPY - ignore duplicates? / Re: Bulkloading using COPY - ignore duplicates?
List | pgsql-hackers
> Now, how about the same functionality for
>
> > INSERT into table1 SELECT * from table2 ... WITH ERRORS;
>
> Should allow the insert to complete, even if table1 has unique indexes and we
> try to insert duplicate rows. Might save LOTS of time in bulkloading scripts
> not having to do single INSERTs.

1. I prefer Oracle's (and others', I believe) way - put the statement(s) in a
PL block and define which action should be taken for which exception (error)
(ie IGNORE for a NON_UNIQ_KEY error, etc).

2. For an INSERT ... SELECT statement one can put DISTINCT in the select's
target list.

> Guess all this will be available in 7.3?

We'll see.

Vadim
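[Editor's note: a minimal sketch of the "ignore duplicates while bulkloading" idea discussed above, using Python's stdlib sqlite3 purely for illustration (not PostgreSQL's COPY); the table and data are hypothetical. Each row is inserted individually and unique-key violations are caught and skipped, which is the per-row fallback the thread is trying to avoid paying for on every load:]

```python
import sqlite3

# Illustrative stand-in for a bulkload target with a unique index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

rows = [(1,), (2,), (2,), (3,)]  # (2,) appears twice: a duplicate key
inserted = 0
for row in rows:
    try:
        conn.execute("INSERT INTO t (id) VALUES (?)", row)
        inserted += 1
    except sqlite3.IntegrityError:
        pass  # duplicate key: skip this row and keep loading
conn.commit()

print(inserted)  # 3 of 4 rows loaded; the duplicate was ignored
```

The second suggestion in the mail (DISTINCT in the SELECT target list) dedupes the incoming rows before insertion, but only against each other, not against rows already present in the target table; the exception-per-row approach handles both cases at the cost of row-at-a-time inserts.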