Re: Bulkloading using COPY - ignore duplicates?
From: Peter Eisentraut
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date:
Msg-id: Pine.LNX.4.30.0112171817590.642-100000@peter.localdomain
In reply to: Re: Bulkloading using COPY - ignore duplicates? (Lee Kindness <lkindness@csl.co.uk>)
Replies: Re: Bulkloading using COPY - ignore duplicates?
List: pgsql-hackers
Lee Kindness writes:

> Consider SELECT DISTINCT - which is the 'duplicate' and which one is
> the good one?

It's not the same thing. SELECT DISTINCT only eliminates rows that are completely the same, not rows that are merely equal in their unique constraints. Maybe you're thinking of SELECT DISTINCT ON (). Observe the big warning that the results of that statement are random unless ORDER BY is used.

But that's not the same thing either. We've never claimed that the COPY input has an ordering assumption. In fact you're asking for a bit more than an ordering assumption: you're saying that the earlier data is better than the later data. I think in a random use case that is more likely *not* to be the case, because the data at the end is newer.

Btw., here's another concern about this proposed feature: If I do a client-side COPY, how will you send the "ignored" rows back to the client?

--
Peter Eisentraut   peter_e@gmx.net
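[A small illustration of the distinction above, sketched in Python with sqlite3 rather than PostgreSQL; the table and column names are made up for the example. DISTINCT drops only rows where *every* column matches, so two rows that collide on a would-be unique key but differ elsewhere both survive.]

```python
import sqlite3

# In-memory database; "t", "id", and "payload" are hypothetical names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?)",
    [(1, "a"), (1, "a"), (1, "b"), (2, "c")],
)

# DISTINCT over all columns: the exact duplicate (1, 'a') collapses to one
# row, but (1, 'a') and (1, 'b') both remain -- they are not "completely
# the same", even though they would collide on a UNIQUE(id) constraint.
distinct_rows = conn.execute("SELECT DISTINCT id, payload FROM t").fetchall()
print(sorted(distinct_rows))  # [(1, 'a'), (1, 'b'), (2, 'c')]

# Deduplicating on the key alone requires naming only that column, which is
# a different query -- and says nothing about which payload to keep.
distinct_ids = conn.execute("SELECT DISTINCT id FROM t").fetchall()
print(sorted(distinct_ids))  # [(1,), (2,)]
```

This is exactly the gap DISTINCT ON () fills in PostgreSQL: it keeps one row per key, but which row you get is unspecified unless ORDER BY pins it down.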