Re: Slow duplicate deletes

From DrYSG
Subject Re: Slow duplicate deletes
Date
Msg-id 1330980227869-5538858.post@n5.nabble.com
In reply to Re: Slow duplicate deletes  (Merlin Moncure <mmoncure@gmail.com>)
Responses Re: Slow duplicate deletes  (Michael Wood <esiotrot@gmail.com>)
Re: Slow duplicate deletes  (Merlin Moncure <mmoncure@gmail.com>)
List pgsql-novice
One point I might not have made clear: the reason I want to remove duplicates
is that the column "data_object.unique_id" became non-unique (someone added
duplicate rows). So I added the bigserial column (idx) to uniquely identify the
rows, and I was using SELECT MIN(idx) with GROUP BY to pick just one of
the rows that became duplicated.
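
For context, the delete I have been running is along these lines (a sketch
only; unique_id stands in for whatever the duplicated column is actually
called in my schema):

DELETE FROM portal.metadata
WHERE idx NOT IN (
    SELECT MIN(idx)        -- keep the lowest bigserial value per duplicate group
    FROM portal.metadata
    GROUP BY unique_id     -- the column that was supposed to be unique
);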

I am going to try out some of your excellent suggestions. I will report back
on how they are working.

One idea that was given to me was the following (what do you think, Merlin?):

CREATE TABLE portal.new_metadata AS
SELECT DISTINCT ON (data_object.unique_id) * FROM portal.metadata;

Something of this ilk should be faster because it only needs to do a
sort on data_object.unique_id and then an insert. After you have
verified the results you can do:

BEGIN;
ALTER TABLE portal.metadata RENAME TO metadata_old;
ALTER TABLE portal.new_metadata RENAME TO metadata;
COMMIT;
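
If I go this route, I will sanity-check the new table before doing the swap,
and then put a unique constraint back so duplicates cannot creep in again.
Roughly (again a sketch; unique_id and the constraint name are placeholders
for my real names):

-- the new table should have exactly one row per distinct unique_id
SELECT
    (SELECT count(*) FROM portal.new_metadata)               AS new_rows,
    (SELECT count(DISTINCT unique_id) FROM portal.metadata)  AS distinct_ids;

-- after the rename swap, enforce uniqueness going forward
ALTER TABLE portal.metadata
    ADD CONSTRAINT metadata_unique_id_key UNIQUE (unique_id);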


--
View this message in context: http://postgresql.1045698.n5.nabble.com/Slow-duplicate-deletes-tp5537818p5538858.html
Sent from the PostgreSQL - novice mailing list archive at Nabble.com.
