Re: How to insert a bulk of data with unique-violations very fast

From: Scott Marlowe
Subject: Re: How to insert a bulk of data with unique-violations very fast
Date:
Msg-id: AANLkTimwLi6PkFYltlyXecsDQaI6yD3y22WLmWdgwe_R@mail.gmail.com
In reply to: How to insert a bulk of data with unique-violations very fast (Torsten Zühlsdorff <foo@meisterderspiele.de>)
List: pgsql-performance
On Tue, Jun 1, 2010 at 9:03 AM, Torsten Zühlsdorff
<foo@meisterderspiele.de> wrote:
> Hello,
>
> i have a set of unique data of about 150,000,000 rows. Regularly i get a
> list of data which contains many times more rows than the already stored
> set, often around 2,000,000,000 rows. Within these rows are many duplicates,
> and often the set of already stored data.
> I want to store only the entries which are not within the already stored
> set. Also i do not want to store duplicates. Example:

The standard method in pgsql is to load the data into a temp table,
then insert only the rows that do not already exist in the old table.
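A minimal sketch of that approach, assuming a target table `main_table` with a unique column `data` and a flat input file (all table, column, and file names here are illustrative, not from the thread):

```sql
-- Stage the incoming bulk data in a temporary table (no constraints,
-- so the load itself cannot fail on duplicates).
CREATE TEMP TABLE staging (data text);
COPY staging FROM '/path/to/input.csv';

-- Insert only rows not already present; DISTINCT also removes
-- duplicates within the staged data itself.
INSERT INTO main_table (data)
SELECT DISTINCT s.data
FROM staging s
WHERE NOT EXISTS (
    SELECT 1 FROM main_table m WHERE m.data = s.data
);
```

As a side note, PostgreSQL 9.5 and later offer `INSERT ... ON CONFLICT DO NOTHING` as an alternative, but at the time of this thread the temp-table plus `NOT EXISTS` pattern was the usual answer.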
