Re: Very poor performance loading 100M of sql data using copy

From: Shane Ambler
Subject: Re: Very poor performance loading 100M of sql data using copy
Date:
Msg-id: 48162A67.3090500@Sheeky.Biz
In reply to: Re: Very poor performance loading 100M of sql data using copy  (John Rouillard <rouilj@renesys.com>)
Responses: Re: Very poor performance loading 100M of sql data using copy  (John Rouillard <rouilj@renesys.com>)
List: pgsql-performance
John Rouillard wrote:

> We can't do this as we are backfilling a couple of months of data
> into tables with existing data.

Is this a one-off load of historic data or an ongoing thing?


>>> The only indexes we have to drop are the ones on the primary keys
>>>  (there is one non-primary key index in the database as well).

If imports of this size are ongoing, then one thing I would try is
partitioning (which could be worthwhile anyway given the amount of data
you appear to have).
Create an inherited table for the month being imported, load the data
into it, then add the check constraint and indexes, and modify the
rules/triggers so that inserts on the parent table are routed to it.
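A rough sketch of those steps, using PostgreSQL's inheritance-based partitioning (the table and column names here are made up for illustration; substitute your own schema):

```sql
-- Assume a parent table like:
--   CREATE TABLE measurements (logged_at timestamp NOT NULL, value integer);

-- 1. Create an empty child for the month being backfilled -- no indexes yet,
--    so the bulk load pays no index-maintenance cost.
CREATE TABLE measurements_2008_03 () INHERITS (measurements);

-- 2. COPY directly into the child, bypassing the parent's rules/triggers.
COPY measurements_2008_03 FROM '/path/to/march_data.csv' WITH CSV;

-- 3. Add the check constraint (so constraint_exclusion can skip this child
--    for queries outside the range), then build the indexes in one pass.
ALTER TABLE measurements_2008_03
  ADD CONSTRAINT measurements_2008_03_logged_at_check
  CHECK (logged_at >= '2008-03-01' AND logged_at < '2008-04-01');
CREATE INDEX ON measurements_2008_03 (logged_at);

-- 4. Extend the insert-routing rule or trigger on the parent table to
--    cover the new month, so future inserts land in the right child.
```

Queries against the parent table see the child's rows automatically; with constraint_exclusion enabled, the planner skips children whose check constraints rule them out.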



--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz
