Re: [GENERAL] Insert large number of records

From: Alban Hertroys
Subject: Re: [GENERAL] Insert large number of records
Date:
Msg-id: CAF-3MvNa9UJ4ZcctzNo4pTrXebEGKyAYzNiJdqtKLHGRb+zrJA@mail.gmail.com
In response to: R: [GENERAL] Insert large number of records  (Job <Job@colliniconsulting.it>)
Responses: R: [GENERAL] Insert large number of records  (Job <Job@colliniconsulting.it>)
List: pgsql-general
On 20 September 2017 at 07:42, Job <Job@colliniconsulting.it> wrote:
> We use a "temporary" table, populated by pg_bulkload - it takes a few minutes in this first step.
> Then, from the temporary table, the data is transferred by a trigger that copies the records into the production table.
> But *this step* takes a really long time (sometimes several hours).
> There are about 10 million records.

Perhaps the problem isn't entirely on the writing end of the process.

How often does this trigger fire? Once per row inserted into the
"temporary" table, once per statement, or only after the bulkload has
finished?
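
For the record, the difference looks like this (table and function
names here are made up):

    -- fires once for every inserted row, i.e. ~10 million times:
    CREATE TRIGGER move_per_row
        AFTER INSERT ON staging
        FOR EACH ROW EXECUTE PROCEDURE move_rows();

    -- fires once per INSERT statement, regardless of row count:
    CREATE TRIGGER move_per_statement
        AFTER INSERT ON staging
        FOR EACH STATEMENT EXECUTE PROCEDURE move_rows();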

Do you have appropriate indices on the temporary table to guarantee
quick lookup of the records that need to be copied to the target
table(s)?
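
For example, if you track which rows still need copying with (say) a
boolean column, a partial index keeps that lookup cheap - again, the
table and column names are hypothetical:

    -- only indexes the rows that still need to be moved:
    CREATE INDEX staging_pending_idx ON staging (id) WHERE NOT processed;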

> We cannot use pg_bulkload to load data directly into the production table, since pg_bulkload would lock the whole table,
> and the "COPY" command is slow and would not care about table partitioning (the COPY command fires partitioned-table triggers).

As David already said, inserting directly into the appropriate
partition is certainly going to be faster. It removes a check on your
partitioning conditions from the query execution plan; if you have
many partitions, that adds up, because the database needs to check
that condition among all your partitions for every row.
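
To illustrate with made-up names (a "measurements" parent table
partitioned by month on "logdate"):

    -- routed through the parent: the partitioning condition is
    -- evaluated for every row:
    INSERT INTO measurements SELECT * FROM staging;

    -- straight into the child partition: no routing involved:
    INSERT INTO measurements_2017_09
    SELECT * FROM staging
    WHERE logdate >= '2017-09-01' AND logdate < '2017-10-01';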

Come to think of it, I was assuming that the DB would stop checking
other partitions once it found a suitable candidate, but now I'm not
so sure it would. There may be good reasons not to stop, for example
when a partition is divided further into sub-partitions. Anybody?


Since you're already using a trigger, it would probably be more
efficient to query your "temporary" table for batches belonging to the
same partition and insert those into the partition directly, one
partition at a time.
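
A sketch of that approach, sticking with the hypothetical monthly
"measurements" partitions from above:

    DO $$
    DECLARE
        part record;
    BEGIN
        -- one direct INSERT per distinct month in the staging data
        FOR part IN
            SELECT DISTINCT date_trunc('month', logdate) AS m FROM staging
        LOOP
            EXECUTE format(
                'INSERT INTO measurements_%s
                    SELECT * FROM staging
                     WHERE date_trunc(''month'', logdate) = %L',
                to_char(part.m, 'YYYY_MM'), part.m);
        END LOOP;
    END $$;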

Even better would be if your bulkload could already be organised such
that all the data in the "temporary" table can indiscriminately be
inserted into the same target partition. That does depend a bit on
your setup though - at some point the time saved at one end gets
consumed at the other, or it even takes longer there.
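
If you can split the input into one file per partition before loading,
each file can then go into its target partition without any trigger at
all (the file path and table name are made up again):

    COPY measurements_2017_09 FROM '/path/to/batch_2017_09.csv'
        WITH (FORMAT csv);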

Well, I think I've thrown enough ideas around for now ;)

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


