Re: Performance Problem

From: Uwe C. Schroeder
Subject: Re: Performance Problem
Date:
Msg-id 200610130155.06831.uwe@oss4u.com
In reply to: Re: Performance Problem  (Martijn van Oosterhout <kleptog@svana.org>)
Responses: Re: Performance Problem  ("A. Kretschmer" <andreas.kretschmer@schollglas.com>)
List: pgsql-general
On Friday 13 October 2006 01:22, Martijn van Oosterhout wrote:
> >   1) I have a performance problem as I am trying to insert around 60
> >   million rows to a table which is partitioned. So first I copied the
> >   .csv file which contains data, with COPY command to a temp table
> >   which was quick. It took only 15 to 20 minutes. Now I am inserting
> >   data from temp table to original table using insert into org_table
> >   (select * from temp_table); which is taking more than an hour & is
> >   still inserting. Is there an easy way to do this?
>
> Does the table you're inserting into have indexes or foreign keys?
> Either of those slow down loading considerably. One common workaround
> is to drop the indexes and constraints, load the data, and re-add them.
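The drop-load-recreate workaround Martijn describes could look roughly like the following. This is only a sketch: the index name "org_table_idx", the constraint "org_table_fk", the column names, and the file path are all made up for illustration and need to match your actual schema.

```sql
-- Hypothetical names throughout; substitute your own.
DROP INDEX org_table_idx;
ALTER TABLE org_table DROP CONSTRAINT org_table_fk;

-- Bulk-load with the indexes and constraints out of the way.
COPY org_table FROM '/path/to/data.csv' WITH CSV;

-- Rebuild afterwards; a single index build over the full table is
-- much cheaper than maintaining the index row by row during the load.
CREATE INDEX org_table_idx ON org_table (some_column);
ALTER TABLE org_table ADD CONSTRAINT org_table_fk
    FOREIGN KEY (some_column) REFERENCES other_table (id);
```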

Why do you COPY the data into a temporary table just to do an "insert into
org_table (select * from temp_table);"? Since you're copying ALL records
anyway, why not COPY the data into "org_table" directly?
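Skipping the intermediate table means the data is written once instead of twice. A minimal sketch, assuming the CSV columns already line up with "org_table" (file path is hypothetical):

```sql
-- Load straight into the target table, no temp table needed.
COPY org_table FROM '/path/to/data.csv' WITH CSV;
```

If the file lives on the client machine rather than the server, psql's \copy variant does the same thing over the connection.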

Also check the "autocommit" setting. If autocommit is on, every insert is a
transaction on its own, which leads to a lot of overhead. Turning autocommit
off and running the inserts in batches of, say, 1000 inserts per transaction
will increase speed considerably.
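The batching idea could be sketched like this in psql; the "id" range predicate is just an illustrative way to carve the temp table into chunks and assumes such a column exists:

```sql
-- In psql, disable per-statement commits:
\set AUTOCOMMIT off

-- Move the rows over in explicit transactions, one batch per commit.
BEGIN;
INSERT INTO org_table SELECT * FROM temp_table WHERE id BETWEEN 1 AND 1000;
COMMIT;

BEGIN;
INSERT INTO org_table SELECT * FROM temp_table WHERE id BETWEEN 1001 AND 2000;
COMMIT;
-- ...and so on.
```

Note that a single "INSERT ... SELECT" is already one transaction, so batching mainly helps when the rows arrive as many individual INSERT statements.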


    UC

--
Open Source Solutions 4U, LLC    1618 Kelly St
Phone:  +1 707 568 3056        Santa Rosa, CA 95401
Cell:   +1 650 302 2405        United States
Fax:    +1 707 568 6416
