Re: INSERT performance deteriorates quickly during a large import

From: Krasimir Hristozov (InterMedia Ltd)
Subject: Re: INSERT performance deteriorates quickly during a large import
Date:
Msg-id: 007801c822ec$e6f4faa0$0400000a@imediadev.com
In reply to: INSERT performance deteriorates quickly during a large import ("Krasimir Hristozov (InterMedia Ltd)" <krasi@imedia-dev.com>)
List: pgsql-general
Thanks to all who responded. Using COPY instead of INSERT really solved the problem - the whole process took about 1h 20min on an indexed table, with constraints (which is close to our initial expectations). We're performing some additional tests now. I'll post some more observations when finished.
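For reference, the fix amounted to replacing per-row INSERTs with a single COPY. A minimal sketch (table name, columns, and file path are placeholders, not from the original thread):

```sql
-- Server-side COPY reads a file located on the database server;
-- from psql on the client, the equivalent is the \copy meta-command.
COPY orders (id, customer_id, amount)
FROM '/tmp/orders.csv'
WITH CSV HEADER;
```

COPY parses the whole file in one statement, so the per-statement planning and network round-trip overhead of millions of individual INSERTs disappears.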
----- Original Message -----
Sent: Friday, November 09, 2007 1:52 PM
Subject: Re: INSERT performance deteriorates quickly during a large import

Hello Krasimir,

You have received a lot of good advice above, and I would like to add one more point:

d) Make sure your PHP code is not recursive. Since you said memory usage is stable, your method is probably iterative already.
A recursive method would add a little time to each insert and consume more memory.
But even an iterative method must be written so it runs exactly once per row - your code may be executing more often than necessary.
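To illustrate point d, here is a hypothetical sketch (not from the thread; `insert_recursive`, `insert_iterative`, and the stub `execute` callback are invented for illustration) of the two shapes of import loop:

```python
def insert_recursive(rows, execute, i=0):
    # Recursive version: one stack frame per row. For a large import this
    # costs extra memory per row and eventually hits the recursion limit.
    if i == len(rows):
        return
    execute("INSERT INTO t (v) VALUES (%s)", (rows[i],))
    insert_recursive(rows, execute, i + 1)

def insert_iterative(rows, execute):
    # Iterative version: constant stack usage, one execute() call per row.
    for row in rows:
        execute("INSERT INTO t (v) VALUES (%s)", (row,))

# Stub standing in for a real database cursor, just to count calls.
calls = []
insert_iterative(list(range(1000)), lambda sql, params: calls.append(params))
print(len(calls))  # 1000 -> each row inserted exactly once
```

If the counter came out higher than the number of rows, the loop would be re-running work, which matches Márcio's "running much more than needed" warning.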

Pay attention to Tomas's advice, and after that (I agree with Cris) "there should be no reason for loading data to get more costly as
the size of the table increases" - please check your code.

I ran some experiments a long time ago with 40,000 rows containing a lot of BLOBs, using PHP code to SELECT from one Postgres database and INSERT into another. The time per row wasn't constant, but it wasn't as bad as in your case (and I hadn't even applied Tomas's suggestions a, b, and c).

Good Luck
--
Márcio Geovani Jasinski

In pgsql-general by date:

Previous
From: "Albe Laurenz"
Date:
Message: Re: "Resurrected" data files - problem?
Next
From: Ted Byers
Date:
Message: Re: Optimal time series sampling.