performance problem

From: Rick Gigger
Subject: performance problem
Date:
Msg-id: 01b201c3ae14$92bd3c00$0700a8c0@trogdor
In reply to: Point-in-time data recovery - v.7.4  (Rafael Martinez Guerrero <r.m.guerrero@usit.uio.no>)
Responses: Re: performance problem
Re: performance problem
Re: performance problem
List: pgsql-general
I am currently trying to import a text data file with about 45,000
records.  At the end of the import it does an update on each of the 45,000
records.  Doing all of the inserts completes in a fairly short amount of
time (about 2 1/2 minutes).  Once it gets to the updates, though, it slows
to a crawl.  After about 10 minutes it's only done about 3,000 records.
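For illustration, here is a minimal sketch of the kind of per-row update pass
described above; the table and column names (import_batch, external_id,
status) are made up for the example and are not from the original post.  One
common reason such a pass crawls is a missing index on the column used in each
UPDATE's WHERE clause, which forces a sequential scan per statement:

    -- Import phase: one INSERT per record (fast in the scenario above).
    INSERT INTO import_batch (external_id, status) VALUES ('A0001', 'new');

    -- Update phase: one UPDATE per record.  Without an index on external_id,
    -- every statement scans the whole table, so 45,000 of them get very slow.
    UPDATE import_batch SET status = 'processed' WHERE external_id = 'A0001';

    -- An index on the lookup column lets each UPDATE find its row directly.
    CREATE INDEX import_batch_external_id_idx ON import_batch (external_id);

A single set-based UPDATE that joins against the staging data, instead of
45,000 individual statements, is another way this pass is often written.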

Is that normal?  Is it because it's inside such a large transaction?  Is
there anything I can do to speed it up?  It seems awfully slow to me.

I didn't think that giving it more shared buffers would help but I tried
anyway.  It didn't help.

I tried doing a full vacuum with analyze on it (vacuumdb -z -f) and it
cleaned up a lot of stuff, but it didn't speed up the updates at all.
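For reference, vacuumdb -z -f is the command-line form of a full vacuum plus
analyze; run against a single table (the name here is just illustrative), the
equivalent SQL would be:

    -- Rewrites the table to reclaim dead space (FULL) and refreshes planner
    -- statistics (ANALYZE); it does not create any indexes.
    VACUUM FULL ANALYZE import_batch;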

I am using a dual 800 MHz Xeon box with 2 GB of RAM.  I've tried anywhere
from about 16,000 to 65,000 shared buffers.
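For context, shared_buffers in this era is a count of buffers of the default
8 kB block size, so the settings tried above work out to roughly 125 MB to
500 MB of shared memory.  The current value can be checked with:

    -- Reports the configured number of shared buffers.
    SHOW shared_buffers;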

What other factors are involved here?

