Re: performance problem

From: Mike Mascari
Subject: Re: performance problem
Date:
Msg-id: 3FBA8911.5060805@mascari.com
In reply to: performance problem  ("Rick Gigger" <rick@alpinenetworking.com>)
List: pgsql-general
Rick Gigger wrote:

> I am currently trying to import a text data file with about 45,000
> records.  At the end of the import it does an update on each of the 45,000
> records.  Doing all of the inserts completes in a fairly short amount of
> time (about 2 1/2 minutes).  Once it gets to the updates, though, it slows
> to a crawl.  After about 10 minutes it's only done about 3,000 records.
>
> Is that normal?  Is it because it's inside such a large transaction?  Is
> there anything I can do to speed that up?  It seems awfully slow to me.
>
> I didn't think that giving it more shared buffers would help but I tried
> anyway.  It didn't help.
>
> I tried doing a full vacuum with analyze on it (vacuumdb -z -f) and it
> cleaned up a lot of stuff, but it didn't speed up the updates at all.
>
> I am using a dual 800 MHz Xeon box with 2 GB of RAM.  I've tried anywhere
> from about 16,000 to 65,000 shared buffers.
>
> What other factors are involved here?

It is difficult to say without knowing either the definition of the
relation(s) or the update queries involved. Are there indexes being
created after the import that would allow PostgreSQL to locate the
rows being updated quickly, or is the update an unqualified update (no
WHERE clause) that affects all tuples?
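
For illustration, a minimal sketch, assuming a hypothetical table and
key column (the names imported_data, status, and import_key are invented
here): without an index, each per-row UPDATE has to scan the whole table,
while an index built after the bulk load lets each statement find its
row directly.

  -- Hypothetical names; substitute the actual table and columns.
  -- Without an index on import_key, each statement like this scans
  -- all 45,000 rows:
  UPDATE imported_data SET status = 'processed' WHERE import_key = 12345;

  -- Building the index once, after the bulk INSERTs, turns each
  -- subsequent lookup into an index scan instead:
  CREATE INDEX imported_data_key_idx ON imported_data (import_key);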

EXPLAIN ANALYZE is your friend...
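
As a quick check (reusing the hypothetical names above), run one of the
updates under EXPLAIN ANALYZE and see whether the plan shows a Seq Scan
or an Index Scan node. Note that EXPLAIN ANALYZE actually executes the
statement, so wrap it in a transaction and roll back:

  BEGIN;
  EXPLAIN ANALYZE
    UPDATE imported_data SET status = 'processed' WHERE import_key = 12345;
  ROLLBACK;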

Mike Mascari
mascarm@mascari.com


