Re: UPDATE on two large datasets is very slow

From: Jonathan Vanasco
Subject: Re: UPDATE on two large datasets is very slow
Date:
Msg-id: 90925C46-81A6-459F-B7C4-0F34A4C4AF3F@2xlp.com
In reply to: Re: UPDATE on two large datasets is very slow  (Scott Marlowe <smarlowe@g2switchworks.com>)
List: pgsql-general
On Apr 3, 2007, at 11:44 AM, Scott Marlowe wrote:

> I can't help but think that the way this application writes data is
> optimized for MySQL's transactionless table type, where lots of
> simultaneous input streams writing at the same time to the same table
> would be death.
>
> Can you step back and work on how the app writes out data, so that it
> opens a persistent connection, and then sends in the updates one at a
> time, committing every couple of seconds while doing so?

I'd look into indexing the tables your UPDATE touches so that you're
not doing so many sequential scans.

I have a system that does many updates on a quickly growing db - 5M
rows last week, 25M this week.

Even simple updates could take forever because of poor indexing on
the fields addressed in the UPDATE's WHERE clause and on the foreign
keys.
After adding the right indexes, the system is super fast again.

So I'd look into creating new indexes and trying to shift the seq
scans into more time-efficient index scans.
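As a rough sketch of what I mean (table and column names here are
made up for illustration): run EXPLAIN on the slow UPDATE, look for
"Seq Scan" nodes, and add indexes on the WHERE columns and on any
referencing foreign-key columns, which PostgreSQL does not index
automatically.

```sql
-- Hypothetical schema: an "orders" table updated by customer_id,
-- and an "order_items" table with a foreign key to orders.
EXPLAIN UPDATE orders SET status = 'shipped' WHERE customer_id = 42;
-- A "Seq Scan on orders" in the plan means every row gets read.

-- Index the column used in the WHERE clause:
CREATE INDEX orders_customer_id_idx ON orders (customer_id);

-- PostgreSQL creates indexes for primary keys but NOT for the
-- referencing side of a foreign key, so updates/deletes on the
-- parent can force seq scans of the child table; index it too:
CREATE INDEX order_items_order_id_idx ON order_items (order_id);

-- Re-check: the plan should now show an Index Scan
-- (or Bitmap Index Scan) instead of a Seq Scan.
EXPLAIN UPDATE orders SET status = 'shipped' WHERE customer_id = 42;
```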


