Re: Optimizing large data loads

From: John Wells
Subject: Re: Optimizing large data loads
Date:
Msg-id: 52979.172.16.3.2.1123336687.squirrel@devsea.net
In reply to: Re: Optimizing large data loads  (Richard Huxton <dev@archonet.com>)
List: pgsql-general
Richard Huxton said:
> You don't say what the limitations of Hibernate are. Usually you might
> look to:
> 1. Use COPY not INSERTs

Not an option, unfortunately.
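(For readers following the thread: suggestion 1 refers to PostgreSQL's bulk-load command, which takes rows from a file or stream in one statement instead of one INSERT per row. A minimal sketch — the table and path are made up for illustration:)

```sql
-- Bulk-load tab-separated rows from a server-side file.
-- One COPY replaces thousands of individual INSERT round trips.
COPY mytable (id, name) FROM '/path/to/data.dat';
```

From psql, the client-side variant `\copy` reads the file on the client machine instead, which avoids needing the file on the server.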

> 2. If not, block INSERTS into BEGIN/COMMIT transactions of say 100-1000

We're using 50/commit...we can easily up this I suppose.
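(For reference, batching here just means wrapping the INSERTs in an explicit transaction so the WAL is flushed once per batch rather than once per row — a sketch with illustrative table and values:)

```sql
-- One WAL flush per COMMIT instead of one per INSERT.
BEGIN;
INSERT INTO mytable (id, name) VALUES (1, 'a');
INSERT INTO mytable (id, name) VALUES (2, 'b');
-- ... up to ~1000 rows ...
COMMIT;
```

On the Hibernate side, the `hibernate.jdbc.batch_size` property, together with periodic `flush()`/`clear()` on the session, lets Hibernate send inserts in JDBC batches within each transaction.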

> 3. Turn fsync off

Done.
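(The corresponding postgresql.conf fragment, for the duration of the load only — with `fsync = off` a crash mid-load can corrupt the cluster, so it is only safe when the data can be reloaded from scratch:)

```
# postgresql.conf -- bulk-load settings, revert afterwards
fsync = off                 # skip WAL flushes; unsafe if the box crashes
checkpoint_segments = 32    # fewer checkpoints during heavy writes
```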

> 4. DROP/RESTORE constraints/triggers/indexes while you load your data

Hmmm...will have to think about this a bit...not a bad idea but not sure
how we can make it work in our situation.

> 5. Increase sort_mem/work_mem in your postgresql.conf when recreating
> indexes etc.
> 6. Use multiple processes to make sure the I/O is maxed out.

Suggestion 5 falls in line with 4, and 6 is definitely doable.
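(A sketch of how 5 combines with 4: give the post-load index build more sort memory for that session only. In 8.0 and later the index build is governed by `maintenance_work_mem`; older releases use `sort_mem`. The value and index name are illustrative; pre-8.2 servers take the setting as an integer in KB:)

```sql
-- Session-local bump for the rebuild; 262144 KB = 256 MB.
SET maintenance_work_mem = 262144;
CREATE INDEX mytable_name_idx ON mytable (name);
RESET maintenance_work_mem;
```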

Thanks for the suggestions!

John

