Re: Optimizing large data loads

From Richard Huxton
Subject Re: Optimizing large data loads
Date
Msg-id 42F365F2.8050007@archonet.com
In response to Optimizing large data loads  ("John Wells" <jb@sourceillustrated.com>)
Responses Re: Optimizing large data loads  ("John Wells" <jb@sourceillustrated.com>)
List pgsql-general
John Wells wrote:
> Hi guys,
>
> We have a Java process that uses Hibernate to load approximately 14 GB of
> data.  On a dual-proc 2.4 GHz Xeon with 2048 MB RAM, it's currently
> taking over 13 hours to load (PostgreSQL 7.4.8).  We're flushing from
> Hibernate every 50 records.
>
> I've turned fsync to false in postgresql.conf, and we've turned autocommit
> off in our code.  Is there anything else I could try to temporarily speed
> up inserts?

You don't say what the limitations of Hibernate are. Usually you might
look to:
1. Use COPY not INSERTs (rough JDBC sketch for 1 and 2 after this list)
2. If not, batch the INSERTs into BEGIN/COMMIT transactions of, say,
100-1000 rows each
3. Turn fsync off
4. DROP/RESTORE constraints/triggers/indexes while you load your data
5. Increase sort_mem/work_mem in your postgresql.conf when recreating
indexes etc.
6. Use multiple processes to make sure the I/O is maxed out.
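
For 1 and 2, if you can bypass Hibernate for the bulk load and talk JDBC
directly, something along these lines might work. It's only a rough
sketch: the table and column names (mytable, id, payload) are invented,
and the CopyManager copy API is only in the more recent pgJDBC drivers,
so check what your driver version supports before relying on it.

import java.io.Reader;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class BulkLoad {

    /* 1. COPY straight from a tab-delimited stream -- by far the fastest. */
    static void copyLoad(Connection conn, Reader data) throws Exception {
        CopyManager copy = ((PGConnection) conn).getCopyAPI();
        copy.copyIn("COPY mytable (id, payload) FROM STDIN", data);
    }

    /* 2. If you must INSERT, batch ~1000 rows per transaction instead of
       flushing/committing every 50 records. */
    static void batchedInserts(Connection conn, List<String[]> rows) throws Exception {
        conn.setAutoCommit(false);
        PreparedStatement ps =
            conn.prepareStatement("INSERT INTO mytable (id, payload) VALUES (?, ?)");
        int n = 0;
        for (String[] row : rows) {
            ps.setInt(1, Integer.parseInt(row[0]));
            ps.setString(2, row[1]);
            ps.addBatch();
            if (++n % 1000 == 0) {   // send and commit each batch of 1000
                ps.executeBatch();
                conn.commit();
            }
        }
        ps.executeBatch();           // flush the final partial batch
        conn.commit();
        ps.close();
    }
}

Committing every thousand rows or so avoids paying the per-commit
overhead on each row without holding one enormous transaction open, and
combined with dropping indexes/triggers first (point 4) it usually makes
the biggest difference.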

Any of those do-able?

--
   Richard Huxton
   Archonet Ltd
