Re: Improve COPY performance for large data sets

From: Dimitri Fontaine
Subject: Re: Improve COPY performance for large data sets
Date:
Msg-id: 200809101917.40204.dfontaine@hi-media.com
In reply to: Improve COPY performance for large data sets  (Ryan Hansen <ryan.hansen@brightbuilders.com>)
List: pgsql-performance
Hi,

On Wednesday 10 September 2008, Ryan Hansen wrote:
> One thing I'm experiencing some trouble with is running a COPY of a
> large file (20+ million records) into a table in a reasonable amount of
> time.  Currently it's taking about 12 hours to complete on a 64 bit
> server with 3 GB memory allocated (shared_buffer), single SATA 320 GB
> drive.  I don't seem to get any improvement running the same operation
> on a dual opteron dual-core, 16 GB server.

Your single SATA disk is probably very busy alternating between reading the
source file and writing data. You could try raising checkpoint_segments to 64
or more, but a single SATA disk won't give you high I/O performance. You're
getting what you paid for...
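A minimal sketch of the suggested change, as it would appear in postgresql.conf (the value 64 is the figure mentioned above; the disk-usage estimate assumes the 16 MB WAL segment size of 8.x-era PostgreSQL):

```
# postgresql.conf -- fewer, larger checkpoints during bulk COPY
# Peak WAL disk usage is roughly (2 * checkpoint_segments + 1) * 16 MB,
# so 64 segments can occupy about 2 GB. Raise only if that space is available.
checkpoint_segments = 64
```

Reload or restart the server after editing so the setting takes effect.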

You could perhaps ease the disk load by launching the COPY from a remote
(local-network) machine, and, since the file is big, try parallel loading
with pgloader.
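A sketch of the remote-client approach using psql's \copy, which reads the file on the client machine so the database server's disk handles only writes. The host, database, table, and file names here are placeholders, not from the original thread:

```shell
# Run from a machine on the local network, not on the database server.
# \copy streams the client-side file over the connection; the server disk
# is then free to do only WAL and table writes.
psql -h dbserver -d mydb \
     -c "\copy mytable FROM '/local/path/data.csv' WITH CSV"

# For parallel loading, pgloader can split the work across several
# concurrent COPY streams; see its documentation for the configuration
# syntax, which differs between versions.
```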

Regards,
--
dim


In the pgsql-performance list, by date:

Previous
From: Bill Moran
Date:
Message: Re: Improve COPY performance for large data sets
Next
From: "Scott Marlowe"
Date:
Message: Re: too many clog files