Re: pg_dump additional options for performance

From: Dimitri Fontaine
Subject: Re: pg_dump additional options for performance
Date:
Msg-id: 200802271119.28655.dfontaine@hi-media.com
In reply to: Re: pg_dump additional options for performance  ("Joshua D. Drake" <jd@commandprompt.com>)
List: pgsql-hackers
On Tuesday 26 February 2008, Joshua D. Drake wrote:
> > Think 100GB+ of data that's in a CSV or delimited file.  Right now
> > the best import path is with COPY, but it won't execute very fast as
> > a single process.  Splitting the file manually will take a long time
> > (time that could be spend loading instead) and substantially increase
> > disk usage, so the ideal approach would figure out how to load in
> > parallel across all available CPUs against that single file.
>
> You mean load from position? That would be very, very cool.

Did I mention pgloader now does exactly this when configured like this:
http://pgloader.projects.postgresql.org/dev/pgloader.1.html#_parallel_loading

section_threads = N
split_file_reading = True
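For readers wondering what "load from position" involves, here is a minimal sketch (plain Python, not pgloader's actual code) of the key step: computing per-worker byte ranges in a single CSV file, aligned to line boundaries, so that each worker can seek to its own offset and stream its slice to the server via COPY without splitting the file on disk:

```python
import os

def split_points(path, workers):
    """Compute (start, end) byte ranges for each worker, aligned to
    newline boundaries so no CSV row is split across two workers."""
    size = os.path.getsize(path)
    chunk = size // workers
    points = [0]
    with open(path, "rb") as f:
        for i in range(1, workers):
            f.seek(i * chunk)   # jump to the rough split point
            f.readline()        # advance past the current partial line
            points.append(f.tell())
    points.append(size)
    # Each worker then opens the file, seeks to its start offset, and
    # feeds rows in [start, end) to PostgreSQL via COPY.
    return list(zip(points[:-1], points[1:]))
```

The point of aligning on newlines is that every worker sees only complete rows, so N concurrent COPY streams can consume one large file with no pre-splitting and no extra disk usage.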

IIRC, Simon and Greg Smith asked for pgloader to get those parallel loading
features in order to have some first results and ideas about the performance
gain, as a first step in the parallel COPY backend implementation design.

Hope this helps,
--
dim

In the pgsql-hackers list, by date:

Previous
From: Magnus Hagander
Date:
Message: Re: win32 build problem (cvs, msvc 2005 express)
Next
From: Simon Riggs
Date:
Message: Re: An idea for parallelizing COPY within one backend