Producer/Consumer Issues in the COPY across network

From: Simon Riggs
Subject: Producer/Consumer Issues in the COPY across network
Date:
Msg-id: 1204023633.4252.225.camel@ebony.site
Responses: Re: Producer/Consumer Issues in the COPY across network
List: pgsql-hackers
I'm looking at ways to reduce the number of network calls and/or the
waiting time while we perform network COPY.

The COPY calls in libpq allow asynchronous operation, yet pg_dump,
Slony and psql's \copy all use them synchronously.

Does anybody have any experience with running COPY in asynchronous
mode? 
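
To make that concrete, here is roughly the sending-side loop I have in
mind (an untested sketch; the table name and the fill_next_row()
producer are placeholders, not code from any of the tools above):

/*
 * Sending side in nonblocking mode: queue rows with PQputCopyData and
 * only sleep in select() when libpq's output buffer is full, so row
 * generation can overlap with the network transfer of earlier rows.
 */
#include <sys/select.h>
#include <libpq-fe.h>

extern int fill_next_row(char *buf, size_t buflen);   /* placeholder producer */

static void
wait_for_write(PGconn *conn)
{
    int     sock = PQsocket(conn);
    fd_set  wfds;

    FD_ZERO(&wfds);
    FD_SET(sock, &wfds);
    (void) select(sock + 1, NULL, &wfds, NULL, NULL);
}

int
copy_in_async(PGconn *conn)
{
    PGresult   *res;
    char        row[8192];
    int         len;

    res = PQexec(conn, "COPY mytable FROM STDIN");    /* placeholder table */
    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        PQclear(res);
        return -1;
    }
    PQclear(res);

    if (PQsetnonblocking(conn, 1) != 0)
        return -1;

    while ((len = fill_next_row(row, sizeof(row))) > 0)
    {
        /* 0 means "output buffer full, retry later": wait, don't spin */
        while (PQputCopyData(conn, row, len) == 0)
            wait_for_write(conn);
    }

    while (PQputCopyEnd(conn, NULL) == 0)
        wait_for_write(conn);

    /* push out anything still queued, then collect the command result */
    while (PQflush(conn) == 1)
        wait_for_write(conn);

    res = PQgetResult(conn);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        PQclear(res);
        return -1;
    }
    PQclear(res);
    return 0;
}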

When we run a COPY over a high-latency link, network time becomes
dominant, so running COPY asynchronously might improve performance for
data loads or for initial Slony configuration. This matters even more
for Slony, where we do both a PQgetCopyData() and a PQputCopyData() in
a tight loop, as sketched below.
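
For that Slony-style relay, the async forms would let us overlap the
read from the provider with the write to the subscriber instead of
blocking on each side in turn. Another rough, untested sketch (error
handling and the final PQgetResult() calls on both connections
omitted):

/*
 * Relay a COPY TO STDOUT on "src" into a COPY FROM STDIN on "dst".
 * PQgetCopyData is called with async = 1 so it returns 0 instead of
 * blocking when no row is buffered; that idle time is used to flush
 * the outgoing buffer on dst.
 */
#include <sys/select.h>
#include <libpq-fe.h>

int
relay_copy(PGconn *src, PGconn *dst)
{
    char   *buf;
    int     len;

    if (PQsetnonblocking(dst, 1) != 0)
        return -1;

    for (;;)
    {
        len = PQgetCopyData(src, &buf, 1);      /* async: may return 0 */

        if (len > 0)
        {
            /* 0 = output buffer full; a fuller version would also
             * select() for write-readiness here instead of spinning */
            while (PQputCopyData(dst, buf, len) == 0)
                PQflush(dst);
            PQfreemem(buf);
        }
        else if (len == 0)
        {
            fd_set  rfds, wfds;
            int     rsock = PQsocket(src);
            int     wsock = PQsocket(dst);
            int     maxfd = (rsock > wsock) ? rsock : wsock;

            FD_ZERO(&rfds);
            FD_ZERO(&wfds);
            FD_SET(rsock, &rfds);
            if (PQflush(dst) == 1)              /* still data queued for dst */
                FD_SET(wsock, &wfds);

            /* sleep until the provider has data or the subscriber can
             * accept more, then pull whatever arrived off the socket */
            if (select(maxfd + 1, &rfds, &wfds, NULL, NULL) < 0)
                return -1;
            if (!PQconsumeInput(src))
                return -1;
        }
        else if (len == -1)
            break;                              /* source COPY complete */
        else
            return -1;                          /* error on the source */
    }

    while (PQputCopyEnd(dst, NULL) == 0)
        PQflush(dst);

    return 0;
}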

I also note that PQgetCopyData always returns just one row. Is there
any underlying buffering between the protocol (which always sends one
message per row) and libpq (which requires one call per row)? It seems
possible for us to request a number of rows from the server, up to a
preferred total transfer size.
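
One quick way to test how much buffering already happens would be to
count how many rows each network read delivers, along these lines
(handle_row() is a placeholder, and a real caller would select() on
the socket rather than loop straight back into PQconsumeInput()):

/*
 * Count rows per network read: one PQconsumeInput() pulls whatever is
 * currently on the socket into libpq's buffer; if there is cross-row
 * buffering, PQgetCopyData(..., 1) should then hand back several rows
 * before returning 0 ("no more buffered rows yet").
 */
#include <stdio.h>
#include <libpq-fe.h>

extern void handle_row(const char *row, int len);   /* placeholder consumer */

int
drain_copy_counting(PGconn *src)
{
    char   *row;
    int     len;

    for (;;)
    {
        int     rows_this_read = 0;

        if (!PQconsumeInput(src))               /* one read from the socket */
            return -1;

        while ((len = PQgetCopyData(src, &row, 1)) > 0)
        {
            handle_row(row, len);
            PQfreemem(row);
            rows_this_read++;
        }

        fprintf(stderr, "rows from this read: %d\n", rows_this_read);

        if (len == -1)
            return 0;                           /* COPY complete */
        if (len == -2)
            return -1;                          /* connection/protocol error */
        /* len == 0: buffer exhausted, go read from the socket again */
    }
}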

PQputCopyData seems to be more efficient with smaller rows.

Ideas? Experience?

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


