Pipelining INSERTs using libpq
| From | Florian Weimer |
|---|---|
| Subject | Pipelining INSERTs using libpq |
| Date | |
| Msg-id | 50D43A7D.1030406@redhat.com |
| Replies | Re: Pipelining INSERTs using libpq |
| List | pgsql-general |
I would like to pipeline INSERT statements. The idea is to avoid waiting for server round trips when the INSERT has no RETURNING clause and runs inside a transaction. In my case, the failure of an individual INSERT is not particularly interesting (it's a "can't happen" scenario, more or less). I believe this is how the X toolkit avoided network latency issues.

I wonder what the best way is to pipeline requests to the server using the libpq API. Historically, I have used COPY FROM STDIN instead, but that requires (double) encoding and some client-side buffering, plus heuristics when multiple tables are being filled.

It does not seem possible to use the asynchronous APIs for this purpose, or am I missing something?

--
Florian Weimer / Red Hat Product Security Team