Re: [HACKERS] libpq

From: Chris Bitmead
Subject: Re: [HACKERS] libpq
Date:
Msg-id: 38A3ADE3.AAE1FC7D@nimrod.itg.telecom.com.au
In reply to: libpq  (Chris <chris@bitmead.com>)
Responses: Re: [HACKERS] libpq  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Tom Lane wrote:

> Well, that's true from one point of view, but I think it's just libpq's
> point of view.  The application programmer is fairly likely to have
> specific knowledge of the size of tuple he's fetching, and maybe even
> to have a global perspective that lets him decide he doesn't really
> *want* to deal with retrieved tuples on a packet-by-packet basis.
> Maybe waiting till he's got 100K of data is just right for his app.
> 
> But I can also believe that the app programmer doesn't want to commit to
> a particular tuple size any more than libpq does.  Do you have a better
> proposal for an API that doesn't commit any decisions about how many
> tuples to fetch at once?

If you think applications may want to keep 100K of data buffered, isn't
that an argument for the PGobject interface rather than the PGresult
interface?

I'm trying to think of a situation where you want to buffer data. Let's
say psql has a built-in "more"-style pager and needs to buffer a
screenful and step forward line by line. You want to keep the last
40 tuples buffered: at first you want 40 tuples, then one more
each time you press Enter.

This seems like too much responsibility to push onto libpq, but if the
user has control over the destruction of PGobjects, they can buffer what
they want, how they want, when they want. A sketch of that pager scenario
follows below.
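
For what it's worth, here is a minimal sketch of the pager scenario,
assuming present-day libpq rather than the proposed PGobject interface:
single-row mode (PQsetSingleRowMode, which only arrived in PostgreSQL 9.2)
hands the application one row per PGresult, so the application itself
decides how many rows stay buffered and when each one is destroyed. The
connection string and the table name big_table are placeholders, not
anything from this thread.

/*
 * Hypothetical sketch: a "more"-style pager that shows 40 rows, then
 * waits for Enter before fetching more. The buffering policy lives
 * entirely in the application, not in libpq.
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* illustrative connection string */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    if (!PQsendQuery(conn, "SELECT * FROM big_table"))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    PQsetSingleRowMode(conn);    /* one row per PGresult from here on */

    int       shown = 0;
    PGresult *res;

    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
        {
            /*
             * The application owns this one-row result and frees it when
             * it no longer wants it buffered -- the "user controls
             * destruction" idea from the message above.
             */
            printf("%s\n", PQgetvalue(res, 0, 0));
            shown++;
        }
        PQclear(res);

        if (shown == 40)
        {
            getchar();           /* wait for Enter before the next batch */
            shown = 0;
        }
    }

    PQfinish(conn);
    return 0;
}

The point is only that the "40 rows, then one at a time" policy is the
application's decision; libpq never has to guess a buffer size.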

