Re: Practical impediment to supporting multiple SSL libraries

From: Stephen Frost
Subject: Re: Practical impediment to supporting multiple SSL libraries
Msg-id: 20060414184215.GA4474@ns.snowman.net
In reply to: Re: Practical impediment to supporting multiple SSL libraries  (Greg Stark <gsstark@mit.edu>)
Responses: Re: Practical impediment to supporting multiple SSL libraries  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
* Greg Stark (gsstark@mit.edu) wrote:
> Stephen Frost <sfrost@snowman.net> writes:
> > Another thought along these lines:  Perhaps a 'PQgettuple' which can be
> > used to process one tuple at a time.  This would be used in an ASYNC
> > fashion and libpq just wouldn't read/accept more than a tuple's worth
> > each time, which it could do into a fixed area (in general, for a
> > variable-length field it could default to an initial size and then only
> > grow it when necessary, and grow it larger than the current request by
> > some amount to hopefully avoid more malloc/reallocs later).
>
> I know DBD::Oracle uses an interface somewhat like this but more
> sophisticated. It provides a buffer and Oracle fills it with as many records
> as it can.

The API I suggested originally did this, actually.  I'm not sure it
would be used in these cases though, which is why I was backing away from
it a bit.  I think it's great if you're grabbing a lot of data, but these
seem to be cases where you're not.  Then again, that's probably because
of the kind of things I was looking at: you don't generally see large
data-analysis tools in a distribution like Debian, simply because those
tools are usually specialized to a given data set.  That's actually the
case with some tools we use here at my work, which make use of the Oracle
buffer system, and I'd love to move to something similar for Postgres.
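
To make that concrete, here's a rough sketch of the sort of
buffer-oriented call I have in mind.  PQTupleBuffer and
PQfetchIntoBuffer are made-up names for illustration only; nothing
like this exists in libpq today:

    /*
     * Hypothetical sketch only -- neither PQTupleBuffer nor
     * PQfetchIntoBuffer exists in libpq; the names are invented here
     * to illustrate the proposal.
     */
    #include <stddef.h>
    #include <libpq-fe.h>

    typedef struct
    {
        char   *data;       /* caller-owned storage for tuple data */
        size_t  size;       /* size of that storage in bytes */
        int     ntuples;    /* filled in: complete tuples copied */
    } PQTupleBuffer;

    /* Imagined call: copy as many complete tuples as fit into buf.
     * Returns the tuple count, 0 when the result set is exhausted,
     * -1 on error. */
    extern int  PQfetchIntoBuffer(PGconn *conn, PQTupleBuffer *buf);

    static void
    drain_result(PGconn *conn)
    {
        char            storage[64 * 1024];
        PQTupleBuffer   buf = { storage, sizeof(storage), 0 };

        /* libpq would read only enough off the socket to refill the
         * buffer, so a slow consumer pushes back on the server instead
         * of ballooning client-side memory. */
        while (PQfetchIntoBuffer(conn, &buf) > 0)
        {
            /* process buf.ntuples tuples from buf.data here */
        }
    }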

> It's blocking though (by default) and DBD::Oracle tries to adjust the size of
> the buffer to keep the network pipeline full, but if the application is slow
> at reading the data then the network buffers fill and it pushes back to the
> database which blocks writing.

It could be done as either blocking or non-blocking, and that could be
an option in the API, really.  I do prefer the idea that if the
application is slow at reading the data then it pushes back to the
database to block writing.  I also *really* prefer to minimize the
amount of memory used by libraries...  I've never felt it's appropriate
for libpq to allocate huge amounts of memory in response to a large
query. :/  I know this can be worked around using cursors, but I still
feel it's a terrible thing for a library to do.
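
For reference, the existing non-blocking interface already lets the
application decide when libpq reads from the socket, but PQgetResult
still hands back the whole result set in one PGresult, which is exactly
the buffering I'd like to avoid (query and table name are just
placeholders):

    #include <sys/select.h>
    #include <libpq-fe.h>

    /* Existing async interface: the app controls when libpq reads the
     * socket, but the whole result is still accumulated in memory
     * before PQgetResult returns it.  (Caller should keep calling
     * PQgetResult until it returns NULL.) */
    static PGresult *
    async_fetch(PGconn *conn)
    {
        if (!PQsendQuery(conn, "SELECT * FROM big_table"))
            return NULL;

        while (PQisBusy(conn))
        {
            fd_set      rfds;
            int         sock = PQsocket(conn);

            FD_ZERO(&rfds);
            FD_SET(sock, &rfds);
            if (select(sock + 1, &rfds, NULL, NULL, NULL) < 0)
                return NULL;
            if (!PQconsumeInput(conn))  /* reads whatever the server sent */
                return NULL;
        }

        return PQgetResult(conn);       /* entire result set, buffered */
    }

So the application already controls when bytes get read, just not how
much of the result libpq accumulates before handing it over.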

> This is normally a good thing though. One of the main problems with the
> current libpq interface is that if you have a very large result set it flows
> in as fast as it can and the library buffers it *all*. If you're trying to
> avoid forcing the user to eat millions of records at once you don't want to be
> buffering them anywhere all at once. You want a constant pipeline of records
> streaming out as fast as they can be processed and no faster.

Right...  As I mentioned, the application can use cursors to *work
around* this foolishness in libpq, but that doesn't really make it any
less silly.
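
For anyone following along, the cursor work-around looks something like
this (connection string and table name are placeholders):

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=test");
        PGresult   *res;
        int         i;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn,
                       "DECLARE big_cur CURSOR FOR SELECT * FROM big_table"));

        for (;;)
        {
            res = PQexec(conn, "FETCH 1000 FROM big_cur");
            if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
            {
                PQclear(res);           /* error or cursor exhausted */
                break;
            }
            /* only this chunk of the result is in client memory */
            for (i = 0; i < PQntuples(res); i++)
                printf("%s\n", PQgetvalue(res, i, 0));
            PQclear(res);
        }

        PQclear(PQexec(conn, "CLOSE big_cur"));
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }

The application gets the streaming behaviour, but only by writing extra
SQL; the library itself still offers no way to bound its own memory use,
which is the part that bugs me.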
Thanks!
    Stephen
