Re: Large selects handled inefficiently?

From Chris
Subject Re: Large selects handled inefficiently?
Date
Msg-id 39ADDEDE.3BFB4EB7@nimrod.itg.telecom.com.au
In reply to Large selects handled inefficiently?  (Jules Bean <jules@jellybean.co.uk>)
Responses Re: Large selects handled inefficiently?  (Jules Bean <jules@jellybean.co.uk>)
List pgsql-general
Jules Bean wrote:
>
> On Thu, Aug 31, 2000 at 12:22:36AM +1000, Andrew Snow wrote:
> >
> > > I believe I can work around this problem using cursors (although I
> > > don't know how well DBD::Pg copes with cursors).  However, that
> > > doesn't seem right -- cursors should be needed to fetch a large query
> > > without having it all in memory at once...
> >
> > Actually, I think that's why cursors were invented in the first place ;-)  A
> > cursor is what you are using if you're not fetching all the results of a
> > query.
>
> I really can't agree with you there.
>
> A cursor is another slightly foolish SQL hack.

Not quite, but it is true that this is a flaw in postgres. Implementing a
"streaming" interface has been discussed on hackers from time to time.
With that, the client wouldn't have to absorb the whole result set before
giving you access to it: you could start processing rows as they become
available, with the client blocking until more arrive. The main changes
would be to the libpq client library, but there are other issues to
address as well, such as what happens if an error occurs halfway through.
In short, I'm sure this will be fixed at some stage, but for now cursors
are the only real answer.
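
For what it's worth, the cursor route is not too painful from DBD::Pg.
Here is a minimal sketch (the table name big_table, the cursor name
big_cur and the connection parameters are placeholders for your own);
the one thing to remember is that DECLARE CURSOR has to run inside a
transaction, hence AutoCommit => 0:

    use strict;
    use warnings;
    use DBI;

    # Placeholder connection details -- substitute your own.
    my $dbh = DBI->connect("dbi:Pg:dbname=mydb", "user", "password",
                           { AutoCommit => 0, RaiseError => 1 });

    # The cursor only lives inside a transaction, which AutoCommit => 0 gives us.
    $dbh->do("DECLARE big_cur CURSOR FOR SELECT * FROM big_table");

    # Pull the rows over in batches of 1000 instead of all at once.
    my $sth = $dbh->prepare("FETCH 1000 FROM big_cur");
    while (1) {
        $sth->execute;
        last unless $sth->rows;    # cursor exhausted
        while (my @row = $sth->fetchrow_array) {
            # process one row at a time here
        }
    }

    $dbh->do("CLOSE big_cur");
    $dbh->commit;
    $dbh->disconnect;

Memory use in the client then stays proportional to the batch size rather
than to the full result set.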
