Re: Large selects handled inefficiently?

From: Jules Bean
Subject: Re: Large selects handled inefficiently?
Msg-id: 20000831110637.E24680@grommit.office.vi.net
In response to: Re: Large selects handled inefficiently?  (Jules Bean <jules@jellybean.co.uk>)
Responses: RE: Large selects handled inefficiently?  ("Hiroshi Inoue" <Inoue@tpf.co.jp>)
List: pgsql-general
On Thu, Aug 31, 2000 at 09:58:34AM +0100, Jules Bean wrote:
> On Thu, Aug 31, 2000 at 03:28:14PM +1100, Chris wrote:
>
> > but it is true that this is a flaw in postgres. It has been
> > discussed on hackers from time to time about implementing a
> > "streaming" interface, meaning that the client doesn't have to
> > absorb all the results before allowing access to them: you can
> > start processing rows as and when they become available, blocking
> > in the client as needed. The main changes would be to the libpq
> > client library, but there would also be other issues to address,
> > like what happens if an error occurs halfway through. In short,
> > I'm sure this will be fixed at some stage, but for now cursors are
> > the only real answer.
>
> Or ...LIMIT...OFFSET, I guess. [As long as I remember to set the
> transaction isolation to serializable.  *sigh*  Why isn't that the
> default?]
>
> I shall investigate whether LIMIT...OFFSET or cursors work better
> for my application.
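
For concreteness, the LIMIT...OFFSET version would look something
like this (table and column names invented for illustration). Note
that a stable ORDER BY is needed for the pages to be consistent, and
that each query re-scans from the top of the ordering, so large
OFFSETs get progressively slower:

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- first page
    SELECT id, payload FROM bigtable ORDER BY id LIMIT 1000 OFFSET 0;
    -- next page; keep adding 1000 to OFFSET until a page comes back
    -- with fewer than 1000 rows
    SELECT id, payload FROM bigtable ORDER BY id LIMIT 1000 OFFSET 1000;
    COMMIT;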

OK, I'm using cursors (after having checked that they work with
DBD::Pg!). I'm a little confused about transaction isolation levels,
though. I'm setting the level to SERIALIZABLE, which seems important,
since other INSERTs might occur during my SELECT. However, the
documentation for DECLARE suggests that the INSENSITIVE keyword is a
no-op, which seems to me equivalent to saying that the transaction
isolation level is always SERIALIZABLE?
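
In case it's useful, the cursor version I'm running is essentially
the following (again with invented names); each FETCH pulls a batch,
so the client never has to hold the whole result set at once:

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    DECLARE csr CURSOR FOR
        SELECT id, payload FROM bigtable ORDER BY id;
    FETCH 1000 FROM csr;   -- repeat until no rows come back
    CLOSE csr;
    COMMIT;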

Jules
