Re: Limits on PostgreSQL

From: Tom Lane
Subject: Re: Limits on PostgreSQL
Date:
Msg-id: 536.959295676@sss.pgh.pa.us
In reply to: Limits on PostgreSQL (Marcos Barreto de Castro <mbdecastro@yahoo.com>)
List: pgsql-general
Marcos Barreto de Castro <mbdecastro@yahoo.com> writes:
> 1 - How many tuples can be returned as a result of a
> query using a CURSOR? (Is it possible to do a SELECT
> * on a table that has 2 million records and OPEN a
> CURSOR for that SELECT and show all records' contents
> using FETCH FORWARD, for example?)

You probably wouldn't want to fetch all 2 million rows in one FETCH,
because you'd risk running out of memory on the client side.  But as
long as you grab a reasonable number of rows per FETCH command, it works
fine.  This is in fact the recommended method for dealing with extremely
large SELECT results.
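In psql, the batched pattern looks roughly like this (the table and cursor names are made up for illustration; a cursor must be used inside a transaction block):

```sql
BEGIN;

-- Declare a cursor over the full result set; no rows are fetched yet.
DECLARE bigscan CURSOR FOR SELECT * FROM bigtable;

-- Pull the rows in client-sized chunks; repeat until FETCH returns
-- fewer rows than requested.
FETCH FORWARD 1000 FROM bigscan;
FETCH FORWARD 1000 FROM bigscan;
-- ... and so on ...

CLOSE bigscan;
COMMIT;
```

A batch size in the hundreds or low thousands is usually a reasonable trade-off between round-trip overhead and client memory use.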

>  2 - When one uses a CURSOR for a SELECT is there a
> big memory consumption or there is a memory buffer
> limit and beyond that the result is written to a file
> (virtual memory) and read from there when needed?

Cursors work just the same as plain evaluation of the query would,
except that the query execution is suspended after having fetched the
requested number of tuples.  There isn't any special memory requirement
on the backend side.
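Concretely, each FETCH resumes the suspended executor just long enough to produce the requested number of rows; a later FETCH picks up where the previous one stopped. A small sketch (names are illustrative):

```sql
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM bigtable;

FETCH FORWARD 10 FROM c;   -- executor runs until 10 rows are produced
FETCH FORWARD 10 FROM c;   -- resumes from where it stopped; no re-execution
MOVE FORWARD 100 FROM c;   -- advance the cursor without transferring rows

CLOSE c;
COMMIT;
```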

Some types of queries need temporary files (an explicit sort of a large
volume of data is an example).  But that'll be just the same whether you
run them via a cursor or a plain SELECT.
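For instance, a query with an ORDER BY over unindexed data must complete its sort before the first row can be returned, cursor or not (table and column names below are hypothetical):

```sql
-- Both forms perform the same sort, spilling to temporary files
-- if the data is too large to sort in memory:
SELECT * FROM bigtable ORDER BY somecol;

BEGIN;
DECLARE s CURSOR FOR SELECT * FROM bigtable ORDER BY somecol;
FETCH FORWARD 1 FROM s;    -- the entire sort happens here
CLOSE s;
COMMIT;
```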

            regards, tom lane
