Re: [HACKERS] I want to change libpq and libpgtcl for better handling of large query results

From: Constantin Teodorescu
Subject: Re: [HACKERS] I want to change libpq and libpgtcl for better handling of large query results
Date:
Msg-id: 34B3396D.50488A88@flex.ro
In reply to: Re: [HACKERS] I want to change libpq and libpgtcl for better handling of large query results  (Peter T Mount <psqlhack@maidast.demon.co.uk>)
List: pgsql-hackers
Peter T Mount wrote:
>
> The only solution I was able to give was for them to use cursors, and
> fetch the result in chunks.

Got it!!!

Seems everyone has 'voted' for using cursors.

As a matter of fact, I have tested both a
BEGIN ; DECLARE CURSOR ; FETCH N ; END;
sequence and a plain
SELECT FROM

Both of them lock the tables they use against writes until the end
of processing.

Fetching records in chunks (of 100) would speed the processing up a little.
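As a rough illustration, the chunked-fetch loop has the following shape. This is a hedged Python sketch, not libpq or libpgtcl code: the `fetch(n)` callable stands in for a `FETCH n FROM cur` round trip, and the fake in-memory result set replaces a live backend.

```python
def process_in_chunks(fetch, handle, chunk_size=100):
    """Repeatedly fetch up to chunk_size rows and hand each batch to
    handle(), stopping when the cursor is exhausted -- the same shape
    as a DECLARE CURSOR / FETCH 100 / ... / CLOSE loop."""
    total = 0
    while True:
        rows = fetch(chunk_size)      # stands in for: FETCH 100 FROM cur
        if not rows:                  # empty result => cursor exhausted
            break
        handle(rows)
        total += len(rows)
    return total

# Demonstration with a fake 250-row result set instead of a live backend.
data = list(range(250))

def fake_fetch(n, _pos=[0]):
    chunk = data[_pos[0]:_pos[0] + n]
    _pos[0] += n
    return chunk

batches = []
count = process_in_chunks(fake_fetch, batches.append)
# count is 250; the batches arrive as 100, 100, then a final 50 rows.
```

The point of the loop shape is that the frontend's memory use is bounded by one chunk rather than by the whole result set.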

But I am still convinced that if the frontend were able to process
tuples as soon as they arrive, the overall time to process a big table
would be lower.
When fetching in chunks, the frontend waits for the 100 records to
arrive (time A) and then processes them (time B); A and B cannot overlap.
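To make the overlap argument concrete, here is a small back-of-the-envelope model (the chunk counts and per-chunk times below are illustrative assumptions, not measurements): with n chunks each costing A to fetch and B to process, strict alternation costs n*(A+B), while a streaming frontend that processes tuples as they arrive approaches a pipeline bound of A + (n-1)*max(A, B) + B.

```python
def chunked_total(n, fetch_a, process_b):
    """Fetch, then process, each of n chunks in strict alternation."""
    return n * (fetch_a + process_b)

def streamed_total(n, fetch_a, process_b):
    """Pipeline model: while chunk k is being processed, chunk k+1 is
    being fetched, so the slower of the two stages sets the pace."""
    return fetch_a + (n - 1) * max(fetch_a, process_b) + process_b

# Illustrative numbers: 10 chunks, 1 time unit each to fetch and process.
print(chunked_total(10, 1, 1))   # 20
print(streamed_total(10, 1, 1))  # 11
```

Under this model the streamed frontend wins whenever fetch and process times are comparable, which is the intuition behind processing tuples as they come.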

Thanks a lot for helping me to decide. Reports in PgAccess will use
cursors.

--
Constantin Teodorescu
FLEX Consulting Braila, ROMANIA
