Re: [HACKERS] I want to change libpq and libpgtcl for better handling of large query results

From  The Hermit Hacker
Subject  Re: [HACKERS] I want to change libpq and libpgtcl for better handling of large query results
Date
Msg-id  Pine.NEB.3.96.980106013700.235M-100000@thelab.hub.org
In reply to  I want to change libpq and libpgtcl for better handling of large query results  (Constantin Teodorescu <teo@flex.ro>)
Responses  Re: [HACKERS] I want to change libpq and libpgtcl for better handling of large query results  (PostgreSQL <postgres@deuroconsult.ro>)
List  pgsql-hackers
On Mon, 5 Jan 1998, Constantin Teodorescu wrote:

> In order to do this, the connection is 'cloned' and the query is issued
> on this new connection. For every record fetched, the C callback function
> is called; there the Tcl interpreter is invoked for the source inside the
> loop, then the memory used by the record is released and the next record
> is ready to come.
> More than that, after processing some records, the user can choose to
> break the loop (using the break command in Tcl), which actually breaks
> the connection.
>
> What do we achieve by making these patches?
>
> First of all, the ability to process large tables sequentially.
> Then, increased performance, because receiving the data over the network
> and processing it locally happen in parallel. The backend process on the
> server keeps filling the communication channel with data while the local
> task processes it as it arrives.
> In the old version, the local task has to wait until *all* the data has
> arrived (buffered in memory, if there was enough room) before processing it.
>
> What would I ask from you?
> 1) First of all, whether my needs could be satisfied some other way with
> the current functions in libpq or libpgtcl. I can assure you that with
> the current libpgtcl it is practically impossible, but I am not sure
> whether there is another mechanism using some subtle functions that I
> don't know about.

    Bruce answered this one by asking about cursors...

> 2) Then, if you agree with the idea, to whom should we send a more
> detailed description of the changes we would like to make, so that they
> can be analysed and checked for further development of Pg?

    Here, on this mailing list...

    Now, let's see if I understand what you are thinking of...

    Basically, by "cloning", you are effectively looking at implementing FTP's
way of dealing with a connection: one "control" channel and one "data"
channel, is that right?  So that the "frontend" has a means of sending a STOP
command to the backend even while the backend is still sending the frontend
the data?

    Now, having read Bruce's email before reading this: it doesn't get
around the fact that the backend is still going to have to finish generating
a response to the query before it can send *any* data back, so, as Bruce has
asked, don't cursors already provide what you are looking for?  With cursors,
as I understand it, you basically tell the backend to send forward X tuples
at a time, and if at some point you want to stop, you just break the
connection.
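
    For reference, a minimal sketch of that cursor approach in plain libpq C
(the table name "bigtable" and the batch size of 100 are made up for
illustration, and error handling is abbreviated):

#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");
    PGresult *res;
    int       i;

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* cursors live inside a transaction */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM bigtable"));

    for (;;)
    {
        res = PQexec(conn, "FETCH FORWARD 100 FROM c");
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
        {
            PQclear(res);
            break;                  /* done, error, or we chose to stop */
        }
        for (i = 0; i < PQntuples(res); i++)
            printf("%s\n", PQgetvalue(res, i, 0));
        PQclear(res);               /* only this batch was held in memory */
    }

    PQclear(PQexec(conn, "END"));
    PQfinish(conn);
    return 0;
}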

    With what you are proposing (again, if I'm understanding correctly), the
frontend would effectively accept X bytes of data (or X tuples) and then it
would have an opportunity to send back a STOP over a control channel...
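
    By way of contrast, here is a rough and purely hypothetical sketch of the
kind of per-tuple callback interface being proposed; neither PQloopQuery nor
PGtupleHandler exist in libpq, they are only meant to show the shape of it:

/*
 * Hypothetical only: the handler would be called once per tuple as it
 * arrives on the cloned connection, instead of buffering the whole
 * result.  Returning non-zero means "STOP", which is where the control
 * channel (or simply breaking the connection) would come in.
 */
typedef int (*PGtupleHandler) (PGresult *one_tuple, void *arg);

extern int PQloopQuery(PGconn *conn, const char *query,
                       PGtupleHandler handler, void *arg);

static int
print_and_maybe_stop(PGresult *one_tuple, void *arg)
{
    long *seen = (long *) arg;

    printf("%s\n", PQgetvalue(one_tuple, 0, 0));
    return (++(*seen) >= 1000);     /* stop after 1000 tuples */
}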

    Oversimplified, I know, but I'm a simple man *grin*

Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org

