Abhijit Menon-Sen wrote:
>I've been working on making it possible for PL/Perl users to fetch large
>result sets one row at a time (the current spi_exec_query interface just
>returns a big hash).
>
>The idea is to have spi_query call SPI_prepare/SPI_cursor_open, and have
>an spi_fetchrow that calls SPI_cursor_fetch. It works well enough, but I
>don't know how to reproduce spi_exec_query's error handling (it runs the
>SPI_execute in a subtransaction).
>
>To do something similar, I would have to create a WITH HOLD cursor in my
>spi_query function. But SPI_cursor_open provides no way to do this, and
>it calls PortalStart before I can set CURSOR_OPT_HOLD myself.
>
and later:
>One possibility would be to make plperl_call_handler create the internal
>subtransaction, so that all of the perl code runs inside it. But I'm not
>sure if that would actually work, especially if one of the SPI functions
>failed. But I can't think of what else to do, either.
>
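From the PL/Perl user's side, the interface described above (spi_query to open, spi_fetchrow to pull rows one at a time) amounts to lazy row-at-a-time iteration. Here's a generic sketch of that flow in Python; apart from the spi_query/spi_fetchrow idea itself, every name and value is invented for illustration:

```python
# Sketch of the proposed cursor-style flow: "open" does no fetching
# (stand-in for SPI_prepare + SPI_cursor_open), and each call to
# fetchrow pulls one row on demand (stand-in for SPI_cursor_fetch).
# Names and data are illustrative, not PostgreSQL's actual API.

def open_cursor(rows):
    it = iter(rows)

    def fetchrow():
        # Return the next row, or None when the result set is exhausted.
        return next(it, None)

    return fetchrow

fetchrow = open_cursor([("a", 1), ("b", 2), ("c", 3)])
seen = []
while (row := fetchrow()) is not None:
    seen.append(row)
# seen now holds all three rows, fetched one call at a time
```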

This is an important piece of work toward making plperl really usable.

Is it possible to do this with non-SPI calls?

Is it possible to do it without using a cursor? For example, run the
query all at once and store the data in a TupleStore (rather like you
did for plperl's return_next), then hand the rows to plperl one at a
time on demand, in effect a sort of homegrown cursor. Could something
like that be done in a PG_TRY block?
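To make the "homegrown cursor" idea concrete: run the query to completion up front, buffer every row (as a tuplestore would), and then hand rows back one at a time on demand. A language-neutral sketch in Python, with the query runner and rows entirely made up for illustration:

```python
# Homegrown cursor: materialize the whole result set eagerly, then
# serve it incrementally. Any execution error surfaces during
# construction, i.e. inside whatever error-trapping scope the caller
# set up (the PG_TRY / subtransaction analogue).

class HomegrownCursor:
    def __init__(self, run_query, sql):
        # Execute the whole query now and buffer the rows
        # (stand-in for SPI_execute filling a TupleStore).
        self._rows = iter(run_query(sql))

    def fetchrow(self):
        # Hand back the next buffered row, or None when exhausted.
        return next(self._rows, None)

def fake_run_query(sql):
    # Stand-in for the real executor: three made-up rows.
    return [{"id": 1}, {"id": 2}, {"id": 3}]

cur = HomegrownCursor(fake_run_query, "SELECT id FROM t")
rows = []
while (row := cur.fetchrow()) is not None:
    rows.append(row)
# rows now holds all three buffered rows
```

The trade-off versus a real cursor is memory: the whole result set lives in the buffer at once, but no open cursor (or WITH HOLD machinery) has to survive between fetch calls.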

I'm just thinking off the top of my head here, because I don't know the
answer; I'm hoping some kindly wizard will speak up and set us both
straight :-)

cheers
andrew