On Thu, Apr 03, 2008 at 09:38:42PM -0400, Tom Lane wrote:
> Sam Mason <sam@samason.me.uk> writes:
> > On Thu, Apr 03, 2008 at 03:57:38PM -0400, Tom Lane wrote:
> >> I liked the idea of allowing COPY FROM to act as a table source in a
> >> larger SELECT or INSERT...SELECT. Not at all sure what would be
> >> involved to implement that, but it seems a lot more flexible than
> >> any other approach.
>
> > I'm not sure why new syntax is needed, what's wrong with having a simple
> > set of procedures like:
> > readtsv(filename TEXT) AS SETOF RECORD
>
> Yeah, I was thinking about that too. The main stumbling block is that
> you need to somehow expose all of COPY's options for parsing an input
> line (CSV vs default mode, quote and delimiter characters, etc).
Guess why I chose a nice simple example!
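For concreteness, such a function would be used like any other
record-returning function, with the caller supplying a column definition
list (readtsv and the file name here are made up, of course):

    SELECT t.id, t.name
    FROM readtsv('/tmp/users.tsv') AS t(id INTEGER, name TEXT)
    WHERE t.id > 100;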
> It's surely doable but it might be pretty ugly compared to bespoke
> syntax.
Yes, that's an easy way to get it looking pretty.
As an alternative, how about having a datatype that stores these
parameters, e.g.:
    CREATE TYPE copyoptions AS (
        delimiter TEXT,
        nullstr   TEXT,
        hasheader BOOLEAN,
        quote     TEXT,
        escape    TEXT
    );

(Composite types can't carry CHECK constraints, so something like
requiring a non-empty delimiter would have to be enforced in the
function itself.)
And have the input_function understand the current PG syntax for COPY
options. You'd then be able to do:
copyfrom('dummy.csv',$$ DELIMITER ';' CSV HEADER $$)
And the procedure would be able to pull out what it wanted from the
options.
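That would let it slot straight into INSERT ... SELECT too; a sketch,
with copyfrom still being the hypothetical function from above and the
column list invented:

    INSERT INTO products (code, name, price)
    SELECT * FROM copyfrom('dummy.csv', $$ DELIMITER ';' CSV HEADER $$)
        AS t(code TEXT, name TEXT, price NUMERIC)
    WHERE t.price > 0;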
> Another thing is that nodeFunctionScan.c is not really designed for
> enormous function result sets --- it dumps the results into a tuplestore
> whether that's needed or not. This is a performance bug that we ought
> to address anyway, but we'd really have to fix it if we want to approach
> the COPY problem this way. Just sayin'.
So you'd end up with something resembling a coroutine? When would it
be good to actually dump everything into a tuplestore as it does at the
moment?
It'll be fun to see how much code breaks because it relies on the
current behaviour of an SRF running to completion without other activity
happening in between!
Sam