On Wed, Feb 01, 2006 at 09:47:12AM +1100, James Harper wrote:
> > Be careful what you infer from such a scan: not finding any NULLs
> > doesn't necessarily mean a column isn't nullable, it just means the
> > result set didn't contain any NULLs.
>
> I understand that limitation, but haven't figured out if it matters in
> my situation. The only time it might is if the client wants to infer a
> schema as a result of a query, eg 'SELECT * FROM TableX WHERE 0 = 1'.
Even if such a query did return a "nullable" flag, plenty of other
metadata that might be just as interesting from a schema-viewing
standpoint would still be absent (CHECK constraints, PRIMARY KEY,
etc.).  A better way to view the schema is to query the system
catalogs or the Information Schema.
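For example, something along these lines (the table name is just an
illustration; information_schema.columns is standard SQL, and
pg_get_constraintdef() is a PostgreSQL function):

```sql
-- Column-level metadata, including nullability:
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_name = 'tablex';

-- Constraint definitions (PRIMARY KEY, CHECK, etc.) from the
-- system catalogs:
SELECT conname, contype, pg_get_constraintdef(oid)
FROM pg_constraint
WHERE conrelid = 'tablex'::regclass;
```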
> In the above example, does the database engine assign internally a
> 'nullability' flag? I guess it must do... because how would the
> following be evaluated:
>
> SELECT f1 + f2 AS f INTO TableY FROM TableX WHERE f1 < 30
>
> Would the column f in the created table be nullable or not?
>
> I guess I need to do some testing unless you know off the top of your
> head?
I'm not familiar enough with PostgreSQL internals to comment on
what happens underneath, though I could tell you from experience
what the above query would do.  Then again, with a quick test you
could figure that out for yourself :-)
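The test could look something like this (table and column names
match your example; attnotnull in pg_attribute records whether a
column has a NOT NULL constraint):

```sql
CREATE TABLE tablex (f1 integer NOT NULL, f2 integer NOT NULL);
SELECT f1 + f2 AS f INTO tabley FROM tablex WHERE f1 < 30;

-- Does the derived column f carry a NOT NULL constraint?
SELECT attname, attnotnull
FROM pg_attribute
WHERE attrelid = 'tabley'::regclass AND attnum > 0;
```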
> Hmmm... so a select statement with result set of a million rows is going
> to stall for a while before the results are usefully available to the
> client, and is then going to use a significant amount of memory on the
> client...
Correct.
> Is this a limitation of libpq or of the underlying database engine?
The "Incremental results from libpq" thread from a few months ago
might answer your questions:
http://archives.postgresql.org/pgsql-interfaces/2005-11/msg00010.php
> Are there any alternative (but native - eg not ODBC) interfaces to
> postgresql?
What problem do you want the alternative to solve?  If it's just
the libpq-fetches-all-rows problem, then you could use a cursor.
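A rough sketch of the cursor approach (cursor and table names are
illustrative; a non-WITH HOLD cursor must live inside a transaction
block):

```sql
BEGIN;
DECLARE myc CURSOR FOR SELECT * FROM tablex;

-- Fetch in batches; repeat until a FETCH returns no rows:
FETCH 1000 FROM myc;
-- ...process the batch on the client, then FETCH again...

CLOSE myc;
COMMIT;
```

Each FETCH returns only that batch through libpq, so the client's
memory use is bounded by the batch size rather than the full result
set.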
--
Michael Fuhr