On 08/11/20 23:21, Daniele Varrazzo wrote:
[snip]
> 1. If we specify `numeric` or `int8` as oid, inserting in an int field
> in a table will work ok, but some functions/operators won't (e.g. "1
> > %s").
> 2. If we specify `int4` it would work for those few functions defined
> as `integer`, but if we try to write a number that doesn't fit in 32
> bits into a Postgres bigint field I assume something will overflow
> along the way, even if both python and postgres can handle it.
> 3. If we specify `unknown` it might work more often, but
> `cursor.execute("select %s", [10])` will return the string "10" instead
> of a number.
>
> So I wonder what's the best compromise here: the least bad seems to be
> 1; option 3 might work in more contexts, but it's a very
> behaviour, and roundtripping other objects (dates, uuid) works no
> problem: they don't come back as strings.

Looking at what the adapters in other languages/frameworks do, the common
solution is to choose the "best fitting" type and let the programmer add
a cast when needed. This is easier in statically typed languages, where
we have an almost perfect overlap between PostgreSQL and platform types,
but a bit more difficult in dynamically typed languages like Python, where
the available types are abstracted over the platform ones (numbers are a
good example).
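
The cast workaround can be sketched like this (a sketch only: the
function name is invented, nothing here touches a database, and the
premise that the driver sends Python ints with a wide type such as int8
is the assumption under discussion, not confirmed behaviour):

```python
# some_int4_only_function is a made-up name standing in for any
# function or operator defined solely for int4/integer.
params = [10]

# If the parameter arrives as int8, this call may fail to resolve:
query_plain = "select some_int4_only_function(%s)"

# The programmer-supplied cast fixes the type in the SQL itself:
query_cast = "select some_int4_only_function(%s::int4)"
```

The point is only that the fix lives in the SQL text, so the driver's
default choice stays simple.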

In your example I'd just go for int8 (the largest plain integer type in
PostgreSQL). Decimal would probably be better (largest range) but it is
not what the majority of people would expect. IMHO, oid is a bad idea
because it has very specific semantics and the error messages generated
by PostgreSQL will be more confusing.
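
To make the ranges concrete, here is a plain-Python sketch (the helper
name is mine, not part of any driver) of which PostgreSQL type a given
Python int fits without overflow:

```python
# Documented PostgreSQL integer ranges (32-bit and 64-bit two's complement).
INT4_MIN, INT4_MAX = -2**31, 2**31 - 1
INT8_MIN, INT8_MAX = -2**63, 2**63 - 1

def smallest_pg_type(value: int) -> str:
    """Hypothetical helper: the narrowest PostgreSQL type that can hold
    a given Python int without overflow."""
    if INT4_MIN <= value <= INT4_MAX:
        return "int4"
    if INT8_MIN <= value <= INT8_MAX:
        return "int8"
    # Python ints are unbounded; beyond 64 bits only numeric fits.
    return "numeric"

print(smallest_pg_type(10))      # -> "int4"
print(smallest_pg_type(2**40))   # -> "int8"
print(smallest_pg_type(2**70))   # -> "numeric"
```

So int8 covers everything most programs will ever send, while numeric
(Decimal) is the only choice that covers *every* Python int.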
federico