Discussion: Array dimension lost if dimension length is zero
An Npgsql user has reported an edge case which seems to be a bug. When sending a multidimensional array to PostgreSQL, if the length of a dimension is 0, that dimension is lost. That is, the user sends a 2-dimensional array with 0 for both dimensions; reading this back returns a 1-dimensional array.

Another way to see this problem: running "select ARRAY[[1], [1]]::integer[]" returns "{{1},{1}}". However, running "select ARRAY[[], []]::integer[]" returns "{}".

It seems important for the dimensionality of the array to be preserved regardless of the dimension lengths; otherwise problems occur when the user expects back a 2-dimensional array but receives a single-dimensional one.

For reference, the original issue is https://github.com/npgsql/npgsql/issues/1271
Shay Rojansky <roji@roji.org> writes:
> An Npgsql user has reported an edge case which seems to be a bug.
> When sending a multidimensional array to PostgreSQL, if the length of a
> dimension is 0, that dimension is lost. That is, the user sends a
> 2-dimensional array with 0 for both dimensions; trying to read this back
> will return a 1-dimensional array.

This is intentional: all empty arrays are alike, with zero dimensions (not one dimension, but none).

Probably a lot of these decisions would be made differently if we were starting over in a green field; but since we're not, I'm disinclined to touch it.

			regards, tom lane
Ah, I see. And is changing this in a major version out of the question? It makes dealing with arrays on the client side more complicated than it should be.

On Thu, Sep 8, 2016 at 5:32 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Shay Rojansky <roji@roji.org> writes:
> > An Npgsql user has reported an edge case which seems to be a bug.
> > When sending a multidimensional array to PostgreSQL, if the length of a
> > dimension is 0, that dimension is lost. That is, the user sends a
> > 2-dimensional array with 0 for both dimensions; trying to read this back
> > will return a 1-dimensional array.
>
> This is intentional: all empty arrays are alike, with zero dimensions
> (not one dimension, but none).
>
> Probably a lot of these decisions would be made differently if we were
> starting over in a green field; but since we're not, I'm disinclined
> to touch it.
>
> 			regards, tom lane
> This is intentional: all empty arrays are alike, with zero dimensions
> (not one dimension, but none).

Also, note that when reading an empty array in binary encoding, the dimension field contains one.
>>>>> "Shay" == Shay Rojansky <roji@roji.org> writes:

 >> This is intentional: all empty arrays are alike, with zero
 >> dimensions (not one dimension, but none).

 Shay> Also, note that when reading an empty array in binary encoding,
 Shay> the dimension field contains one.

Nope:

postgres=# select array_send('{}'::integer[]);
         array_send
----------------------------
 \x000000000000000000000017

(that's ndim=00000000, flags=00000000, element_oid=00000017)

--
Andrew (irc:RhodiumToad)
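[Editor's note: Andrew's hex dump can be decoded mechanically. As a sketch outside the thread, the array header consists of three big-endian 4-byte integer fields, and OID 0x17 (decimal 23) is int4:]

```python
import struct

# Binary output of: select array_send('{}'::integer[]);
wire = bytes.fromhex("000000000000000000000017")

# The header is three big-endian int32 fields: ndim, flags, element OID.
# (Per-dimension length/lower-bound pairs would follow, but an empty
# array has none.)
ndim, flags, element_oid = struct.unpack(">iii", wire)

print(ndim)         # 0  -> zero dimensions, as Andrew notes
print(flags)        # 0
print(element_oid)  # 23 -> OID of int4
```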
Apologies, I misinterpreted the results. I still think it may make sense to change this behavior in a major version release, although I don't know what the PostgreSQL policy is on such breakage.

On Fri, Sep 9, 2016 at 7:13 AM, Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:
> >>>>> "Shay" == Shay Rojansky <roji@roji.org> writes:
>
>  >> This is intentional: all empty arrays are alike, with zero
>  >> dimensions (not one dimension, but none).
>
>  Shay> Also, note that when reading an empty array in binary encoding,
>  Shay> the dimension field contains one.
>
> Nope:
>
> postgres=# select array_send('{}'::integer[]);
>          array_send
> ----------------------------
>  \x000000000000000000000017
>
> (that's ndim=00000000, flags=00000000, element_oid=00000017)
>
> --
> Andrew (irc:RhodiumToad)
Shay Rojansky <roji@roji.org> writes:
> I still think it may make sense to change this behavior in a major version
> release, although I don't know what the PostgreSQL policy is on such
> breakage.

I think there's basically no chance of that happening. It would break an enormous amount of code that currently isn't broken, to change to a definition that's only arguably better than what we have.

			regards, tom lane
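[Editor's note: since the server-side behavior is staying, a client driver has to cope with it. Below is a minimal, hypothetical sketch (not Npgsql's actual code) of a reader for the int4[] binary format that surfaces the dimensionality explicitly, so the application can see that an empty array comes back with no dimensions at all and special-case it against the shape it expected:]

```python
import struct

def parse_int4_array(wire: bytes):
    """Parse the binary (array_send) format of an int4[] value.

    Returns (dims, elements): dims is a tuple of per-dimension lengths,
    elements a flat list in row-major order. Hypothetical sketch, not a
    real driver implementation.
    """
    offset = 0
    # Header: ndim, flags, element type OID (all big-endian int32).
    ndim, flags, element_oid = struct.unpack_from(">iii", wire, offset)
    offset += 12
    # Per-dimension length / lower-bound pairs follow the header.
    dims = []
    count = 1 if ndim > 0 else 0   # an empty array has no element data
    for _ in range(ndim):
        length, lower_bound = struct.unpack_from(">ii", wire, offset)
        offset += 8
        dims.append(length)
        count *= length
    # Each element: int32 byte length (-1 for NULL), then the value.
    elements = []
    for _ in range(count):
        (el_len,) = struct.unpack_from(">i", wire, offset)
        offset += 4
        if el_len == -1:
            elements.append(None)
        else:
            (value,) = struct.unpack_from(">i", wire, offset)
            offset += el_len
            elements.append(value)
    return tuple(dims), elements

# The empty array from Andrew's example: no dimensions at all.
print(parse_int4_array(bytes.fromhex("000000000000000000000017")))
# -> ((), [])
# A client that sent a 2-D empty array gets back (), not (0, 0) -- the
# driver or application has to special-case this.
```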