Thread: ERROR: failed to find conversion function from key_vals_nn to record[]

ERROR: failed to find conversion function from key_vals_nn to record[]

From
Bryn Llewellyn
Date:
I’ve copied a self-contained testcase below. Is the error that the "as intended" test causes due to a known limitation—or even a semantic dilemma that I'm failing to spot? Or might it be due to a bug?

If you're interested, the testcase rests on this thinking:

Many SQL experts claim that the RDBMS pioneers erred in making data types nullable by default—and they argue that the optional modifier should have been *nullable*. Anyway, the philosophy is clear:

«
Nulls bring all sorts of risks to the programmer that are trivially avoided in most cases where you don’t want nulls anyway.
»

However, it’s a pain to have to remember to write "not null" in a gazillion places. And the user-defined domain (conspicuous by its absence in Oracle Database) is the perfect device to impose your intentions from a single point of maintenance.

I've gone to the max. with the "nn" domain approach in my testcase. It uses a composite type thus:

  type key_val as (k text_nn, v text_nn);

(At least the "text_nn" idea doesn't cause problems.)
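For instance, this quick sanity check (mine, not part of the original post) behaves as expected:

select 'x'::text_nn;   -- ok: returns 'x'
select null::text_nn;  -- ERROR:  domain text_nn does not allow null values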

It uses the "any()" array function to test if a given "key_val" value is found in an array of such values.

The error occurs when I write what I mean, using a "key_val_nn" value and a "key_vals_nn" value.

I can work around the problem by typecasting my values back to the base "key_val" and "key_val[]" types by hand.

So I'm surprised that PG can't manage this typecasting for itself.
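
A standalone sketch of both the failure and the hand-written cast, in plain SQL (untested here; it is meant only to mirror the function below):

select ('a','b')::key_val_nn = any (array[('a','b')::key_val]::key_vals_nn);
-- expected: ERROR:  failed to find conversion function from key_vals_nn to record[]

select ('a','b')::key_val_nn::key_val = any ((array[('a','b')::key_val]::key_vals_nn)::key_val[]);
-- expected: true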

——————————————————————————————

create domain text_nn as text not null;
create type key_val as (k text_nn, v text_nn);

create domain key_val_nn  as key_val not null;
create domain key_vals_nn as key_val_nn[] not null;

create function f(which in text)
  returns text
  language plpgsql
as $body$
declare
  -- Use the raw composite type.
  kv1     constant key_val     := ('a', 'b');
  kv2     constant key_val     := ('a', 'b');
  kvs     constant key_val[]   := array[kv1, kv2];

  -- Use the domains that bring null-safety.
  kv1_nn  constant key_val_nn  := ('a', 'b');
  kvs_nn  constant key_vals_nn := array[kv1, kv2];

  -- Typecast the null-safe values back to the raw composite type.
  kv1_0   constant key_val     := kv1_nn;
  kvs_0   constant key_val[]   := kvs_nn;
begin
  case which
    when 'without NOT NULL safety' then
      return (kv1 = any(kvs));
    when 'as intended' then
      return (kv1_nn = any(kvs_nn));
    when 'workaround' then
      return (kv1_0 = any(kvs_0));
  end case;
end;
$body$;

select f('without NOT NULL safety');
select f('workaround');
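
-- A note for the reader: each of the two calls above returns 'true'.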

/*
  This one causes the error, thus:

  ERROR:  failed to find conversion function from key_vals_nn to record[]
  CONTEXT:  SQL expression "(kv1_nn = any(kvs_nn))"
*/
select f('as intended');


Re: ERROR: failed to find conversion function from key_vals_nn to record[]

From
"David G. Johnston"
Date:
On Wednesday, June 15, 2022, Bryn Llewellyn <bryn@yugabyte.com> wrote:
I’ve copied a self-contained testcase below. Is the error that the "as intended" test causes due to a known limitation—or even a semantic dilemma that I'm failing to spot? Or might it be due to a bug?

I read the note in create domain as basically “don’t do this” (the not null part), but the issue you are pointing out seems unrelated to that.


/*
  This one causes the error, thus:

  ERROR:  failed to find conversion function from key_vals_nn to record[]
  CONTEXT:  SQL expression "(kv1_nn = any(kvs_nn))"
*/
select f('as intended');


The fact that a domain over an array isn’t being seen as an array here seems like a bug.  POLA violation at least, and I don’t recall any notes regarding this dynamic in the docs.

However, a more trivial case does work, at least in HEAD:

postgres=# create domain mytext as text[] not null;
CREATE DOMAIN
postgres=# select '1' = any(array['1','2']::mytext);
 ?column?
----------
 t
(1 row)

However, as you show:

postgres=# create type kv AS ( key text, val text );
CREATE TYPE

postgres=# create domain kvarr as kv[];
CREATE DOMAIN

postgres=# select ('1','one')::kv = any (array[('1','one')::kv]);
 ?column?
----------
 t
(1 row)

postgres=# select ('1','one')::kv = any ((array[('1','one')::kv])::kvarr);
ERROR:  failed to find conversion function from kvarr to record[]

So the interaction of a composite type and the domain over array seems to be the scope of the issue - which makes me think bug even more.
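
A sketch of the corresponding escape hatch (not in the original message): casting the domain value back to its base array type should let the comparison through, mirroring Bryn's workaround:

select ('1','one')::kv = any ((array[('1','one')::kv])::kvarr::kv[]);
-- expected: t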

David J.

Re: ERROR: failed to find conversion function from key_vals_nn to record[]

From
Bryn Llewellyn
Date:
david.g.johnston@gmail.com wrote:


I’ve copied a self-contained testcase below. Is the error that the "as intended" test causes due to a known limitation—or even a semantic dilemma that I'm failing to spot? Or might it be due to a bug?

I read the note in create domain as basically “don’t do this” (the not null part), but the issue you are pointing out seems unrelated to that.

/*
  This one causes the error, thus:

  ERROR:  failed to find conversion function from key_vals_nn to record[]
  CONTEXT:  SQL expression "(kv1_nn = any(kvs_nn))"
*/
select f('as intended');

The fact that a domain over an array isn’t being seen as an array here seems like a bug.  POLA violation at least, and I don’t recall any notes regarding this dynamic in the docs.

However, a more trivial case does work, at least in HEAD:

create domain mytext as text[] not null;
select '1' = any(array['1','2']::mytext);
 ?column?
----------
 t

However, as you show:

create type kv AS ( key text, val text );
create domain kvarr as kv[];
select ('1','one')::kv = any (array[('1','one')::kv]);
 ?column?
----------
 t
(1 row)

select ('1','one')::kv = any ((array[('1','one')::kv])::kvarr);
ERROR:  failed to find conversion function from kvarr to record[]

So the interaction of a composite type and the domain over array seems to be the scope of the issue - which makes me think bug even more.

Thanks for the quick response, David. I'll use my workaround for now. And I'll look out for news about a possible fix.

About this from the "create domain" doc:

«
Best practice therefore is to design a domain's constraints so that a null value is allowed, and then to apply column NOT NULL constraints to columns of the domain type as needed, rather than directly to the domain type.
»

What an enormous disappointment. It defeats a large part of what I wanted to adopt as a practice. As I understand that “don’t do this” caution, and the discussion that surrounds it, the advice applies only to the case where a domain with a not null constraint is used as the data type of a column in a table. I tried this variant on what the doc has:

create domain text_nn as text not null;
create table t1(k int primary key, v text not null);
insert into t1(k, v) values(1, 'x');

-- Causes:
-- null value in column "v" of relation "t1" violates not-null constraint
insert into t1(k, v) values(2, (select 'y'::text_nn where false));

Right, the subquery evaluates to "null". Then failure comes, as intended, when the attempt is made to assign "null" to "t1.v" in the to-be-inserted row.

Then I repeated the test like this:

create table t2(k int primary key, v text_nn);
insert into t2(k, v) values(1, 'x');
insert into t2(k, v) values(2, (select 'y'::text_nn where false));

\pset null '<null>'
select
  k,
  v,
  pg_typeof(v) as "pg_typeof(v)"
from t2;

No error—and this result:

 k |   v    | pg_typeof(v) 
---+--------+--------------
 1 | x      | text_nn
 2 | <null> | text_nn

This is what the doc promises. But how can you see it as anything but a bug? The subquery evaluates to "null", and only then is the attempt made to create a new row which self-evidently violates the domain's constraint. How is it any different from this:

insert into t2(k, v) values(1, null);

This obligingly causes "domain text_nn does not allow null values".
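
A quick check (again, not in the original message) makes the mechanism visible: the empty scalar subquery yields a null that carries the domain type without ever being tested against the domain's constraint:

select
  pg_typeof((select 'y'::text_nn where false)) as "pg_typeof",
  (select 'y'::text_nn where false) is null    as "is null";

-- expected: text_nn | true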


Re: ERROR: failed to find conversion function from key_vals_nn to record[]

From
"David G. Johnston"
Date:
On Wed, Jun 15, 2022 at 11:07 PM Bryn Llewellyn <bryn@yugabyte.com> wrote:
This is what the doc promises. But how can you see it as anything but a bug? The subquery evaluates to "null", and only then is the attempt made to create a new row which self-evidently violates the domain's constraint. How is it any different from this:

Because NULL is doing double-duty.  The absence of any possible value is represented by null, as is knowing that a value exists but not knowing what that value is.  The most obvious example of this problem is:
(not tested, may have typos)

SELECT a.vala, b.valb, b.lbl
FROM (values (1)) AS a (vala)
LEFT JOIN (values (2, null)) AS b (valb, lbl) ON vala = valb;

vs.

SELECT a.vala, b.valb, b.lbl
FROM (values (1)) AS a (vala)
LEFT JOIN (values (1, null)) AS b (valb, lbl) ON vala = valb;

The resultant value of "b.lbl" is null in both queries, but not for the same reason (left join missing semantics pre-empt null type value semantics).
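
One way to tell the two reasons apart in code (a sketch; it assumes the right-hand table's join key is itself never null):

SELECT a.vala,
       b.valb IS NULL AS no_matching_row,  -- null only because the join found no partner
       b.lbl  IS NULL AS lbl_is_null       -- true in both of the queries above
FROM (values (1)) AS a (vala)
LEFT JOIN (values (2, null)) AS b (valb, lbl) ON vala = valb;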

So, yes, queries can produce NULL even for domains that are defined not null.  If you want protection from null values in your database tables you must define your columns to not accept nulls.  It becomes a habit and can be readily checked for in the catalogs.  (I suggest adding a column comment saying why a column defined as nullable is allowed to be null; then you can query the column contents to exclude those columns where null was intended - or go with your idea and just report every column as non-conforming.  COMMENT ON COLUMN table.column IS '@NULLABLE - optional information the customer might not provide';)
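
A sketch of that catalog check (the '@NULLABLE' tag is just the convention suggested above; adjust the schema name to taste):

select c.relname as "table", a.attname as "column"
from pg_attribute a
join pg_class c on c.oid = a.attrelid
join pg_namespace n on n.oid = c.relnamespace
where n.nspname = 'public'
and   c.relkind = 'r'
and   a.attnum > 0
and   not a.attisdropped
and   not a.attnotnull
and   coalesce(col_description(a.attrelid, a.attnum), '') not like '@NULLABLE%';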

David J.

Re: ERROR: failed to find conversion function from key_vals_nn to record[]

From
Tom Lane
Date:
"David G. Johnston" <david.g.johnston@gmail.com> writes:
> The fact that a domain over an array isn’t being seen as an array here
> seems like a bug.

Hmm.  The attached quick-hack patch seems to make this better, but
I'm not sure whether there are any cases it makes worse.

            regards, tom lane

diff --git a/src/backend/parser/parse_oper.c b/src/backend/parser/parse_oper.c
index cf64afbd85..f36b23092d 100644
--- a/src/backend/parser/parse_oper.c
+++ b/src/backend/parser/parse_oper.c
@@ -867,9 +867,11 @@ make_scalar_array_op(ParseState *pstate, List *opname,
      * Now switch back to the array type on the right, arranging for any
      * needed cast to be applied.  Beware of polymorphic operators here;
      * enforce_generic_type_consistency may or may not have replaced a
-     * polymorphic type with a real one.
+     * polymorphic type with a real one.  RECORD acts like a polymorphic type
+     * for this purpose, too.
      */
-    if (IsPolymorphicType(declared_arg_types[1]))
+    if (IsPolymorphicType(declared_arg_types[1]) ||
+        declared_arg_types[1] == RECORDOID)
     {
         /* assume the actual array type is OK */
         res_atypeId = atypeId;

Re: ERROR: failed to find conversion function from key_vals_nn to record[]

From
Bryn Llewellyn
Date:
david.g.johnston@gmail.com wrote:


This is what the doc promises. But how can you see it as anything but a bug? The subquery evaluates to "null", and only then is the attempt made to create a new row which self-evidently violates the domain's constraint. How is it any different from this:

Because NULL is doing double-duty. The absence of any possible value is represented by null, as is knowing that a value exists but not knowing what that value is. The most obvious example of this problem is...

...The resultant value of "b.lbl" is null in both queries, but not for the same reason (left join missing semantics pre-empt null type value semantics).

So, yes, queries can produce NULL even for domains that are defined not null.

If you want protection from null values in your database tables you must define your columns to not accept nulls. It becomes a habit and can be readily checked for in the catalogs. (I suggest adding a column comment saying why a column defined as nullable is allowed to be null; then you can query the column contents to exclude those columns where null was intended - or go with your idea and just report every column as non-conforming. COMMENT ON COLUMN table.column IS '@NULLABLE - optional information the customer might not provide';)

First off, thank you very much for the tip to use "comment on" together with catalog queries to police a practice policy. Meanwhile, I've noted the unsurprising fact that a "create table" statement can add an explicit "not null" constraint to a column whose data type is a domain that already has such a constraint.

Maybe a slightly more lightweight practice could be to use domains like "text_nn" and "text_uc" (for unconstrained text) and then, in "create table" statements and similar, add the explicit constraint for the "_nn" case. Using "text_uc" and similar would support searching the catalog, too. But both approaches are vulnerable to ordinary human error that would be very hard to detect. Typing an explanation in any kind of comment, including external prose docs, always has this character.
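
One reading of that lighter-weight practice, as a sketch (the "authors" table is hypothetical; the domain names follow the post's convention):

create domain text_uc as text;  -- "uc" marks deliberately unconstrained text

create table authors(
  k     int      primary key,
  name  text_nn  not null,  -- the domain marks the intent; the column constraint does the work
  bio   text_uc             -- nullable by design, and searchably so
  );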

Back to NULLs...

Your code examples ran without error and produced the results that you described. I do understand the fact that, on its face,  the NULLs in the two cases arise for different reasons. But this (still) seems to me to be a distinction without a difference. It rather reminds me of my earlier discussion with you (all) about the distinction (in the world of JSON using "jsonb") between the presence of an object key "k" with the value "JSON null" and the absence of key "k".

The semantic proposition behind the "outer join", as it seems to me, is inextricably bound up with the notion that, in the resulting rows, one table might not have a partner row with the other. (It doesn't matter here which table lacks the partner or if you decide to spell your query so that "right" is the appropriate choice or "left" is—as long as you spell the whole thing correctly to express your intention.) And the "outer join" semantics bring the notion that you simply have no information about the facts that, were it present, the missing row might have given you. Whoever it was on the Committee back in the day decided in concert to represent this "no information" outcome, in the relation that results from an "outer join", as an "emergent" SQL NULL.

I've talked endlessly about NULL, over the years and face-to-face, with colleagues whose reasoning ability and lucidity I hugely respect. They are unwavering in how they explain NULL. It says simply: "I have absolutely no information about the value that I sought to interrogate." And these experts argue that there are no flavors of the bare fact of having no information. They argue, too, that to say "the value of this variable (or row-column intersection) is NULL" is an oxymoron because the absence of information is not a value—in the purist sense.

I copied a reasonably realistic example, below, where the "outer join" is between two physical tables, "genres" and "books". They have properly defined PKs. And there's an FK that points in the direction that the table names suggest. Here's the requirement:

«
Each book may be labelled by exactly one (known) genre.
Each genre may label one or several books.
»

It's the canonical list-of-values use case. It's perfectly OK to have a book for whose genre there is simply (as yet) no information available. And it's also perfectly OK to have a genre that, as yet, labels no book.

The PK columns inevitably end up as "not null". And the FK column is deliberately nullable.

I avoided domains altogether, using explicit "not null" constraints, or not, as appropriate to the requirements set out above.

Notwithstanding the proper PK and FK declarations, I cannot stop the "genres" table having a row that has no "books" partner. And this is exactly what I want.

My example creates three "genres" rows. And it creates five "books" rows. Three are labelled by 'genre-1'; one is labelled by 'genre-2'; and one isn't (as yet) labelled at all.

Here's the result from the obvious query to characterize books—defined for easy re-use as the view "book_genre_facts":

 ISBN | Book title |   Genre   
------+------------+-----------
   10 | book-1     | genre-1
   20 | book-2     | genre-1
   30 | book-3     | genre-1
   40 | book-4     | genre-2
   50 | book-5     | «UNKNOWN»

And here's the result from the obvious query to characterize genres—defined for easy re-use as the view "genres_labelling_books":

 Genre # |  Genre  |    Book title    
---------+---------+------------------
       1 | genre-1 | book-1
       1 | genre-1 | book-2
       1 | genre-1 | book-3
       2 | genre-2 | book-4
       3 | genre-3 | «Labels no book»

I can easily produce «UNKNOWN» simply from the fact that the "genres" FK is NULL. And I can produce «Labels no book» because, following proper practice, I'm joining to the primary key of the "books" table. This means that David's "which kind of NULL is it?" isn't the question. Rather, understanding the general semantics of "outer" join, I need only to detect whether a book's PK, in the join result, is NULL. (A table without a primary key violates the rule of proper practice. This is why I found David's terse example too unrealistic to illustrate the issue at hand here.)

I said all this to emphasize that, in a realistic implementation of a realistic use case, it all falls out with no dilemmas.

Finally, this is what I get when I show the data type of a book's PK in the "outer join" result:

 Book PK | Data type of Book PK 
---------+----------------------
      10 | integer
      20 | integer
      30 | integer
      40 | integer
      ~~ | integer

I know that "Book PK" is the projection of a value from a column that has a PK constraint and therefore an implied "not null" constraint. But I'm seeing its projection in the result of an "outer join" that might miss the potential source row altogether. Ergo, I have no information. And it doesn't seem wrong that this is reflected by NULL *in the projection*, even though the source of what is projected is not nullable.

I can emphasize what I said by defining a domain "Not nullable integer" and using this as the data type of the PK of each of my tables. This is done in a heartbeat by making a small change to the code below. This is the result:

 Book PK |  Data type of Book PK  
---------+------------------------
      10 | "Not nullable integer"
      20 | "Not nullable integer"
      30 | "Not nullable integer"
      40 | "Not nullable integer"
      ~~ | "Not nullable integer"

But I can't see that this changes anything about the fact that I'm seeing the projection of a column in an "outer join" rather than the column itself. I might get a problem if I try to write the output of an "outer join" to a table that uses the data types of the columns in the result that I read with "pg_typeof()". But I'd see this as a failure to understand the semantics of "outer join", and a failure to think clearly about NULL, rather than a danger that flows from using, in this example, the domain "Not nullable integer" as the data type of a table column.

Can anybody show me an implementation of a realistic use case that follows proper practice — like "every table must have a primary key", "a foreign key must refer to a primary key", and "joins may be made only "on" columns one of which has a PK constraint and the other of which has a FK constraint" — where using a not nullable data type brings a problem that wouldn't occur if the column were defined with a nullable data type and an explicit "not null" constraint?

————————————————————

\pset null '~~'

-- "\d genres" shows "gk" with a "not null" constraint, whether I write it
-- or not. And convention seems to say "don't clutter you code by writing it".
create table genres(
  gk  int   primary key,
  gv  text  not null
  );

insert into genres(gk, gv) values
  (1, 'genre-1'),
  (2, 'genre-2'),
  (3, 'genre-3');

create table books(
  bk  int   primary key,
  bv  text  not null,
  gk  int   references genres(gk) /* NULLABLE by design! */
  );

insert into books(bk, bv, gk) values
  (10, 'book-1', 1),
  (20, 'book-2', 1),
  (30, 'book-3', 1),
  (40, 'book-4', 2),
  (50, 'book-5', NULL)
  ;

create view book_genre_facts("ISBN", "Book title", "Genre") as
select
  b.bk,
  b.bv,
  case
    when (g.gv is not null) then g.gv
    else                         '«UNKNOWN»'
  end
from
  books b
  left outer join
  genres g
  on b.gk = g.gk;

select *
from book_genre_facts
order by 1;

create view genres_labelling_books("Genre #", "Genre", "Book title") as
select
  g.gk,
  g.gv,
  case
    when (b.bk is null) then '«Labels no book»'
    else                     b.bv
  end
from
  genres g
  left outer join
  books b
  on b.gk = g.gk;

select *
from genres_labelling_books
order by 1, 2;

select
  b.bk as "Book PK",
  pg_typeof(b.bk) as "Data type of Book PK"
from
  genres g
  left outer join
  books b
  on b.gk = g.gk
order by 1;

Re: ERROR: failed to find conversion function from key_vals_nn to record[]

From
"David G. Johnston"
Date:
On Thu, Jun 16, 2022 at 8:28 PM Bryn Llewellyn <bryn@yugabyte.com> wrote:

Back to NULLs...

Your code examples ran without error and produced the results that you described. I do understand the fact that, on its face,  the NULLs in the two cases arise for different reasons. But this (still) seems to me to be a distinction without a difference. It rather reminds me of my earlier discussion with you (all) about the distinction (in the world of JSON using "jsonb") between the presence of an object key "k" with the value "JSON null" and the absence of key "k".

Hadn't thought of that, but indeed I suspect SQL, whether consciously or not, has influenced the design of dealing with JSON in an SQL database to this extent.


The semantic proposition behind the "outer join", as it seems to me, is inextricably bound up with the notion that, in the resulting rows, one table might not have a partner row with the other. (It doesn't matter here which table lacks the partner or if you decide to spell your query so that "right" is the appropriate choice or "left" is—as long as you spell the whole thing correctly to express your intention.) And the "outer join" semantics bring the notion that you simply have no information about the facts that, were it present, the missing row might have given you. Whoever it was on the Committee back in the day, decided in concert to represent this "no information" outcome, in the relation that results for an "outer join", as an "emergent" SQL NULL.

I've talked endlessly about NULL, over the years and face-to-face, with colleagues whose reasoning ability and lucidity I hugely respect. They are unwavering in how they explain NULL. It says simply: "I have absolutely no information about the value that I sought to interrogate." And these experts argue that there are no flavors of the bare fact of having no information. They argue, too, that to say "the value of this variable (or row-column intersection) is NULL" is an oxymoron because the absence of information is not a value—in the purist sense.

At a high level I would agree.  But those NULLs are introduced by two different processes, and sometimes that fact helps to explain reality.

It's like: in what city does John reside, and in what country is that city? If I don't know the first I won't know the second, even though I do know what country every city in my database is located in (country is not null; it's just null in reference to this question about John).
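
In SQL terms (a sketch; the names and data are illustrative, not from the thread):

SELECT p.name, p.city, c.country
FROM (values ('John', null)) AS p (name, city)
LEFT JOIN (values ('Vienna', 'Austria')) AS c (city, country)
ON c.city = p.city;

-- Every city in "c" has a non-null country, yet John's country comes back
-- NULL because his unknown (null) city joins to nothing.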

IOW, it is quite possible for a design to have had different tokens for the two cases, which means they differ in some regard.  That it doesn't work that way is because for nearly all cases the difference is immaterial and so separate tokens would be more annoying than helpful.  The model is intentionally papering over reality (i.e., is wrong in some sense) in the interest of being more useful.

 
(A table without a primary key violates the rule of proper practice. This is why I found David's terse example too unrealistic to illustrate the issue at hand here.)

vala is the PK, valb is the FK.  Table B's PK wasn't relevant.


Can anybody show me an implementation of a realistic use case that follows proper practice — like "every table must have a primary key", "a foreign key must refer to a primary key", and "joins may be made only "on" columns one of which has a PK constraint and the other of which has a FK constraint" — where using a not nullable data type brings a problem that wouldn't occur if the column were defined with a nullable data type and an explicit "not null" constraint?


Nothing obvious comes to mind.  But frankly, proper practice includes trying to write idiomatic code for the language you are using so others familiar with the language can learn your code more easily.  You are violating this to an extreme degree.  I do not think it to be a good trade-off.  SQL writers are practical people and the idioms largely avoid any downsides that arise from SQL not being some paragon of language design.

-- "\d genres" shows "gk" with a "not null" constraint, whether I write it
-- or not. And convention seems to say "don't clutter you code by writing it".
create table genres(
  gk  int   primary key,
  gv  text  not null
  );

"Primary Key" is defined to be the application of both UNIQUE and NOT NULL constraints.  It's not unlike saying "serial" to mean "integer with an associated sequence and default".  But let's not go there, please?

David J.

Re: ERROR: failed to find conversion function from key_vals_nn to record[]

From
Bryn Llewellyn
Date:
david.g.johnston@gmail.com wrote:

bryn@yugabyte.com wrote:

Can anybody show me an implementation of a realistic use case that follows proper practice — like "every table must have a primary key", "a foreign key must refer to a primary key", and "joins may be made only "on" columns one of which has a PK constraint and the other of which has a FK constraint" — where using a not nullable data type brings a problem that wouldn't occur if the column were defined with a nullable data type and an explicit "not null" constraint?

Nothing obvious comes to mind. But frankly:

proper practice includes trying to write idiomatic code for the language you are using so others familiar with the language can learn your code more easily. You are violating this to an extreme degree.

I do not think it to be a good trade-off. SQL writers are practical people and the idioms largely avoid any downsides that arise from SQL not being some paragon of language design.

-- "\d genres" shows "gk" with a "not null" constraint, whether I write it
-- or not. And convention seems to say "don't clutter you code by writing it".
create table genres(
  gk  int   primary key,
  gv  text  not null
  );

"Primary Key" is defined to be the application of both UNIQUE and NOT NULL constraints...

Yes, I know what "primary key" implies. I meant only to emphasize that the source column for what the "outer join" projects has a not null constraint, that it doesn't apply to the projection of that column, that this is perfectly understandable, and that this isn't a problem. Never mind.

What part of the code that I showed (the "genres" and "books" use case) violated, to an extreme degree, what you would have wanted me to write—and in what way?

Re: ERROR: failed to find conversion function from key_vals_nn to record[]

From
"Peter J. Holzer"
Date:
On 2022-06-16 23:58:23 -0700, Bryn Llewellyn wrote:
>     david.g.johnston@gmail.com wrote:
>
>         bryn@yugabyte.com wrote:
>
>         Can anybody show me an implementation of a realistic use case that
>         follows proper practice — like "every table must have a primary key", "a
>         foreign key must refer to a primary key", and "joins may be made only
>         "on" columns one of which has a PK constraint and the other of which
>         has a FK constraint" — where using a not nullable data type brings a
>         problem that wouldn't occur if the column were defined with a nullable
>         data type and an explicit "not null" constraint?
[...]
>     -- "\d genres" shows "gk" with a "not null" constraint, whether I write it
>     -- or not. And convention seems to say "don't clutter your code by writing
>     it".
>     create table genres(
>       gk  int   primary key,
>       gv  text  not null
>       );
>
>     "Primary Key" is defined to be the application of both UNIQUE and NOT NULL
>     constraints...
>
> Yes, I know what "primary key" implies. I meant only to emphasize that the
> source column for what the "outer join" projects has a not null constraint,
> that it doesn't apply to the projection of that column, that this is perfectly
> understandable, and that this isn't a problem. Never mind.

But it would be a problem if there were an actual type which didn't
include NULL.

The NOT NULL attribute is an attribute of the column, not the type.
When you use the primary key (or any other column marked as NOT NULL)
the type of the result is just the type of that column; the NOT NULL is
dropped.

For example:

hjp=> \d genres
                             Table "public.genres"
╔════════╤═════════╤═══════════╤══════════╤════════════════════════════════════╗
║ Column │  Type   │ Collation │ Nullable │              Default               ║
╟────────┼─────────┼───────────┼──────────┼────────────────────────────────────╢
║ id     │ integer │           │ not null │ nextval('genres_id_seq'::regclass) ║
║ name   │ text    │           │ not null │                                    ║
╚════════╧═════════╧═══════════╧══════════╧════════════════════════════════════╝
Indexes:
    "genres_pkey" PRIMARY KEY, btree (id)
Referenced by:
    TABLE "books" CONSTRAINT "books_genre_fkey" FOREIGN KEY (genre) REFERENCES genres(id)

hjp=> \d books
                             Table "public.books"
╔════════╤═════════╤═══════════╤══════════╤═══════════════════════════════════╗
║ Column │  Type   │ Collation │ Nullable │              Default              ║
╟────────┼─────────┼───────────┼──────────┼───────────────────────────────────╢
║ id     │ integer │           │ not null │ nextval('books_id_seq'::regclass) ║
║ title  │ text    │           │ not null │                                   ║
║ genre  │ integer │           │ not null │                                   ║
╚════════╧═════════╧═══════════╧══════════╧═══════════════════════════════════╝
Indexes:
    "books_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
    "books_genre_fkey" FOREIGN KEY (genre) REFERENCES genres(id)

hjp=> create table books_by_genre as
      select g.name as genre, b.title
      from genres g left join books b on g.id = b.genre;
SELECT 4
Time: 14.046 ms
hjp=> \d books_by_genre
          Table "public.books_by_genre"
╔════════╤══════╤═══════════╤══════════╤═════════╗
║ Column │ Type │ Collation │ Nullable │ Default ║
╟────────┼──────┼───────────┼──────────┼─────────╢
║ genre  │ text │           │          │         ║
║ title  │ text │           │          │         ║
╚════════╧══════╧═══════════╧══════════╧═════════╝

As you can see, the type of the two columns is just "text", not "text not
null". And this is as it should be, because the result indeed contains a
NULL value:

hjp=> select * from books_by_genre ;
╔═════════════╤══════════════════════════╗
║    genre    │          title           ║
╟─────────────┼──────────────────────────╢
║ Non-Fiction │ Mastering PostgreSQL 9.6 ║
║ SF          │ Idoru                    ║
║ SF          │ Network Effect           ║
║ Romance     │ (∅)                      ║
╚═════════════╧══════════════════════════╝
(4 rows)

Now, if title actually had a type which didn't include a null value,
this wouldn't be possible. Either the database would have to lie
(declare the column with a type but store a value which is impossible in
that type), or the query would have to fail, or the database would have
to figure out a super type which does include that value. None of these
possibilities seems very attractive to me (the first one is completely
unacceptable, the second one would be annoying, the third one might be
hard to implement).

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
