Thread: pg9.6 segfault using simple query (related to use fk for join estimates)

pg9.6 segfault using simple query (related to use fk for join estimates)

From: Stefan Huehner
Hello,
@Tomas: putting you in CC as this looks related to the work on FK -> join estimates.

I did a tiny bit of testing of our software against the nightly postgresql-9.6 debs from apt.postgresql.org.

Specifically against:
ii  postgresql-9.6      9.6~~devel~20160428.1605-1~664.git23b09e1.pgdg+1  amd64  object-relational SQL database, version 9.6 server
ii  postgresql-9.6-dbg  9.6~~devel~20160428.1605-1~664.git23b09e1.pgdg+1  amd64  debug symbols for postgresql-9.6

so autobuilt from last night.

I can consistently get postgres to segfault using the following query (trimmed down to the shortest example still triggering the crash):

SELECT 1
FROM ad_model_object mo
LEFT JOIN ad_menu m ON mo.ad_process_id = m.ad_process_id
AND mo.action IN ('P', 'R');

The trigger is an FK definition from ad_menu.ad_process_id to ad_process.ad_process_id.

Dropping that FK makes the crash go away.

See the attached files for trimmed-down table definitions to reproduce directly.
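
For reference, here is a hypothetical reconstruction of those definitions, based only on the names mentioned above (the column types are guesses, not the actual attachment):

-- minimal sketch of a schema that should exercise the same code path
CREATE TABLE ad_process (
    ad_process_id varchar(32) PRIMARY KEY
);

CREATE TABLE ad_menu (
    ad_menu_id    varchar(32) PRIMARY KEY,
    ad_process_id varchar(32) REFERENCES ad_process (ad_process_id)
);

CREATE TABLE ad_model_object (
    ad_model_object_id varchar(32) PRIMARY KEY,
    ad_process_id      varchar(32),
    action             char(1)
);

No data is needed; merely planning the query above against such a schema should reach quals_match_foreign_key(), and dropping the foreign key on ad_menu.ad_process_id should make the crash disappear.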

Backtrace ends in:
#0  get_leftop (clause=clause@entry=0x5652932e2d98)
    at /build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/util/clauses.c:212
#1  0x0000565291ec6ba0 in quals_match_foreign_key (root=0x7fca9b3bcba0, fkrel=0x5652932ab980, foreignrel=0x5652932e77b8,
    joinquals=0x7fca9b3bcba0, fkinfo=0x5652932e6ce8)
    at /build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:3961

So this is probably related to the 'use foreign keys to improve join estimates' project from Tomas.

If you need any more info or testing done just let me know.

Regards,
Stefan

Attachments

Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Michael Paquier
On Fri, Apr 29, 2016 at 7:25 PM, Stefan Huehner <stefan@huehner.org> wrote:
> If you need any more info or testing done just let me know.

The information you are providing is sufficient to reproduce the
problem, thanks! I have added this bug to the list of open items for
9.6.
-- 
Michael



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Julien Rouhaud
On 29/04/2016 13:20, Michael Paquier wrote:
> On Fri, Apr 29, 2016 at 7:25 PM, Stefan Huehner <stefan@huehner.org> wrote:
>> If you need any more info or testing done just let me know.
>
> The information you are providing is sufficient to reproduce the
> problem, thanks! I have added this bug to the list of open items for
> 9.6.
>

The segfault is caused by quals_match_foreign_key() calling get_leftop()
and get_rightop() on a ScalarArrayOpExpr node.

Reordering the common fields of OpExpr and ScalarArrayOpExpr at the
beginning of the struct so that the get_*op() functions work with either
(as in the attached patch) fixes the issue.

I'm not sure that assuming this compatibility is the right way to fix
this though.

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org

Attachments

Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Tom Lane
Julien Rouhaud <julien.rouhaud@dalibo.com> writes:
> The segfault is caused by quals_match_foreign_key() calling get_leftop()
> and get_rightop() on a ScalarArrayOpExpr node.

> Reordering the common fields of OpExpr and ScalarArrayOpExpr at the
> beginning of the struct so the get_*op() work with either (as in
> attached patch) fixes the issue.

> I'm not sure that assuming this compatibility is the right way to fix
> this though.

It certainly isn't.
        regards, tom lane



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Julien Rouhaud
On 29/04/2016 18:05, Tom Lane wrote:
> Julien Rouhaud <julien.rouhaud@dalibo.com> writes:
>> The segfault is caused by quals_match_foreign_key() calling get_leftop()
>> and get_rightop() on a ScalarArrayOpExpr node.
>
>> Reordering the common fields of OpExpr and ScalarArrayOpExpr at the
>> beginning of the struct so the get_*op() work with either (as in
>> attached patch) fixes the issue.
>
>> I'm not sure that assuming this compatibility is the right way to fix
>> this though.
>
> It certainly isn't.
>

Agreed. New attached patch handles explicitly each node tag.

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org

Attachments

Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Tom Lane
Julien Rouhaud <julien.rouhaud@dalibo.com> writes:
> On 29/04/2016 18:05, Tom Lane wrote:
>> Julien Rouhaud <julien.rouhaud@dalibo.com> writes:
>>> The segfault is caused by quals_match_foreign_key() calling get_leftop()
>>> and get_rightop() on a ScalarArrayOpExpr node.
>>>
>>> I'm not sure that assuming this compatibility is the right way to fix
>>> this though.

>> It certainly isn't.

> Agreed. New attached patch handles explicitly each node tag.

No, this is completely nuts.  The problem is that quals_match_foreign_key
is simply assuming that every clause in the list it's given is an OpExpr,
which is quite wrong.  It should just ignore non-OpExpr quals, since it
cannot do anything useful with them anyway.  There's a comment claiming
that non-OpExpr quals were already rejected:
            * Here since 'usefulquals' only contains bitmap indexes for quals
            * of type "var op var" we can safely skip checking this.

but that doesn't appear to have anything to do with current reality.

While this in itself is about a two-line fix, after reviewing 
137805f89acb3611 I'm pretty unhappy that it got committed at all.
I think this obvious bug is a good sign that it wasn't ready.
Other unfinished aspects like invention of an undocumented GUC
don't leave a good impression either.

Moreover, it looks to me like this will add quite a lot of overhead,
probably far more than is justified, because clauselist_join_selectivity
is basically O(N^2) in the relation-footprint of the current join --- and
not with a real small multiplier, either, as the functions it calls
contain about four levels of nested loops themselves.  Maybe that's
unmeasurable on trivial test cases but it's going to be disastrous in
large queries, or for relations with large numbers of foreign keys.

I think this should be reverted and pushed out to 9.7 development.
It needs a significant amount of rewriting to fix the performance
issue, and now's not the time to be doing that.
           regards, tom lane



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Robert Haas
On Sat, Apr 30, 2016 at 1:35 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Julien Rouhaud <julien.rouhaud@dalibo.com> writes:
>> On 29/04/2016 18:05, Tom Lane wrote:
>>> Julien Rouhaud <julien.rouhaud@dalibo.com> writes:
>>>> The segfault is caused by quals_match_foreign_key() calling get_leftop()
>>>> and get_rightop() on a ScalarArrayOpExpr node.
>>>>
>>>> I'm not sure that assuming this compatibility is the right way to fix
>>>> this though.
>
>>> It certainly isn't.
>
>> Agreed. New attached patch handles explicitly each node tag.
>
> No, this is completely nuts.  The problem is that quals_match_foreign_key
> is simply assuming that every clause in the list it's given is an OpExpr,
> which is quite wrong.  It should just ignore non-OpExpr quals, since it
> cannot do anything useful with them anyway.  There's a comment claiming
> that non-OpExpr quals were already rejected:
>
>              * Here since 'usefulquals' only contains bitmap indexes for quals
>              * of type "var op var" we can safely skip checking this.
>
> but that doesn't appear to have anything to do with current reality.
>
> While this in itself is about a two-line fix, after reviewing
> 137805f89acb3611 I'm pretty unhappy that it got committed at all.
> I think this obvious bug is a good sign that it wasn't ready.
> Other unfinished aspects like invention of an undocumented GUC
> don't leave a good impression either.
>
> Moreover, it looks to me like this will add quite a lot of overhead,
> probably far more than is justified, because clauselist_join_selectivity
> is basically O(N^2) in the relation-footprint of the current join --- and
> not with a real small multiplier, either, as the functions it calls
> contain about four levels of nested loops themselves.  Maybe that's
> unmeasurable on trivial test cases but it's going to be disastrous in
> large queries, or for relations with large numbers of foreign keys.
>
> I think this should be reverted and pushed out to 9.7 development.
> It needs a significant amount of rewriting to fix the performance
> issue, and now's not the time to be doing that.

If this gets reverted, then probably
015e88942aa50f0d419ddac00e63bb06d6e62e86 should also be reverted.

We need to make some decisions here PDQ.  I haven't had time to look
at this issue in any technical detail yet.  Simon, anyone else, please
weigh in.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Tomas Vondra
Hi,

On 04/30/2016 07:35 PM, Tom Lane wrote:
> Julien Rouhaud <julien.rouhaud@dalibo.com> writes:
>> On 29/04/2016 18:05, Tom Lane wrote:
>>> Julien Rouhaud <julien.rouhaud@dalibo.com> writes:
>>>> The segfault is caused by quals_match_foreign_key() calling get_leftop()
>>>> and get_rightop() on a ScalarArrayOpExpr node.
>>>>
>>>> I'm not sure that assuming this compatibility is the right way to fix
>>>> this though.
>
>>> It certainly isn't.
>
>> Agreed. New attached patch handles explicitly each node tag.
>
> No, this is completely nuts.  The problem is that quals_match_foreign_key
> is simply assuming that every clause in the list it's given is an OpExpr,
> which is quite wrong.  It should just ignore non-OpExpr quals, since it
> cannot do anything useful with them anyway.  There's a comment claiming
> that non-OpExpr quals were already rejected:
>
>              * Here since 'usefulquals' only contains bitmap indexes for quals
>              * of type "var op var" we can safely skip checking this.
>
> but that doesn't appear to have anything to do with current reality.

Yes, I agree - there should be a check that the node is OpExpr in
quals_match_foreign_key. This is clearly a bug :-(

>
> While this in itself is about a two-line fix, after reviewing
> 137805f89acb3611 I'm pretty unhappy that it got committed at all.
> I think this obvious bug is a good sign that it wasn't ready.
> Other unfinished aspects like invention of an undocumented GUC
> don't leave a good impression either.

The GUC is meant to make testing during development easier. I hadn't
realized it got committed, but I assume Simon plans to remove it before
the final release. I can't check with Simon right now, though, as he's
out of the office this week.

In any case, I do agree the GUC should have been documented better, and
we should probably put it on the TODO so that it gets removed before the
actual release.

>
> Moreover, it looks to me like this will add quite a lot of overhead,
> probably far more than is justified, because
> clauselist_join_selectivity is basically O(N^2) in the
> relation-footprint of the current join --- and not with a real small
> multiplier, either, as the functions it calls contain about four
> levels of nested loops themselves. Maybe that's unmeasurable on
> trivial test cases but it's going to be disastrous in large queries,
> or for relations with large numbers of foreign keys.
 >

That depends on a lot of factors, I guess. Attached are two SQL scripts
that create 5 tables (f1, f2, f3, f4, f5) and then create N foreign keys
on each of them. The foreign keys reference other tables, which means
the loops won't terminate early (after finding the first match) when
joining the first 5 tables, which makes this the worst case. And then we
join different numbers of tables (2, 3, 4 or 5) and do explain analyze
to measure planning time.

The first script (fk-test.sql) does joins on 2 columns, fk-test-6.sql
does joins on 6 columns (so that we also know how the number of columns
affects the planning time).
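
The attached scripts are not reproduced here, but a rough sketch of the kind of setup they describe (an assumed structure, not the actual fk-test.sql) could look like this for the 2-column case:

DO $$
DECLARE
    n int := 100;   -- number of FKs per table; the tests used 10, 100 and 1000
    t int;
    i int;
BEGIN
    -- referenced tables that never appear in the measured joins,
    -- so no foreign key ever matches the join quals
    FOR i IN 1..n LOOP
        EXECUTE format('CREATE TABLE fk_target_%s (a int, b int, PRIMARY KEY (a, b))', i);
    END LOOP;

    -- the five joined tables, each carrying n foreign keys
    FOR t IN 1..5 LOOP
        EXECUTE format('CREATE TABLE f%s (a int, b int)', t);
        FOR i IN 1..n LOOP
            EXECUTE format('ALTER TABLE f%s ADD FOREIGN KEY (a, b) REFERENCES fk_target_%s', t, i);
        END LOOP;
    END LOOP;
END $$;

-- the measured queries then join f1..f5 on both columns, e.g.
EXPLAIN ANALYZE SELECT * FROM f1 JOIN f2 ON f1.a = f2.a AND f1.b = f2.b;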

Sum of planning time for 100 queries (in milliseconds) with 2 columns,
where "2/on" means "join on 2 tables with enable_fkey_estimates=on":

   N    2/on   2/off    3/on   3/off    4/on   4/off    5/on   5/off
  10       6       4       9       8      22      20      77      68
 100      15      11      26      19      64      36     177      93
1000      97      84     217     133     516     233    1246     342

and comparison of the timings:

   N       2       3       4       5
  10    155%    115%    114%    113%
 100    142%    138%    177%    190%
1000    116%    163%    221%    364%

And with the 6 columns:

   N    2/on   2/off    3/on   3/off    4/on   4/off    5/on   5/off
  10      25       7      23      20      96      82     344     297
 100      21      14      65      33     233     104     735     328
1000     151      99     492     153    1603     320    4627     593

and comparison:

   N       2       3       4       5
  10    371%    120%    117%    116%
 100    149%    196%    224%    224%
1000    152%    322%    502%    780%

So yes, the overhead is clearly measurable, no doubt about that.

>
> I think this should be reverted and pushed out to 9.7 development.
> It needs a significant amount of rewriting to fix the performance
> issue, and now's not the time to be doing that.
>

There are probably a few reasonably simple things we could do - e.g.
ignore foreign keys with just a single column, as the primary goal of
the patch is improving estimates with multi-column foreign keys. I
believe that covers a vast majority of foreign keys in the wild.

If that's deemed insufficient, we'll have to resort to a more complex
improvement, perhaps something akin to the cache proposed in the
unijoin patch. But if that's required, that's 9.7 material.


regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments

Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Tomas Vondra
Hi,

On 05/02/2016 09:18 PM, Robert Haas wrote:
> On Sat, Apr 30, 2016 at 1:35 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Julien Rouhaud <julien.rouhaud@dalibo.com> writes:
>>> On 29/04/2016 18:05, Tom Lane wrote:
>>>> Julien Rouhaud <julien.rouhaud@dalibo.com> writes:
>>>>> The segfault is caused by quals_match_foreign_key() calling get_leftop()
>>>>> and get_rightop() on a ScalarArrayOpExpr node.
>>>>>
>>>>> I'm not sure that assuming this compatibility is the right way to fix
>>>>> this though.
>>
>>>> It certainly isn't.
>>
>>> Agreed. New attached patch handles explicitly each node tag.
>>
>> No, this is completely nuts.  The problem is that quals_match_foreign_key
>> is simply assuming that every clause in the list it's given is an OpExpr,
>> which is quite wrong.  It should just ignore non-OpExpr quals, since it
>> cannot do anything useful with them anyway.  There's a comment claiming
>> that non-OpExpr quals were already rejected:
>>
>>              * Here since 'usefulquals' only contains bitmap indexes for quals
>>              * of type "var op var" we can safely skip checking this.
>>
>> but that doesn't appear to have anything to do with current reality.
>>
>> While this in itself is about a two-line fix, after reviewing
>> 137805f89acb3611 I'm pretty unhappy that it got committed at all.
>> I think this obvious bug is a good sign that it wasn't ready.
>> Other unfinished aspects like invention of an undocumented GUC
>> don't leave a good impression either.
>>
>> Moreover, it looks to me like this will add quite a lot of overhead,
>> probably far more than is justified, because clauselist_join_selectivity
>> is basically O(N^2) in the relation-footprint of the current join --- and
>> not with a real small multiplier, either, as the functions it calls
>> contain about four levels of nested loops themselves.  Maybe that's
>> unmeasurable on trivial test cases but it's going to be disastrous in
>> large queries, or for relations with large numbers of foreign keys.
>>
>> I think this should be reverted and pushed out to 9.7 development.
>> It needs a significant amount of rewriting to fix the performance
>> issue, and now's not the time to be doing that.
>
> If this gets reverted, then probably
> 015e88942aa50f0d419ddac00e63bb06d6e62e86 should also be reverted.

Probably. I think the code is fine / correct, but as the FK estimation 
patch was the only user, there's not much point in keeping it if that 
gets reverted.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: David Rowley
On 4 May 2016 at 02:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
> There are probably a few reasonably simple things we could do - e.g. ignore
> foreign keys with just a single column, as the primary goal of the patch is
> improving estimates with multi-column foreign keys. I believe that covers a
> vast majority of foreign keys in the wild.
>
> If that's deemed insufficient, we'll have to resort to a more complex
> improvement, perhaps something akin to the cache proposed in in the unijoin
> patch. But if that's required, that's 9.7 material.

I had thought that if we had a hashtable of rel OIDs which belong to
relations with has_eclass_joins == true, then we could just skip
foreign keys where the confrelid is not in the hashtable. Perhaps that
could be optimised a bit more and we could have something akin to what
predOK is for IndexOptInfo in ForeignKeyOptInfo which just gets set to
true if the relation referenced by the foreign key is in the
simple_rel_array. It's quite likely that if many foreign keys were
used, then the query would have a great number of joins, and planning
would be slow anyway.


--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: David Rowley
On 4 May 2016 at 09:18, David Rowley <david.rowley@2ndquadrant.com> wrote:
> On 4 May 2016 at 02:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
>> There are probably a few reasonably simple things we could do - e.g. ignore
>> foreign keys with just a single column, as the primary goal of the patch is
>> improving estimates with multi-column foreign keys. I believe that covers a
>> vast majority of foreign keys in the wild.
>>
>> If that's deemed insufficient, we'll have to resort to a more complex
>> improvement, perhaps something akin to the cache proposed in in the unijoin
>> patch. But if that's required, that's 9.7 material.
>
> I had thought that if we had a hashtable of rel OIDs which belong to
> relations with has_eclass_joins == true, then we could just skip
> foreign keys where the confrelid is not in the hashtable. Perhaps that
> could be optimised a bit more and we could have something akin to what
> predOK is for IndexOptInfo in ForeignKeyOptInfo which just gets set to
> true if the relation referenced by the foreign key is in the
> simple_rel_array. It's quite likely that if many foreign keys were
> used, then the query would have a great number of joins, and planning
> would be slow anyway.

I've spent a few hours looking at this and I've come up with the
attached patch, which flags each ForeignKeyOptInfo to say whether it's
possible to be referenced in any join condition, with the logic that
if the referenced relation is in the simple_rte_array, then it could
be referenced.

I ran some of the tests Tomas posted with 1000 FKs and a 4-way join,
with 2 join columns.

Query:
explain analyze
select * from f1
inner join f2 on f1.a = f2.a and f1.b = f2.b
inner join f3 on f1.a = f3.a and f1.b = f3.b
inner join f4 on f1.a = f4.a and f1.b = f4.b;

SET enable_fkey_estimates = on;
duration: 30 s
number of transactions actually processed: 8173
latency average: 3.671 ms
tps = 272.420508 (including connections establishing)
tps = 272.586329 (excluding connections establishing)

SET enable_fkey_estimates = off;
duration: 30 s
number of transactions actually processed: 9153
latency average: 3.278 ms
tps = 305.098852 (including connections establishing)
tps = 305.286072 (excluding connections establishing)

So there's still a 10% planner slowdown for this worst case test, but
it's far less than what it was previously with the same test case.

I also want to add that the aim of this patch was to fix a very real
world problem which also manifests itself in TPC-H Q9, where the join
to partsupp is drastically underestimated due to the 2-column join
condition. In our test cases that caused the GROUP BY to perform a
Hash Aggregate rather than a Sort/Group Aggregate, and since we don't
spill HashAggs to disk, we get OOM for a large scale test on large
scale hardware.
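
For context, the relevant shape of the TPC-H schema here (abridged; the real column lists are much longer) is a two-column key on partsupp that lineitem references through a two-column foreign key, which is exactly the pattern the patch targets:

CREATE TABLE partsupp (
    ps_partkey int,
    ps_suppkey int,
    PRIMARY KEY (ps_partkey, ps_suppkey)
);

CREATE TABLE lineitem (
    l_partkey int,
    l_suppkey int,
    FOREIGN KEY (l_partkey, l_suppkey) REFERENCES partsupp (ps_partkey, ps_suppkey)
);

Q9 joins the two on both columns, and treating those quals as independent multiplies their selectivities, which is where the underestimate below comes from.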

Here's some sample EXPLAIN output from the query in question, which I
think is at a smaller scale than the 3TB test where we had issues, but
still demonstrates the problem:

Hash Join  (cost=74686.00..597734.90 rows=2400 width=23) (actual time=564.038..11645.047 rows=11997996 loops=1)
  Hash Cond: ((lineitem.l_suppkey = partsupp.ps_suppkey) AND (lineitem.l_partkey = partsupp.ps_partkey))

Here the estimate is off 5000x.

The attached patch is intended to assist discussion at the moment.
Likely some naming could be better, and the code would need to find a
better home.

The patch also fixes the missing IsA OpExpr test.

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachments

Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Tom Lane
David Rowley <david.rowley@2ndquadrant.com> writes:
> On 4 May 2016 at 09:18, David Rowley <david.rowley@2ndquadrant.com> wrote:
>> On 4 May 2016 at 02:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
>>> There are probably a few reasonably simple things we could do - e.g. ignore
>>> foreign keys with just a single column, as the primary goal of the patch is
>>> improving estimates with multi-column foreign keys. I believe that covers a
>>> vast majority of foreign keys in the wild.

> I've spent a few hours looking at this and I've come up with the
> attached patch, which flags each ForeignKeyOptInfo to say whether its
> possible to be referenced in any join condition, with the logic that
> if the referenced relation is in the simple_rte_array, then it could
> be referenced.

My fundamental problem with the FK-selectivity patch is that it looks
an awful lot like a preliminary proof-of-concept that got committed.

I do not like the basic design: it's about as brute-force as could
possibly be.  It adds code that is executed:
  * at least once per join relation created (hence, significantly more than
      the number of rels in the query; see also get_joinrel_parampathinfo)
    * for each inner relation in the initial input joinrel pair
      * for each outer relation in the initial joinrel pair
        * for each foreign key constraint on this inner and outer rel
          * for each key column in that FK
            * for each join qual for the current input joinrel pair
              * for each member of the relevant EquivalenceClass

This is at least O(N^3) in the number of baserels in the query, not to
mention the other multipliers.  I'm not very impressed by tests that scale
only one of the multipliers (like the number of FK constraints); where the
pain is going to come in is when all of these factors are large.

I spent some time trying to make a test case that was impossibly slow,
without any really impressive result: I saw at most about a 25% growth in
planning time, for a 27-way join with one or two foreign keys per table.
I noted however that with a simple FROM list of tables, you don't really
get the full force of the combinatorial explosion, because
join_search_one_level prefers to generate left-deep trees first, and so
the first creation of a given joinrel is always N left-side rels against 1
right-side rel, causing the second level of looping to always iterate just
once.  (GEQO behaves likewise, I think.)  I spent a little time trying to
devise join order constraints that would result in a lot of high-level
joinrels being formed with many relations on both sides, which would cause
both of the second and third levels to iterate O(N) times not just once.
I didn't have much luck, but I have zero confidence that our users won't
find such cases.

The reason it's so brute force is that it's rediscovering the same facts
over and over.  In all simple (non-outer-join) cases, the only clauses
that are of any interest are derived from EquivalenceClasses, and all
that you really need to know is "are the vars mentioned on each side
of this FK present in the same EquivalenceClass?".  ISTM that could be
determined once per FK per query and cached in or near the EC, not
expensively rediscovered at each level of joining.  I'm not sure whether
it'd be worth considering non-EC-derived equalities (ie, outer join
clauses) at all, and note that the current patch fails to do so anyway;
see below.

My other design-level complaint is that basing this on foreign keys is
fundamentally the wrong thing.  What actually matters is the unique index
underlying the FK; that is, if we have "a.x = b.y" and there's a
compatible unique index on b.y, we can conclude that no A row will match
more than one B row, whether or not an explicit FK relationship has been
declared.  So we should drive this off unique indexes instead of FKs,
first because we will find more cases, and second because the planner
already examines indexes and doesn't need any additional catalog lookups
to get the required data.  (IOW, the relcache additions that were made in
this patch series should go away too.)

Aside from the design being basically wrong, the code quality seems pretty
low.  Aside from the missing IsA check that started this thread, I found
the following problems in a quick once-over of 137805f89:

Bugs in quals_match_foreign_key():

* Test for clause->opno == fkinfo->conpfeqop[i] fails to consider
cross-type operators, ie, what's in the clause might be int2 = int4
while the conpfeqop is int4 = int2.

* parent_ec check isn't really the right thing, since EC-derived clauses
might not have that set.  I think it may be adequate given that you only
deal with simple Vars, but at least a comment about that would be good.

* Come to think of it, you could probably dodge the commutator operator
problem altogether for clauses with nonnull parent_ec, because they must
contain a relevant equality operator.  (Although if it's redesigned as I
suggest above, the code path for a clause with parent_ec set would look
totally different from this anyway.)

* Maintaining the fkmatches bitmapset is useless expense, just use a
counter of matched keys.  Or for that matter, why not just fail
immediately if i'th key is not found?

find_best_foreign_key_quals():

* Test on enable_fkey_estimates should be one call level up to dodge the
useless doubly-nested loop in clauselist_join_selectivity (which would
make the "fast path" exit here pretty much useless)

clauselist_join_selectivity():

* "either could be zero, but not both" is a pretty unhelpful comment given
the if-test just above it.  What *could* have used some explanation is
what the next two dozen lines are doing, because they're as opaque as can
be; and the XXX comment doesn't leave a warm feeling that the author
understands it either.  I'm not prepared to opine on whether this segment
of code is correct at all without better commentary.

calc_joinrel_size_estimate():

* why is clauselist_join_selectivity applied to pushed-down clauses and
not local ones in an outer join?  If that's not an oversight, it at least
requires an explanatory comment.  Note that this means we are not applying
FK knowledge for "a left join b on x = y", though it seems like we could.

compute_semi_anti_join_factors isn't using clauselist_join_selectivity
either.  I don't know whether it should be, but if not, a comment about
why not seems appropriate.  More generally, I wonder why this logic
wasn't just folded into clauselist_selectivity.

guc.c:
undocumented GUCs are not acceptable

paths.h:
patch introduces an extern that's referenced noplace

In short, I'm not left with a warm fuzzy feeling about this patch having
been ready to commit.  The predecessor patch 015e88942 was also
underreviewed, cf 5306df283.
        regards, tom lane



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Robert Haas
On Wed, May 4, 2016 at 2:54 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> My other design-level complaint is that basing this on foreign keys is
> fundamentally the wrong thing.  What actually matters is the unique index
> underlying the FK; that is, if we have "a.x = b.y" and there's a
> compatible unique index on b.y, we can conclude that no A row will match
> more than one B row, whether or not an explicit FK relationship has been
> declared.  So we should drive this off unique indexes instead of FKs,
> first because we will find more cases, and second because the planner
> already examines indexes and doesn't need any additional catalog lookups
> to get the required data.  (IOW, the relcache additions that were made in
> this patch series should go away too.)

Without prejudice to anything else in this useful and detailed review,
I have a question about this.  A unique index proves that no A row
will match more than one B row, and I agree that deriving that from
unique indexes is sensible.  However, ISTM that an FK provides
additional information: we know that, modulo filter conditions on B,
every A row will match *exactly* one B row, which can prevent us
from *underestimating* the size of the join product.  A unique index
can't do that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Tomas Vondra
Hi,

On 05/04/2016 08:54 PM, Tom Lane wrote:
> David Rowley <david.rowley@2ndquadrant.com> writes:
>> On 4 May 2016 at 09:18, David Rowley <david.rowley@2ndquadrant.com> wrote:
>>> On 4 May 2016 at 02:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
>>>> There are probably a few reasonably simple things we could do - e.g. ignore
>>>> foreign keys with just a single column, as the primary goal of the patch is
>>>> improving estimates with multi-column foreign keys. I believe that covers a
>>>> vast majority of foreign keys in the wild.
>
>> I've spent a few hours looking at this and I've come up with the
>> attached patch, which flags each ForeignKeyOptInfo to say whether its
>> possible to be referenced in any join condition, with the logic that
>> if the referenced relation is in the simple_rte_array, then it could
>> be referenced.
>
> My fundamental problem with the FK-selectivity patch is that it looks
> an awful lot like a preliminary proof-of-concept that got committed.
>
> I do not like the basic design: it's about as brute-force as could
> possibly be.  It adds code that is executed:
>
>   * at least once per join relation created (hence, significantly more than
>       the number of rels in the query; see also get_joinrel_parampathinfo)
>     * for each inner relation in the initial input joinrel pair
>       * for each outer relation in the initial joinrel pair
>         * for each foreign key constraint on this inner and outer rel
>           * for each key column in that FK
>             * for each join qual for the current input joinrel pair
>               * for each member of the relevant EquivalenceClass
>
> This is at least O(N^3) in the number of baserels in the query, not
> to mention the other multipliers. I'm not very impressed by tests
> that scale only one of the multipliers (like the number of FK
> constraints); where the pain is going to come in is when all of these
> factors are large.
>
> I spent some time trying to make a test case that was impossibly
> slow, without any really impressive result: I saw at most about a 25%
> growth in planning time, for a 27-way join with one or two foreign
> keys per table. I noted however that with a simple FROM list of
> tables, you don't really get the full force of the combinatorial
> explosion, because join_search_one_level prefers to generate
> left-deep trees first, and so the first creation of a given joinrel
> is always N left-side rels against 1 right-side rel, causing the
> second level of looping to always iterate just once. (GEQO behaves
> likewise, I think.) I spent a little time trying to devise join order
> constraints that would result in a lot of high-level joinrels being
> formed with many relations on both sides, which would cause both of
> the second and third levels to iterate O(N) times not just once. I
> didn't have much luck, but I have zero confidence that our users
> won't find such cases.

I don't know. We haven't found such an extreme example either.

>
> The reason it's so brute force is that it's rediscovering the same
> facts over and over. In all simple (non-outer-join) cases, the only
> clauses that are of any interest are derived from EquivalenceClasses,
> and all that you really need to know is "are the vars mentioned on
> each side of this FK present in the same EquivalenceClass?". ISTM
> that could be determined once per FK per query and cached in or near
> the EC, not expensively rediscovered at each level of joining. I'm
> not sure whether it'd be worth considering non-EC-derived equalities
> (ie, outer join clauses) at all, and note that the current patch
> fails to do so anyway; see below.

I'm not sure it's that simple, as it also depends on the join order, so
if we only detect that once per query we'll get incorrect estimates for
the intermediate results. I think the approach with a cache that David
proposed a few days ago is the best approach.

>
> My other design-level complaint is that basing this on foreign keys is
> fundamentally the wrong thing.  What actually matters is the unique index
> underlying the FK; that is, if we have "a.x = b.y" and there's a
> compatible unique index on b.y, we can conclude that no A row will match
> more than one B row, whether or not an explicit FK relationship has been
> declared.  So we should drive this off unique indexes instead of FKs,
> first because we will find more cases, and second because the planner
> already examines indexes and doesn't need any additional catalog lookups
> to get the required data.  (IOW, the relcache additions that were made in
> this patch series should go away too.)

No, that's not what the patch does, and it can't use unique indexes 
instead. The patch improves estimation with multi-column foreign keys, 
when the join matches the constraint. Currently we treat the conditions 
as independent and multiply the estimated selectivities, completely 
ignoring the guarantee provided by the FK, which leads to significant 
under-estimates.

So when you have:

CREATE TABLE t1 (a1 INT, a2 INT, primary key (a1,a2));
CREATE TABLE t2 (b1 INT, b2 INT,
                 FOREIGN KEY (b1,b2) REFERENCES t1(a1,a2));

and do
   SELECT * FROM t1, t2 WHERE a1=b1 AND a2=b2;

the patch realizes that it should not multiply the selectivities.

But unique indexes are insufficient for this - it's the foreign key 
between the two tables that allows us to do this.

Consider this:

CREATE TABLE t1 (a1 INT, a2 INT, UNIQUE (a1,a2));
CREATE TABLE t2 (b1 INT, b2 INT);

INSERT INTO t1 SELECT i, i FROM generate_series(1,10000) s(i);
INSERT INTO t2 SELECT 10000*random(), 10000*random()
                FROM generate_series(1,10000) s(i);

and do the same query. In this case multiplying the selectivities is the 
right thing to do, as the unique index provides no guarantees.
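
A quick way to observe the difference is to load matching data into the first (FK-backed) t1/t2 pair and compare the join estimates with the patch's GUC toggled; a rough sketch, assuming the enable_fkey_estimates GUC from the committed patch:

INSERT INTO t1 SELECT i, i FROM generate_series(1,10000) s(i);
INSERT INTO t2 SELECT i, i FROM generate_series(1,10000) s(i);
ANALYZE t1;
ANALYZE t2;

SET enable_fkey_estimates = on;
EXPLAIN SELECT * FROM t1, t2 WHERE a1=b1 AND a2=b2;
-- FK-aware estimate: roughly 10000 join rows expected

SET enable_fkey_estimates = off;
EXPLAIN SELECT * FROM t1, t2 WHERE a1=b1 AND a2=b2;
-- selectivities multiplied independently: the estimate collapses to a handful of rows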


>
> Aside from the design being basically wrong, the code quality seems pretty
> low.  Aside from the missing IsA check that started this thread, I found
> the following problems in a quick once-over of 137805f89:
>
> Bugs in quals_match_foreign_key():
>
> * Test for clause->opno == fkinfo->conpfeqop[i] fails to consider
> cross-type operators, ie, what's in the clause might be int2 = int4
> while the conpfeqop is int4 = int2.
>
> * parent_ec check isn't really the right thing, since EC-derived clauses
> might not have that set.  I think it may be adequate given that you only
> deal with simple Vars, but at least a comment about that would be good.
>
> * Come to think of it, you could probably dodge the commutator operator
> problem altogether for clauses with nonnull parent_ec, because they must
> contain a relevant equality operator.  (Although if it's redesigned as I
> suggest above, the code path for a clause with parent_ec set would look
> totally different from this anyway.)
>
> * Maintaining the fkmatches bitmapset is useless expense, just use a
> counter of matched keys.  Or for that matter, why not just fail
> immediately if i'th key is not found?
>
> find_best_foreign_key_quals():
>
> * Test on enable_fkey_estimates should be one call level up to dodge the
> useless doubly-nested loop in clauselist_join_selectivity (which would
> make the "fast path" exit here pretty much useless)
>
> clauselist_join_selectivity():
>
> * "either could be zero, but not both" is a pretty unhelpful comment given
> the if-test just above it.  What *could* have used some explanation is
> what the next two dozen lines are doing, because they're as opaque as can
> be; and the XXX comment doesn't leave a warm feeling that the author
> understands it either.  I'm not prepared to opine on whether this segment
> of code is correct at all without better commentary.
>
> calc_joinrel_size_estimate():
>
> * why is clauselist_join_selectivity applied to pushed-down clauses and
> not local ones in an outer join?  If that's not an oversight, it at least
> requires an explanatory comment.  Note that this means we are not applying
> FK knowledge for "a left join b on x = y", though it seems like we could.
>
> compute_semi_anti_join_factors isn't using clauselist_join_selectivity
> either.  I don't know whether it should be, but if not, a comment about
> why not seems appropriate.  More generally, I wonder why this logic
> wasn't just folded into clauselist_selectivity.
>
> guc.c:
> undocumented GUCs are not acceptable
>
> paths.h:
> patch introduces an extern that's referenced noplace
>
> In short, I'm not left with a warm fuzzy feeling about this patch having
> been ready to commit.  The predecessor patch 015e88942 was also
> underreviewed, cf 5306df283.

OK, thanks for the comments.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Tom Lane
Robert Haas <robertmhaas@gmail.com> writes:
> On Wed, May 4, 2016 at 2:54 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> My other design-level complaint is that basing this on foreign keys is
>> fundamentally the wrong thing.  What actually matters is the unique index
>> underlying the FK; that is, if we have "a.x = b.y" and there's a
>> compatible unique index on b.y, we can conclude that no A row will match
>> more than one B row, whether or not an explicit FK relationship has been
>> declared.  So we should drive this off unique indexes instead of FKs,
>> first because we will find more cases, and second because the planner
>> already examines indexes and doesn't need any additional catalog lookups
>> to get the required data.  (IOW, the relcache additions that were made in
>> this patch series should go away too.)

> Without prejudice to anything else in this useful and detailed review,
> I have a question about this.  A unique index proves that no A row
> will match more than one B row, and I agree that deriving that from
> unique indexes is sensible.  However, ISTM that an FK provides
> additional information: we know that, modulo filter conditions on B,
> every A row will match *exactly* one row B row, which can prevent us
> from *underestimating* the size of the join product.  A unique index
> can't do that.

Very good point, but unless I'm missing something, that is not what the
current patch does.  I'm not sure offhand whether that's an important
estimation failure mode currently, or if it is whether it would be
sensible to try to implement that rule entirely separately from the "at
most one" aspect, or if it isn't sensible, whether that's a sufficiently
strong reason to confine the "at most one" logic to working only with FKs
and not with bare unique indexes.

On the whole, that seems like another argument why this needs more time.
        regards, tom lane



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Tomas Vondra
Hi,

On 05/04/2016 11:02 PM, Tom Lane wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Wed, May 4, 2016 at 2:54 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> My other design-level complaint is that basing this on foreign keys is
>>> fundamentally the wrong thing.  What actually matters is the unique index
>>> underlying the FK; that is, if we have "a.x = b.y" and there's a
>>> compatible unique index on b.y, we can conclude that no A row will match
>>> more than one B row, whether or not an explicit FK relationship has been
>>> declared.  So we should drive this off unique indexes instead of FKs,
>>> first because we will find more cases, and second because the planner
>>> already examines indexes and doesn't need any additional catalog lookups
>>> to get the required data.  (IOW, the relcache additions that were made in
>>> this patch series should go away too.)
>
>> Without prejudice to anything else in this useful and detailed review,
>> I have a question about this.  A unique index proves that no A row
>> will match more than one B row, and I agree that deriving that from
>> unique indexes is sensible.  However, ISTM that an FK provides
>> additional information: we know that, modulo filter conditions on B,
>> every A row will match *exactly* one row B row, which can prevent us
>> from *underestimating* the size of the join product.  A unique index
>> can't do that.
>
> Very good point, but unless I'm missing something, that is not what the
> current patch does.  I'm not sure offhand whether that's an important
> estimation failure mode currently, or if it is whether it would be
> sensible to try to implement that rule entirely separately from the "at
> most one" aspect, or if it isn't sensible, whether that's a sufficiently
> strong reason to confine the "at most one" logic to working only with FKs
> and not with bare unique indexes.

FWIW it's a real-world problem with multi-column FKs. As David pointed 
out upthread, a nice example of this issue is Q9 in the TPC-H bench, 
where the underestimate leads to HashAggregate and then OOM failure.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Robert Haas
On Wed, May 4, 2016 at 5:02 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Very good point, but unless I'm missing something, that is not what the
> current patch does.  I'm not sure offhand whether that's an important
> estimation failure mode currently, or if it is whether it would be
> sensible to try to implement that rule entirely separately from the "at
> most one" aspect, or if it isn't sensible, whether that's a sufficiently
> strong reason to confine the "at most one" logic to working only with FKs
> and not with bare unique indexes.

Tomas seems to feel that that is what the current patch does, and
indeed that it's the main point of the current patch, but you seem to
think that it doesn't do that.  Either I'm misinterpreting what one of
you is saying, or you are missing something, or his patch fails to
accomplish its intended purpose.  It seems important to figure out
which of those things is true.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Robert Haas
On Wed, May 4, 2016 at 2:54 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I spent some time trying to make a test case that was impossibly slow,
> without any really impressive result: I saw at most about a 25% growth in
> planning time, for a 27-way join with one or two foreign keys per table.
> I noted however that with a simple FROM list of tables, you don't really
> get the full force of the combinatorial explosion, because
> join_search_one_level prefers to generate left-deep trees first, and so
> the first creation of a given joinrel is always N left-side rels against 1
> right-side rel, causing the second level of looping to always iterate just
> once.  (GEQO behaves likewise, I think.)  I spent a little time trying to
> devise join order constraints that would result in a lot of high-level
> joinrels being formed with many relations on both sides, which would cause
> both of the second and third levels to iterate O(N) times not just once.
> I didn't have much luck, but I have zero confidence that our users won't
> find such cases.

I'

>
> The reason it's so brute force is that it's rediscovering the same facts
> over and over.  In all simple (non-outer-join) cases, the only clauses
> that are of any interest are derived from EquivalenceClasses, and all
> that you really need to know is "are the vars mentioned on each side
> of this FK present in the same EquivalenceClass?".  ISTM that could be
> determined once per FK per query and cached in or near the EC, not
> expensively rediscovered at each level of joining.  I'm not sure whether
> it'd be worth considering non-EC-derived equalities (ie, outer join
> clauses) at all, and note that the current patch fails to do so anyway;
> see below.
>
> My other design-level complaint is that basing this on foreign keys is
> fundamentally the wrong thing.  What actually matters is the unique index
> underlying the FK; that is, if we have "a.x = b.y" and there's a
> compatible unique index on b.y, we can conclude that no A row will match
> more than one B row, whether or not an explicit FK relationship has been
> declared.  So we should drive this off unique indexes instead of FKs,
> first because we will find more cases, and second because the planner
> already examines indexes and doesn't need any additional catalog lookups
> to get the required data.  (IOW, the relcache additions that were made in
> this patch series should go away too.)
>
> Aside from the design being basically wrong, the code quality seems pretty
> low.  Aside from the missing IsA check that started this thread, I found
> the following problems in a quick once-over of 137805f89:
>
> Bugs in quals_match_foreign_key():
>
> * Test for clause->opno == fkinfo->conpfeqop[i] fails to consider
> cross-type operators, ie, what's in the clause might be int2 = int4
> while the conpfeqop is int4 = int2.
>
> * parent_ec check isn't really the right thing, since EC-derived clauses
> might not have that set.  I think it may be adequate given that you only
> deal with simple Vars, but at least a comment about that would be good.
>
> * Come to think of it, you could probably dodge the commutator operator
> problem altogether for clauses with nonnull parent_ec, because they must
> contain a relevant equality operator.  (Although if it's redesigned as I
> suggest above, the code path for a clause with parent_ec set would look
> totally different from this anyway.)
>
> * Maintaining the fkmatches bitmapset is useless expense, just use a
> counter of matched keys.  Or for that matter, why not just fail
> immediately if i'th key is not found?
>
> find_best_foreign_key_quals():
>
> * Test on enable_fkey_estimates should be one call level up to dodge the
> useless doubly-nested loop in clauselist_join_selectivity (which would
> make the "fast path" exit here pretty much useless)
>
> clauselist_join_selectivity():
>
> * "either could be zero, but not both" is a pretty unhelpful comment given
> the if-test just above it.  What *could* have used some explanation is
> what the next two dozen lines are doing, because they're as opaque as can
> be; and the XXX comment doesn't leave a warm feeling that the author
> understands it either.  I'm not prepared to opine on whether this segment
> of code is correct at all without better commentary.
>
> calc_joinrel_size_estimate():
>
> * why is clauselist_join_selectivity applied to pushed-down clauses and
> not local ones in an outer join?  If that's not an oversight, it at least
> requires an explanatory comment.  Note that this means we are not applying
> FK knowledge for "a left join b on x = y", though it seems like we could.
>
> compute_semi_anti_join_factors isn't using clauselist_join_selectivity
> either.  I don't know whether it should be, but if not, a comment about
> why not seems appropriate.  More generally, I wonder why this logic
> wasn't just folded into clauselist_selectivity.
>
> guc.c:
> undocumented GUCs are not acceptable
>
> paths.h:
> patch introduces an extern that's referenced noplace
>
> In short, I'm not left with a warm fuzzy feeling about this patch having
> been ready to commit.  The predecessor patch 015e88942 was also
> underreviewed, cf 5306df283.
>
>                         regards, tom lane
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers



-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Robert Haas
On Wed, May 4, 2016 at 5:51 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, May 4, 2016 at 2:54 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I spent some time trying to make a test case that was impossibly slow,
>> without any really impressive result: I saw at most about a 25% growth in
>> planning time, for a 27-way join with one or two foreign keys per table.
>> I noted however that with a simple FROM list of tables, you don't really
>> get the full force of the combinatorial explosion, because
>> join_search_one_level prefers to generate left-deep trees first, and so
>> the first creation of a given joinrel is always N left-side rels against 1
>> right-side rel, causing the second level of looping to always iterate just
>> once.  (GEQO behaves likewise, I think.)  I spent a little time trying to
>> devise join order constraints that would result in a lot of high-level
>> joinrels being formed with many relations on both sides, which would cause
>> both of the second and third levels to iterate O(N) times not just once.
>> I didn't have much luck, but I have zero confidence that our users won't
>> find such cases.
>
> I'

Well, that wasn't my best-ever email to the list.  Sigh.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: Robert Haas
On Wed, May 4, 2016 at 2:54 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I spent some time trying to make a test case that was impossibly slow,
> without any really impressive result: I saw at most about a 25% growth in
> planning time, for a 27-way join with one or two foreign keys per table.
> I noted however that with a simple FROM list of tables, you don't really
> get the full force of the combinatorial explosion, because
> join_search_one_level prefers to generate left-deep trees first, and so
> the first creation of a given joinrel is always N left-side rels against 1
> right-side rel, causing the second level of looping to always iterate just
> once.  (GEQO behaves likewise, I think.)  I spent a little time trying to
> devise join order constraints that would result in a lot of high-level
> joinrels being formed with many relations on both sides, which would cause
> both of the second and third levels to iterate O(N) times not just once.
> I didn't have much luck, but I have zero confidence that our users won't
> find such cases.

Have you looked at the patch David Rowley proposed to fix this by
doing some caching?  I am not crazy about accepting even a 25% growth
in planning time on a 27-way join, although maybe 27-way joins are
rare enough and vulnerable enough to bad plans that it would be worth
it if we could convince ourselves that plan quality would go up.  But
if that patch drops it to some much lesser number, we should consider
that as a possible fix.

> Bugs in quals_match_foreign_key():
>
> * Test for clause->opno == fkinfo->conpfeqop[i] fails to consider
> cross-type operators, ie, what's in the clause might be int2 = int4
> while the conpfeqop is int4 = int2.
>
> * parent_ec check isn't really the right thing, since EC-derived clauses
> might not have that set.  I think it may be adequate given that you only
> deal with simple Vars, but at least a comment about that would be good.
>
> * Come to think of it, you could probably dodge the commutator operator
> problem altogether for clauses with nonnull parent_ec, because they must
> contain a relevant equality operator.  (Although if it's redesigned as I
> suggest above, the code path for a clause with parent_ec set would look
> totally different from this anyway.)
>
> * Maintaining the fkmatches bitmapset is useless expense, just use a
> counter of matched keys.  Or for that matter, why not just fail
> immediately if i'th key is not found?

Technically, only the first of these is a clear bug, IMHO.  But it
seems like they should all be fixed.

> find_best_foreign_key_quals():
>
> * Test on enable_fkey_estimates should be one call level up to dodge the
> useless doubly-nested loop in clauselist_join_selectivity (which would
> make the "fast path" exit here pretty much useless)

Yes, that's pretty stupid, and should be fixed.  Coding style is not
per project spec, either.  Also, the header comment for
find_best_foreign_key_quals and in fact the name of the function look
pretty poor.  It seems that the return value is the number of columns
in the foreign key, and there is an out parameter, joinqualsbitmap,
whose exact meaning doesn't seem to be documented in any comment
anywhere in the patch.

> clauselist_join_selectivity():
>
> * "either could be zero, but not both" is a pretty unhelpful comment given
> the if-test just above it.  What *could* have used some explanation is
> what the next two dozen lines are doing, because they're as opaque as can
> be; and the XXX comment doesn't leave a warm feeling that the author
> understands it either.  I'm not prepared to opine on whether this segment
> of code is correct at all without better commentary.

I'm pretty baffled by this code, too.  I think what the overlap stuff
is doing is trying to calculate selectivity when we match multiple
foreign key constraints, but it doesn't look very principled.
find_best_foreign_key_quals discards "shorter" matches entirely,
picking arbitrarily among longer ones, but then we try to deal with
all of the ones that survive that stage even if they overlap.  It's
hard to judge whether any of this makes sense without more
explanation.

> calc_joinrel_size_estimate():
>
> * why is clauselist_join_selectivity applied to pushed-down clauses and
> not local ones in an outer join?  If that's not an oversight, it at least
> requires an explanatory comment.  Note that this means we are not applying
> FK knowledge for "a left join b on x = y", though it seems like we could.
>
> compute_semi_anti_join_factors isn't using clauselist_join_selectivity
> either.  I don't know whether it should be, but if not, a comment about
> why not seems appropriate.  More generally, I wonder why this logic
> wasn't just folded into clauselist_selectivity.

Good questions.

> guc.c:
> undocumented GUCs are not acceptable

Agreed.

> paths.h:
> patch introduces an extern that's referenced noplace

Oops.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

From: David Rowley
On 5 May 2016 at 06:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> David Rowley <david.rowley@2ndquadrant.com> writes:
>> On 4 May 2016 at 09:18, David Rowley <david.rowley@2ndquadrant.com> wrote:
>>> On 4 May 2016 at 02:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
>>>> There are probably a few reasonably simple things we could do - e.g. ignore
>>>> foreign keys with just a single column, as the primary goal of the patch is
>>>> improving estimates with multi-column foreign keys. I believe that covers a
>>>> vast majority of foreign keys in the wild.
>
>> I've spent a few hours looking at this and I've come up with the
>> attached patch, which flags each ForeignKeyOptInfo to say whether its
>> possible to be referenced in any join condition, with the logic that
>> if the referenced relation is in the simple_rte_array, then it could
>> be referenced.
>
> My fundamental problem with the FK-selectivity patch is that it looks
> an awful lot like a preliminary proof-of-concept that got committed.

I don't disagree that there were some mistakes made. The last time I
saw this work was when I proposed some changes to Tomas' patch, which
was quite a long time ago now. I'd not gotten to look at it since
then.

> I do not like the basic design: it's about as brute-force as could
> possibly be.  It adds code that is executed:
>
>   * at least once per join relation created (hence, significantly more than
>       the number of rels in the query; see also get_joinrel_parampathinfo)
>     * for each inner relation in the initial input joinrel pair
>       * for each outer relation in the initial joinrel pair
>         * for each foreign key constraint on this inner and outer rel
>           * for each key column in that FK
>             * for each join qual for the current input joinrel pair
>               * for each member of the relevant EquivalenceClass
>
> This is at least O(N^3) in the number of baserels in the query, not to
> mention the other multipliers.  I'm not very impressed by tests that scale
> only one of the multipliers (like the number of FK constraints); where the
> pain is going to come in is when all of these factors are large.

Yes, it's quite a lot of nesting. The patch I sent yesterday helps to
reduce the number of foreign keys considered. The part I'm not all
that happy with now is the fact that quite a bit of repeat work gets
done during the join search. It would be nice to cache some of this
but nothing came to mind, as we need to record the position of each
joinqual so that we only estimate the non-matched ones with the
standard estimation technique. The order of, or items in that list
won't be fixed as more relations are added to the mix during the join
search.

> I spent some time trying to make a test case that was impossibly slow,
> without any really impressive result: I saw at most about a 25% growth in
> planning time, for a 27-way join with one or two foreign keys per table.
> I noted however that with a simple FROM list of tables, you don't really
> get the full force of the combinatorial explosion, because
> join_search_one_level prefers to generate left-deep trees first, and so
> the first creation of a given joinrel is always N left-side rels against 1
> right-side rel, causing the second level of looping to always iterate just
> once.  (GEQO behaves likewise, I think.)  I spent a little time trying to
> devise join order constraints that would result in a lot of high-level
> joinrels being formed with many relations on both sides, which would cause
> both of the second and third levels to iterate O(N) times not just once.
> I didn't have much luck, but I have zero confidence that our users won't
> find such cases.

Did you test that with or without my patch from yesterday?

> The reason it's so brute force is that it's rediscovering the same facts
> over and over.  In all simple (non-outer-join) cases, the only clauses
> that are of any interest are derived from EquivalenceClasses, and all
> that you really need to know is "are the vars mentioned on each side
> of this FK present in the same EquivalenceClass?".  ISTM that could be
> determined once per FK per query and cached in or near the EC, not
> expensively rediscovered at each level of joining.  I'm not sure whether
> it'd be worth considering non-EC-derived equalities (ie, outer join
> clauses) at all, and note that the current patch fails to do so anyway;
> see below.

That's interesting, but requires a good bit of thought to how it might work.

> My other design-level complaint is that basing this on foreign keys is
> fundamentally the wrong thing.  What actually matters is the unique index
> underlying the FK; that is, if we have "a.x = b.y" and there's a
> compatible unique index on b.y, we can conclude that no A row will match
> more than one B row, whether or not an explicit FK relationship has been
> declared.  So we should drive this off unique indexes instead of FKs,
> first because we will find more cases, and second because the planner
> already examines indexes and doesn't need any additional catalog lookups
> to get the required data.  (IOW, the relcache additions that were made in
> this patch series should go away too.)

I think your wires are crossed as to what this patch actually does. A
unique index could only prove that no more than 1 row exists. This
goes further and proves that exactly 1 exists, then reduces that estimate by
any other join conditions which were not matched to a foreign key.
Perhaps there are some improvements to be made to the way estimations
work using unique indexes, but that's another problem that Tomas was
not aiming to tackle with this patch.
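
To make the distinction concrete with a tiny, purely hypothetical schema:
a unique index or primary key on b.y only proves "at most one matching
row", while a declared FK from a.x to b.y additionally guarantees that a
match exists for every non-null a.x, which is what lets the estimate
assume exactly one row:

create table b (y int primary key);
create table a_plain (x int);            -- unique index only: at most one match
create table a_fk (x int references b);  -- FK: exactly one match for non-null x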

> Aside from the design being basically wrong, the code quality seems pretty
> low.  Aside from the missing IsA check that started this thread, I found
> the following problems in a quick once-over of 137805f89:
>
> Bugs in quals_match_foreign_key():
>
> * Test for clause->opno == fkinfo->conpfeqop[i] fails to consider
> cross-type operators, ie, what's in the clause might be int2 = int4
> while the conpfeqop is int4 = int2.

Yes. I did write this and yes that does need to test for the
commutative operator as the order of the condition could be reversed
and the types may not match.
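
A contrived sketch of the sort of case I mean, assuming a cross-type FK
from a smallint column to an int primary key; depending on which way the
join clause is written, the operator in the clause may be int2 = int4
while conpfeqop records int4 = int2, so comparing opno alone would miss
the match:

create table parent (id int primary key);
create table child (parent_id smallint references parent (id));

explain select * from child c join parent p on c.parent_id = p.id;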

> * parent_ec check isn't really the right thing, since EC-derived clauses
> might not have that set.  I think it may be adequate given that you only
> deal with simple Vars, but at least a comment about that would be good.

It's not all that clear to me which cases the patch would fail to find
the FK because of this. If there's no parent_ec won't it be matched
manually by the code in the else condition?

> * Come to think of it, you could probably dodge the commutator operator
> problem altogether for clauses with nonnull parent_ec, because they must
> contain a relevant equality operator.  (Although if it's redesigned as I
> suggest above, the code path for a clause with parent_ec set would look
> totally different from this anyway.)
>
> * Maintaining the fkmatches bitmapset is useless expense, just use a
> counter of matched keys.  Or for that matter, why not just fail
> immediately if i'th key is not found?

I'd say that's a wrong assumption and that check still needs to be
there, as a duplicate condition could inflate the number of matches. We
can't simply count the number of joinquals matched and assume that if
we find that number to be the same as the number of keys in the
foreign key that all is well... One key could be matched twice, and
another not at all.

>
> find_best_foreign_key_quals():
>
> * Test on enable_fkey_estimates should be one call level up to dodge the
> useless doubly-nested loop in clauselist_join_selectivity (which would
> make the "fast path" exit here pretty much useless)

If you ask me, that GUC needs to be removed. As far as I understand it,
that was only ever for testing. He's not around to ask at the moment,
but I'd imagine it was always destined to be removed before release.

> clauselist_join_selectivity():
>
> * "either could be zero, but not both" is a pretty unhelpful comment given
> the if-test just above it.  What *could* have used some explanation is
> what the next two dozen lines are doing, because they're as opaque as can
> be; and the XXX comment doesn't leave a warm feeling that the author
> understands it either.  I'm not prepared to opine on whether this segment
> of code is correct at all without better commentary.

Yes, I agree.

> calc_joinrel_size_estimate():
>
> * why is clauselist_join_selectivity applied to pushed-down clauses and
> not local ones in an outer join?  If that's not an oversight, it at least
> requires an explanatory comment.  Note that this means we are not applying
> FK knowledge for "a left join b on x = y", though it seems like we could.
>
> compute_semi_anti_join_factors isn't using clauselist_join_selectivity
> either.  I don't know whether it should be, but if not, a comment about
> why not seems appropriate.  More generally, I wonder why this logic
> wasn't just folded into clauselist_selectivity.
>
> guc.c:
> undocumented GUCs are not acceptable

It should be removed.

>
> paths.h:
> patch introduces an extern that's referenced noplace

That's not great :-(

> In short, I'm not left with a warm fuzzy feeling about this patch having
> been ready to commit.  The predecessor patch 015e88942 was also
> underreviewed, cf 5306df283.

I've started making some improvements to this, but need to talk to
Tomas. It's currently in the middle of his night, but will try to
catch him in his morning to discuss this with him.

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
David Rowley
Дата:
On 5 May 2016 at 16:04, David Rowley <david.rowley@2ndquadrant.com> wrote:
> I've started making some improvements to this, but need to talk to
> Tomas. It's currently in the middle of his night, but will try to
> catch him in his morning to discuss this with him.

Ok, so I spoke to Tomas about this briefly, and he's asked me to send
in this patch. He didn't get time to look over it due to some other
commitments he has today.

I do personally feel that if the attached is not good enough, or not
very close to good enough, then probably the best course of action is
to revert the whole thing. I'd much rather have this out than feel
some responsibility for someone's performance problems with this
feature. But I also do feel that the patch provides a solution to a big
problem that we currently have with PostgreSQL, one for which there are
no easy workarounds... that's multi-column join estimates.
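
A minimal sketch of that problem, using hypothetical tables: when the
join columns are correlated, multiplying the per-column selectivities
can underestimate the join size badly, and the multi-column FK is what
lets the planner avoid that:

create table dim (a int, b int, primary key (a, b));
create table fact (a int, b int, foreign key (a, b) references dim (a, b));

explain select * from fact f join dim d on f.a = d.a and f.b = d.b;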

In the attached I've left the GUC remaining. The reason for the GUC is
for testing purposes and it should be removed before release. It
should likely be documented though, even if we're planning to remove
it later. I've not gotten around to that in this patch though.


--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Вложения

Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tomas Vondra
Дата:
Hi,

On 05/05/2016 04:48 PM, David Rowley wrote:
> On 5 May 2016 at 16:04, David Rowley <david.rowley@2ndquadrant.com> wrote:
>> I've started making some improvements to this, but need to talk to
>> Tomas. It's currently in the middle of his night, but will try to
>> catch him in his morning to discuss this with him.
>
> Ok, so I spoke to Tomas about this briefly, and he's asked me to
> send in this patch. He didn't get time to look over it due to some
> other commitments he has today.
>

Thanks!
>
> I do personally feel that if the attached is not good enough, or not
> very close to good enough then probably the best course of action is
> to revert the whole thing. I'd much rather have this out than to
> feel some responsibility for someone's performance problems with
> this feature.>

I share this opinion. If the approach proposed here is deemed not good 
enough, then +1 to revert.

I think we can further limit the impact by ignoring foreign keys on a 
single column, because the primary goal of the patch is improving 
estimates with multi-column FKs (and matching joins). I'd argue that 99% 
of the FKs in practice are single-column ones, but sure - if you have a 
database with many multi-column FKs this won't help.

I think it's also important to point out that whatever solution we 
choose, it should probably allow us to relax some of the restrictions in 
the future. For example, with a FK on 3 columns, the current patch only 
handles "full match" joins, with conditions on all three columns. But 
it may be possible to also improve estimates on just two columns - the 
current patch does not do that, as that definitely needs further 
thought and discussion.
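
To illustrate, with a hypothetical three-column FK, only the first query
below would currently benefit; the second one, joining on just two of the
three columns, falls back to the standard per-clause estimate:

create table ref3 (a int, b int, c int, primary key (a, b, c));
create table fk3 (a int, b int, c int,
                  foreign key (a, b, c) references ref3 (a, b, c));

-- full match: conditions on all three FK columns
explain select * from fk3 f join ref3 r
  on f.a = r.a and f.b = r.b and f.c = r.c;

-- partial match: not handled by the current patch
explain select * from fk3 f join ref3 r
  on f.a = r.a and f.b = r.b;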
>
> But I also do feel that the patch allows a solution to a big problem
> that we currently have with PostgreSQL, which there's any easy
> workarounds for... that's multi-column join estimates.
>

In a sense, yes. FKs allow us to improve estimates for a fairly narrow 
case of multi-column estimates - joins matching multi-column FKs. And 
for this (not entirely narrow) case they are actually more accurate and 
way cheaper than e.g. histograms or MCV lists (at least in the 
multi-column case).

FWIW the current multivariate stats patch does not even try to mess with 
joins, as it's quite complicated in the multi-column case.

>
> In the attached I've left the GUC remaining. The reason for the GUC
> is for testing purposes and it should be removed before release. It
> should likely be documented though, even if we're planning to remove
> it later. I've not gotten around to that in this patch though.
>

Yes, removal of the GUC should go to the Open Items list.


I'd also like to make it clear that all the shitty parts of the patch 
are my fault, not David's. His name is on the patch as he provided a lot 
of useful ideas, pieces of code and feedback in general. But it was up 
to me to craft the final patch - and I clearly failed to do so. I think 
David still feels responsible for the patch as he made a lot of effort 
to salvage it over the last few days, so I'd like to set the record 
straight.


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
David Rowley
Дата:
On 6 May 2016 at 02:48, David Rowley <david.rowley@2ndquadrant.com> wrote:
> In the attached I've left the GUC remaining. The reason for the GUC is
> for testing purposes and it should be removed before release. It
> should likely be documented though, even if we're planning to remove
> it later. I've not gotten around to that in this patch though.

I've attached a patch which is the bare minimum fix for the crash
reported by Stefan. I don't think we need to debate whether
this goes in.

I also included removing that bogus function declaration from paths.h
as it was upsetting me a bit, and also there should be no debate on
that.


--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Вложения

Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Robert Haas
Дата:
On Thu, May 5, 2016 at 10:48 AM, David Rowley
<david.rowley@2ndquadrant.com> wrote:
> On 5 May 2016 at 16:04, David Rowley <david.rowley@2ndquadrant.com> wrote:
>> I've started making some improvements to this, but need to talk to
>> Tomas. It's currently in the middle of his night, but will try to
>> catch him in his morning to discuss this with him.
>
> Ok, so I spoke to Tomas about this briefly, and he's asked me to send
> in this patch. He didn't get time to look over it due to some other
> commitments he has today.
>
> I do personally feel that if the attached is not good enough, or not
> very close to good enough then probably the best course of action is
> to revert the whole thing.

Tom, what do you think about this patch?  Is it good enough, or should
we revert the whole thing?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Simon Riggs
Дата:
On 3 May 2016 at 16:10, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
 
While this in itself is about a two-line fix, after reviewing
137805f89acb3611 I'm pretty unhappy that it got committed at all.
I think this obvious bug is a good sign that it wasn't ready.
Other unfinished aspects like invention of an undocumented GUC
don't leave a good impression either.

The GUC is meant to make testing during development easier. I hadn't realized it got committed, but I assume Simon plans to remove it before the final release.

Yes, the GUC was for testing, as mentioned on the commit message.
 
Can't check right now with Simon, though, as he is out of office this week.

Am back and reading.
 
--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Simon Riggs
Дата:
On 6 May 2016 at 01:00, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
 
I think we can further limit the impact by ignoring foreign keys on a single column, because the primary goal of the patch is improving estimates with multi-column FKs (and matching joins). I'd argue that 99% of the FKs in practice are single-column ones, but sure - if you have a database with many multi-column FKs this won't help.

I think it's also important to point out that whatever solution we choose, it should probably allow us to relax some of the restrictions in the future. For example, with a FK on 3 columns, the current patch only handles "full match" joins, with conditions on all three columns. But it may be possible to also improve estimates on just two columns - the current patch does not do that, as that definitely needs further thought and discussion.

The context of the patch is important. If we can save this for 9.6, we should.

Multi-column joins are currently badly estimated. Multi-variate stats would be useful here also, but I chose not to commit that patch since it was much larger. The current patch is small, isolated and yet very effective in solving the estimation problem in some cases. Committing and possibly reverting the feature was/is possible without collateral damage.

The current patch works only for multi-column joins, matching them against multi-column FKs. I see no reason why that should cause massively negative performance, because that is a small subset of cases; I did test that to check for regressions, but it's possible I missed something that does cause additional run-time in real world cases.

Clearly, an optimizer test suite would be helpful in analyzing the effect of new patches.

I'll continue to look at this and comment further.

--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tomas Vondra
Дата:
Hi,

Attached is a minor revision of the patch posted by David a few days
ago, rebased on the current master (which already includes 68d704 fixing
the segfault that started this thread).

The modifications are fairly small:

* The 'possibleRef' flag is renamed to 'use_for_estimation' which I
think better describes the purpose of the flag.

* The mark_useful_foreign_keys now skips foreign keys on a single
column, as those are not useful for the patch at all. This should
further reduce performance impact in the common case.

* I've slightly reworded some of the comments, hopefully for the better.


But now the bad news - while reviewing David's fixup patch, I've
noticed this comment

     /* XXX do we need to add entries for the append_rel_list here? */

and I've realized that the estimation does not quite work with
partitioned tables, as it only checks foreign keys on the parent tables,
but the child tables may not use the foreign keys at all, or even use
different foreign keys (a bit bizarre, but possible).
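
A simple hypothetical example of the hazard - foreign key constraints are
not inherited, so the parent may declare the FK while the child rows are
not constrained at all, making the FK-based estimate unreliable for the
inheritance tree as a whole:

create table ref (id int primary key);
create table parent (ref_id int references ref (id));
create table child () inherits (parent);   -- no FK of its own

explain select * from parent p join ref r on p.ref_id = r.id;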

Obviously the simplest solution would be to simply stop considering
foreign keys with partitioned tables - that seems a bit unfortunate, but
AFAIK we don't inspect child tables during planning anyway, and in the
worst case we fall back to the current estimate.

It might be possible to improve this by checking that all child tables
have a matching foreign key (referencing the same table), which would
allow us to handle the case with one side partitioned. And maybe some
other (more complex) cases, like equi-partitioned tables. But all of
this would require a fair amount of new code, far more than we should
commit in beta mode.


To summarize all of this, I think David's patch marking usable foreign
keys greatly reduces the overhead compared to the committed version. The
'skip 1-column FKs' significantly reduces (or even eliminates) the
overhead for the common case where only few FKs use multiple columns.

Not handling the inheritance properly is clearly a serious oversight,
though. Tom is clearly right that this got committed a bit too early.


If the conclusion is that the current performance impact is still not
acceptable, or that we need a better solution to the inheritance issue
than simply disabling the FK estimates, then I think reverting the patch
is the only possible solution at this point.


regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Вложения

Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Simon Riggs
Дата:
On 9 May 2016 at 00:24, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
Hi,

Attached is a minor revision of the patch posted by David a few days ago, rebased on the current master (which already includes 68d704 fixing the segfault that started this thread).

The modifications are fairly small:

* The 'possibleRef' flag is renamed to 'use_for_estimation' which I think better describes the purpose of the flag.

* The mark_useful_foreign_keys now skips foreign keys on a single column, as those are not useful for the patch at all. This should further reduce performance impact in the common case.

Good, thanks.
 
* I've slightly reworded some of the comments, hopefully for the better.
 
But now the bad news - while reviewing David's fixup patch, I've noticed this comment

    /* XXX do we need to add entries for the append_rel_list here? */

and I've realized that the estimation does not quite work with partitioned tables, as it only checks foreign keys on the parent tables, but the child tables may not use the foreign keys at all, or even use different foreign keys (a bit bizarre, but possible).

Obviously the simplest solution would be to simply stop considering foreign keys with partitioned tables - that seems a bit unfortunate, but AFAIK we don't inspect child tables during planning anyway, and in the worst case we fall back to the current estimate.

It might be possible to improve this by checking that all child tables have a matching foreign key (referencing the same table), which would allow us to handle the case with one side partitioned. And maybe some other (more complex) cases, like equi-partitioned tables. But all of this would require a fair amount of new code, far more than we should commit in beta mode.


To summarize all of this, I think David's patch marking usable foreign keys greatly reduces the overhead compared to the committed version. The 'skip 1-column FKs' significantly reduces (or even eliminates) the overhead for the common case where only few FKs use multiple columns.

Not handling the inheritance properly is clearly a serious oversight, though. Tom is clearly right that this got committed a bit too early.

If the conclusion is that the current performance impact is still not acceptable, or that we need a better solution to the inheritance issue than simply disabling the FK estimates, then I think reverting the patch is the only possible solution at this point.

I disagree.

The purpose of the patch is to improve planning in simple and important common cases. It does that. The fact that it does not cover all cases is understood and we will make more progress in future releases. That we don't handle the inheritance case well in many ways is not this patch's fault, or problem.

There's no point putting years of effort into parallel query if we can't work out when to use it sensibly. This patch makes major gains in that area.

--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Noah Misch
Дата:
On Fri, May 06, 2016 at 03:06:01PM -0400, Robert Haas wrote:
> On Thu, May 5, 2016 at 10:48 AM, David Rowley
> <david.rowley@2ndquadrant.com> wrote:
> > On 5 May 2016 at 16:04, David Rowley <david.rowley@2ndquadrant.com> wrote:
> >> I've started making some improvements to this, but need to talk to
> >> Tomas. It's currently in the middle of his night, but will try to
> >> catch him in his morning to discuss this with him.
> >
> > Ok, so I spoke to Tomas about this briefly, and he's asked me to send
> > in this patch. He didn't get time to look over it due to some other
> > commitments he has today.
> >
> > I do personally feel that if the attached is not good enough, or not
> > very close to good enough then probably the best course of action is
> > to revert the whole thing.
> 
> Tom, what do you think about this patch?  Is it good enough, or should
> we revert the whole thing?

[This is a generic notification.]

The above-described topic is currently a PostgreSQL 9.6 open item.  Simon,
since you committed the patch believed to have created it, you own this open
item.  If some other commit is more relevant or if this does not belong as a
9.6 open item, please let us know.  Otherwise, please observe the policy on
open item ownership[1] and send a status update within 72 hours of this
message.  Include a date for your subsequent status update.  Testers may
discover new open items at any time, and I want to plan to get them all fixed
well in advance of shipping 9.6rc1.  Consequently, I will appreciate your
efforts toward speedy resolution.  Thanks.

[1] http://www.postgresql.org/message-id/20160527025039.GA447393@tornado.leadboat.com



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Noah Misch
Дата:
On Sun, May 29, 2016 at 01:36:01AM -0400, Noah Misch wrote:
> On Fri, May 06, 2016 at 03:06:01PM -0400, Robert Haas wrote:
> > On Thu, May 5, 2016 at 10:48 AM, David Rowley
> > <david.rowley@2ndquadrant.com> wrote:
> > > On 5 May 2016 at 16:04, David Rowley <david.rowley@2ndquadrant.com> wrote:
> > >> I've started making some improvements to this, but need to talk to
> > >> Tomas. It's currently in the middle of his night, but will try to
> > >> catch him in his morning to discuss this with him.
> > >
> > > Ok, so I spoke to Tomas about this briefly, and he's asked me to send
> > > in this patch. He didn't get time to look over it due to some other
> > > commitments he has today.
> > >
> > > I do personally feel that if the attached is not good enough, or not
> > > very close to good enough then probably the best course of action is
> > > to revert the whole thing.
> > 
> > Tom, what do you think about this patch?  Is it good enough, or should
> > we revert the whole thing?
> 
> [This is a generic notification.]
> 
> The above-described topic is currently a PostgreSQL 9.6 open item.  Simon,
> since you committed the patch believed to have created it, you own this open
> item.  If some other commit is more relevant or if this does not belong as a
> 9.6 open item, please let us know.  Otherwise, please observe the policy on
> open item ownership[1] and send a status update within 72 hours of this
> message.  Include a date for your subsequent status update.  Testers may
> discover new open items at any time, and I want to plan to get them all fixed
> well in advance of shipping 9.6rc1.  Consequently, I will appreciate your
> efforts toward speedy resolution.  Thanks.
> 
> [1] http://www.postgresql.org/message-id/20160527025039.GA447393@tornado.leadboat.com

This PostgreSQL 9.6 open item is past due for your status update.  Kindly send
a status update within 24 hours, and include a date for your subsequent status
update.  Refer to the policy on open item ownership:
http://www.postgresql.org/message-id/20160527025039.GA447393@tornado.leadboat.com



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Robert Haas
Дата:
On Wed, Jun 1, 2016 at 9:29 PM, Noah Misch <noah@leadboat.com> wrote:
> This PostgreSQL 9.6 open item is past due for your status update.  Kindly send
> a status update within 24 hours, and include a date for your subsequent status
> update.  Refer to the policy on open item ownership:
> http://www.postgresql.org/message-id/20160527025039.GA447393@tornado.leadboat.com

FYI, I spoke to Tom Lane about this at PGCon and suggested that he
look at the proposed patch as I requested in
https://www.postgresql.org/message-id/CA+TgmobPqrAVXOBMHTcpDq8hX7gCzcVhoUvC8s9V=d09+bt30w@mail.gmail.com
and see whether that would address his concerns, and he said that he
would do that.  It may, however, have slipped his mind.

My opinion is that something needs to be done about this patch.  It
needs to be improved or reverted.  Improved would be ideal in my mind,
but reverted is an outcome I can live with.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tomas Vondra
Дата:
Hi,

On 06/02/2016 04:18 PM, Robert Haas wrote:
> My opinion is that something needs to be done about this patch. It
> needs to be improved or reverted. Improved would be ideal in my
> mind, but reverted is an outcome I can live with.

FWIW I'm ready to put my time into fixing the issues, but only if 
there's a consensus it's not doomed anyway (for 9.6).

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tom Lane
Дата:
Robert Haas <robertmhaas@gmail.com> writes:
> FYI, I spoke to Tom Lane about this at PGCon and suggested that he
> look at the proposed patch as I requested in
> https://www.postgresql.org/message-id/CA+TgmobPqrAVXOBMHTcpDq8hX7gCzcVhoUvC8s9V=d09+bt30w@mail.gmail.com
> and see whether that would address his concerns, and he said that he
> would do that.  It may, however, have slipped his mind.

Not entirely, though I've been wrapped up in pg_dump fixes for the last
little while.  I'll get to this soon.
        regards, tom lane



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Noah Misch
Дата:
On Wed, Jun 01, 2016 at 09:29:54PM -0400, Noah Misch wrote:
> On Sun, May 29, 2016 at 01:36:01AM -0400, Noah Misch wrote:
> > On Fri, May 06, 2016 at 03:06:01PM -0400, Robert Haas wrote:
> > > On Thu, May 5, 2016 at 10:48 AM, David Rowley
> > > <david.rowley@2ndquadrant.com> wrote:
> > > > On 5 May 2016 at 16:04, David Rowley <david.rowley@2ndquadrant.com> wrote:
> > > >> I've started making some improvements to this, but need to talk to
> > > >> Tomas. It's currently in the middle of his night, but will try to
> > > >> catch him in his morning to discuss this with him.
> > > >
> > > > Ok, so I spoke to Tomas about this briefly, and he's asked me to send
> > > > in this patch. He didn't get time to look over it due to some other
> > > > commitments he has today.
> > > >
> > > > I do personally feel that if the attached is not good enough, or not
> > > > very close to good enough then probably the best course of action is
> > > > to revert the whole thing.
> > > 
> > > Tom, what do you think about this patch?  Is it good enough, or should
> > > we revert the whole thing?
> > 
> > [This is a generic notification.]
> > 
> > The above-described topic is currently a PostgreSQL 9.6 open item.  Simon,
> > since you committed the patch believed to have created it, you own this open
> > item.  If some other commit is more relevant or if this does not belong as a
> > 9.6 open item, please let us know.  Otherwise, please observe the policy on
> > open item ownership[1] and send a status update within 72 hours of this
> > message.  Include a date for your subsequent status update.  Testers may
> > discover new open items at any time, and I want to plan to get them all fixed
> > well in advance of shipping 9.6rc1.  Consequently, I will appreciate your
> > efforts toward speedy resolution.  Thanks.
> > 
> > [1] http://www.postgresql.org/message-id/20160527025039.GA447393@tornado.leadboat.com
> 
> This PostgreSQL 9.6 open item is past due for your status update.  Kindly send
> a status update within 24 hours, and include a date for your subsequent status
> update.  Refer to the policy on open item ownership:
> http://www.postgresql.org/message-id/20160527025039.GA447393@tornado.leadboat.com

IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 9.6 open item is long past due
for your status update.  Please reacquaint yourself with the policy on open
item ownership[1] and then reply immediately.  If I do not hear from you by
2016-06-04 15:00 UTC, I will transfer this item to release management team
ownership without further notice.

[1] http://www.postgresql.org/message-id/20160527025039.GA447393@tornado.leadboat.com



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tom Lane
Дата:
Robert Haas <robertmhaas@gmail.com> writes:
> FYI, I spoke to Tom Lane about this at PGCon and suggested that he
> look at the proposed patch as I requested in
> https://www.postgresql.org/message-id/CA+TgmobPqrAVXOBMHTcpDq8hX7gCzcVhoUvC8s9V=d09+bt30w@mail.gmail.com
> and see whether that would address his concerns, and he said that he
> would do that.  It may, however, have slipped his mind.

> My opinion is that something needs to be done about this patch.  It
> needs to be improved or reverted.  Improved would be ideal in my mind,
> but reverted is an outcome I can live with.

I've taken a look at fkest_fixes_v2.patch.  I do not like
mark_useful_foreign_keys one bit: it adds *more* overhead to *every*
query, whether or not there's anything to be saved.  Also, adding a
flag to the ForeignKeyOptInfo entries doesn't save having to iterate
over the irrelevant entries.  We should arrange for them not to be
in the list in the first place.

The direction I would go in to get rid of irrelevant FKs is more
like this:

* Make RelationGetFKeyList cache a list of ForeignKeyOptInfo structs,
not just constraint OIDs.  It's insane that the relcache scans
pg_constraint to collect those OIDs and then the planner re-reads all
those same rows on every planning cycle.  Allow the relcache to return
a pointer to the list in-cache, and require the planner to copy that
data before making any more cache requests.  The planner would run
through the list, skipping single-key entries and entries leading to
irrelevant tables, and copy out just the items that are useful for
the current query (probably adding query-specific table RTE indexes
at the same time).

* Personally I'd probably handle the "ignore irrelevant tables" test
with a list_member_oid test on a list of relation OIDs, not a hashtable.
It's unlikely that there are enough tables in the query to justify
building a hashtable.

* All of the above should happen only if the query contains multiple
tables; there is no reason to expend even one cycle on loading FK data
in a simple single-table query.

But more generally, this entire approach does very little to improve
the problem of nested loops leading to probable poor big-O behavior.
I previously complained that the patch adds code that is executed

>   * at least once per join relation created (hence, significantly more than
>       the number of rels in the query; see also get_joinrel_parampathinfo)
>     * for each inner relation in the initial input joinrel pair
>       * for each outer relation in the initial joinrel pair
>         * for each foreign key constraint on this inner and outer rel
>           * for each key column in that FK
>             * for each join qual for the current input joinrel pair
>               * for each member of the relevant EquivalenceClass

and only the fourth of those seven nested loops is significantly shortened
by this patch.  It's still about as brute-force as can be.  I think that
fkest_fixes_v2.patch really only makes things significantly better for
cases where a table in the query has a huge number of FKs leading to
tables not in the query, and TBH I think that's an artificial test
condition that will seldom have much to do with real-world performance.

It's also pretty bad that "for each foreign key constraint on this inner
and outer rel" means "for each FK leading out of either the inner or outer
rel, whether or not it leads to the other side".  The existing code has
to do that in order to handle cases where the particular representative
clause generated from an EC doesn't match the FK but another clause could
have.  I still think that some more careful thought about relating FKs
to ECs could result in elimination of a couple of these looping levels
altogether.

This is certainly all attackable, and I'm even willing to help work on it,
but it's going to lead to a rather larger patch than we usually like to
apply in beta.  And there are still all the non-planning-speed cleanup
issues to be dealt with.  fkest_fixes_v2.patch addresses some of them
but I still feel that a lot of work is needed on the comments.

Also, there are serious bugs remaining, even without considering planning
speed.  An example I just noticed is that quals_match_foreign_key actually
fails entirely to ensure that it's found a match to the FK, because there
is no check on whether foreignrel has anything to do with the FK.  That
is, this bit:
        * We make no attempt in checking that this foreign key actually
        * references 'foreignrel', the reasoning here is that we may be able
        * to match the foreign key to an eclass member Var of a RestrictInfo
        * that's in qualslist, this Var may belong to some other relation.
 

would be okay if you checked after identifying a matching eclass member
that it belonged to the FK's referenced table, but AFAICS there is no such
check anywhere.

We need to decide whether we want to put a significant amount of time
into this patch over the next couple of weeks, to the exclusion of other
things, or revert it until the next devel cycle opens.  Since we're not
exactly just sitting around twiddling our thumbs, I'm hesitant to commit
the required amount of time to pursue the first course.  But we've
commissioned the RMT to make these sorts of decisions, so it's in their
lap.
        regards, tom lane



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Noah Misch
Дата:
On Fri, Jun 03, 2016 at 10:32:24AM -0400, Noah Misch wrote:
> On Wed, Jun 01, 2016 at 09:29:54PM -0400, Noah Misch wrote:
> > On Sun, May 29, 2016 at 01:36:01AM -0400, Noah Misch wrote:
> > > On Fri, May 06, 2016 at 03:06:01PM -0400, Robert Haas wrote:
> > > > On Thu, May 5, 2016 at 10:48 AM, David Rowley <david.rowley@2ndquadrant.com> wrote:
> > > > > Ok, so I spoke to Tomas about this briefly, and he's asked me to send
> > > > > in this patch. He didn't get time to look over it due to some other
> > > > > commitments he has today.
> > > > >
> > > > > I do personally feel that if the attached is not good enough, or not
> > > > > very close to good enough then probably the best course of action is
> > > > > to revert the whole thing.
> > > > 
> > > > Tom, what do you think about this patch?  Is it good enough, or should
> > > > we revert the whole thing?
> > > 
> > > [This is a generic notification.]
> > > 
> > > The above-described topic is currently a PostgreSQL 9.6 open item.  Simon,
> > > since you committed the patch believed to have created it, you own this open
> > > item.  If some other commit is more relevant or if this does not belong as a
> > > 9.6 open item, please let us know.  Otherwise, please observe the policy on
> > > open item ownership[1] and send a status update within 72 hours of this
> > > message.  Include a date for your subsequent status update.  Testers may
> > > discover new open items at any time, and I want to plan to get them all fixed
> > > well in advance of shipping 9.6rc1.  Consequently, I will appreciate your
> > > efforts toward speedy resolution.  Thanks.
> > 
> > This PostgreSQL 9.6 open item is past due for your status update.  Kindly send
> > a status update within 24 hours, and include a date for your subsequent status
> > update.  Refer to the policy on open item ownership:
> > http://www.postgresql.org/message-id/20160527025039.GA447393@tornado.leadboat.com
> 
> IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 9.6 open item is long past due
> for your status update.  Please reacquaint yourself with the policy on open
> item ownership[1] and then reply immediately.  If I do not hear from you by
> 2016-06-04 15:00 UTC, I will transfer this item to release management team
> ownership without further notice.
> 
> [1] http://www.postgresql.org/message-id/20160527025039.GA447393@tornado.leadboat.com

This PostgreSQL 9.6 open item now needs a permanent owner.  I want PostgreSQL
to have this planner functionality, but I cannot both give it the attention it
needs and meet commitments predating this open item.  Would any other
committer like to take ownership?  If this role interests you, please read
this thread and the policy linked above, then send an initial status update
bearing a date for your subsequent status update.  If the item does not have a
permanent owner by 2016-06-07 22:00 UTC, I will resolve the item by reverting
commits 68d704e and 137805f.

Thanks,
nm



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tom Lane
Дата:
David Rowley <david.rowley@2ndquadrant.com> writes:
> I think your wires are crossed to what this patch actually does. A
> unique index could only prove that no more than 1 rows exists. This
> goes to prove that exactly 1 exists, then will reduce that estimate by
> any other join conditions which were not matched to a foreign key.

BTW, I thought some more about this, and I believe the patch is a few
bricks shy of a load in this respect.  An FK constraint will enforce that
a referencing row matches exactly one referenced row only if all the
referencing columns are marked NOT NULL.  If any nulls are allowed, then
we are back to the unique-index situation, ie we can only conclude that
there is at most one matching row.

I do not think this means that we must dial the patch back to only
considering FKs that have NOT NULL on all their columns.  If we could
estimate the fraction of referencing rows that have any nulls, we could
still arrive at a selectivity estimate that's better than we get when
disregarding the FK altogether: instead of 1/num_referenced_rows it'd be
fraction-of-referencing-rows-without-nulls/num_referenced_rows, since the
rows containing nulls are guaranteed to have 0 matches rather than 1.
However, since the statistics we have at hand only tell us the fraction of
nulls in each column separately, making a fraction-with-any-nulls estimate
for a multi-column FK is going to be pretty spongy.
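
For instance, a rough sketch of that estimate for a hypothetical
two-column FK on t(f1, f2), treating the per-column null fractions in
pg_stats as independent - which is exactly the spongy part:

select (1 - s1.null_frac) * (1 - s2.null_frac) as frac_without_nulls
from pg_stats s1, pg_stats s2
where s1.tablename = 't' and s1.attname = 'f1'
  and s2.tablename = 't' and s2.attname = 'f2';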
        regards, tom lane



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tomas Vondra
Дата:
Hi,

While this thread was effectively superseded by the 'new design' thread 
[1], I'd like to address a few points raised here, as they are relevant 
for the new design (at least I believe so).

[1] https://www.postgresql.org/message-id/31041.1465069446@sss.pgh.pa.us

On 06/04/2016 08:15 PM, Tom Lane wrote:
...
>
> * Make RelationGetFKeyList cache a list of ForeignKeyOptInfo structs,
> not just constraint OIDs.  It's insane that the relcache scans
> pg_constraint to collect those OIDs and then the planner re-reads all
> those same rows on every planning cycle.  Allow the relcache to return
> a pointer to the list in-cache, and require the planner to copy that
> data before making any more cache requests.  The planner would run
> through the list, skipping single-key entries and entries leading to
> irrelevant tables, and copy out just the items that are useful for
> the current query (probably adding query-specific table RTE indexes
> at the same time).

That seems like a fairly straightforward change, and I'm not opposed to 
doing that. However RelationGetFKeyList is basically a modified copy of 
RelationGetIndexList, so it shares the same general behavior, including 
caching of OIDs and then constructing IndexOptInfo objects on each 
get_relation_info() call. Why is it 'insane' for foreign keys but not 
for indexes? Or what am I missing?

>
> * Personally I'd probably handle the "ignore irrelevant tables" test
> with a list_member_oid test on a list of relation OIDs, not a
> hashtable. It's unlikely that there are enough tables in the query to
> justify building a hashtable.

OK

>
> * All of the above should happen only if the query contains multiple
> tables; there is no reason to expend even one cycle on loading FK data
> in a simple single-table query.

OK

>
> ... snip the part about nested loops (new design thread) ...
>
> Also, there are serious bugs remaining, even without considering planning
> speed.  An example I just noticed is that quals_match_foreign_key actually
> fails entirely to ensure that it's found a match to the FK, because there
> is no check on whether foreignrel has anything to do with the FK.  That
> is, this bit:
>
>          * We make no attempt in checking that this foreign key actually
>          * references 'foreignrel', the reasoning here is that we may be able
>          * to match the foreign key to an eclass member Var of a RestrictInfo
>          * that's in qualslist, this Var may belong to some other relation.
>
> would be okay if you checked after identifying a matching eclass member
> that it belonged to the FK's referenced table, but AFAICS there is no such
> check anywhere.
>

Perhaps I'm missing something, but I thought this is checked by these 
conditions in quals_match_foreign_key():

1) with ECs (line 3990)
if (foreignrel->relid == var->varno &&
    fkinfo->confkeys[i] == var->varattno)
    foundvarmask |= 1;

2) without ECs (line 4019)

if ((foreignrel->relid == leftvar->varno) &&
    (fkrel->relid == rightvar->varno) &&
    (fkinfo->confkeys[i] == leftvar->varattno) &&
    (fkinfo->conkeys[i] == rightvar->varattno))
{
    ...
}
else if ((foreignrel->relid == rightvar->varno) &&
         (fkrel->relid == leftvar->varno) &&
         (fkinfo->confkeys[i] == rightvar->varattno) &&
         (fkinfo->conkeys[i] == leftvar->varattno))
{
    ...
}
 

Or does that fail for some reason?

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tom Lane
Дата:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
> On 06/04/2016 08:15 PM, Tom Lane wrote:
>> * Make RelationGetFKeyList cache a list of ForeignKeyOptInfo structs,
>> not just constraint OIDs.  It's insane that the relcache scans
>> pg_constraint to collect those OIDs and then the planner re-reads all
>> those same rows on every planning cycle.

> That seems like a fairly straightforward change, and I'm not opposed to 
> doing that. However RelationGetFKeyList is basically a modified copy of 
> RelationGetIndexList, so it shares the same general behavior, including 
> caching of OIDs and then constructing IndexOptInfo objects on each 
> get_relation_info() call. Why is it 'insane' for foreign keys but not 
> for indexes? Or what am I missing?

I would not be in favor of migrating knowledge of IndexOptInfo into the
relcache; it's too planner-specific.  Also, it mostly copies info from
the index's relcache entry, not the parent relation's (which for one
thing would imply locking hazards if we tried to cache that info in the
parent rel).  But for foreign keys, we can cache an image of certain
well-defined fields of certain well-defined rows of pg_constraint, and
that seems like a reasonably arm's-length definition of a responsibility
to give the relcache, especially when it has to visit all and only those
same rows to construct what it's caching now.

>> would be okay if you checked after identifying a matching eclass member
>> that it belonged to the FK's referenced table, but AFAICS there is no such
>> check anywhere.

> Perhaps I'm missing something, but I thought this is checked by these 
> conditions in quals_match_foreign_key():

> 1) with ECs (line 3990)

>     if (foreignrel->relid == var->varno &&
>         fkinfo->confkeys[i] == var->varattno)
>         foundvarmask |= 1;

This checks that you found a joinclause mentioning foreignrel.  But
foreignrel need have nothing to do with the foreign key; it could be any
table in the query.  That comparison of confkeys[] and varattno is thus
checking for column-number equality of two columns that might be from
different relations.  That is, if we have an FK "A.X references B.Y",
and the query contains "A.X = C.Z", this code could match the FK to that
clause if Y and Z happen to have the same data types and column numbers.
        regards, tom lane



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tom Lane
Дата:
... BTW, another thought occurred to me yesterday: it seems like the
existing code hasn't thought through its behavior for multiple foreign
keys very carefully.  That is, suppose we have both "A.J references B.K"
and "A.X references B.Y", as separate FKs not a single multicolumn FK.
The current code goes to some lengths to decide that one of these is
better than the other and then ignore the other.  Why?  Seems to me
that in such a case you want to behave more nearly as you would for a
multicolumn FK, that is discard all the join quals matched to either FK
in favor of a single selectivity estimate based on the number of rows in
the referenced table.  Discarding only the A.J = B.K clause and then
multiplying by the independent selectivity of A.X = B.Y is surely just as
wrong as what we've historically done for multicolumn FKs.  (Correcting
for nulls would take a bit of thought, but I wouldn't be surprised if it
ends up being the same as for the multicolumn-FK case, at least to within
the precision we can hope to get with the available stats for nulls.)
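
For concreteness, a hypothetical schema for that case:

create table b (k int primary key, y int unique);
create table a (j int references b (k),
                x int references b (y));

explain select * from a join b on a.j = b.k and a.x = b.y;

Both quals are backed by FKs leading to the same referenced table, so
arguably the whole clause list should collapse to a single 1/rows(b)
selectivity, rather than treating the second clause as an independent
filter applied on top of the first FK's estimate.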
        regards, tom lane



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tomas Vondra
Дата:
On 06/06/2016 06:15 PM, Tom Lane wrote:
> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
>> On 06/04/2016 08:15 PM, Tom Lane wrote:
>>> * Make RelationGetFKeyList cache a list of ForeignKeyOptInfo structs,
>>> not just constraint OIDs.  It's insane that the relcache scans
>>> pg_constraint to collect those OIDs and then the planner re-reads all
>>> those same rows on every planning cycle.
>
>> That seems like a fairly straightforward change, and I'm not opposed to
>> doing that. However RelationGetFKeyList is basically a modified copy of
>> RelationGetIndexList, so it shares the same general behavior, including
>> caching of OIDs and then constructing IndexOptInfo objects on each
>> get_relation_info() call. Why is it 'insane' for foreign keys but not
>> for indexes? Or what am I missing?
>
> I would not be in favor of migrating knowledge of IndexOptInfo into the
> relcache; it's too planner-specific.  Also, it mostly copies info from
> the index's relcache entry, not the parent relation's (which for one
> thing would imply locking hazards if we tried to cache that info in the
> parent rel).  But for foreign keys, we can cache an image of certain
> well-defined fields of certain well-defined rows of pg_constraint, and
> that seems like a reasonably arm's-length definition of a responsibility
> to give the relcache, especially when it has to visit all and only those
> same rows to construct what it's caching now.
>
>>> would be okay if you checked after identifying a matching eclass member
>>> that it belonged to the FK's referenced table, but AFAICS there is no such
>>> check anywhere.
>
>> Perhaps I'm missing something, but I thought this is checked by these
>> conditions in quals_match_foreign_key():
>
>> 1) with ECs (line 3990)
>
>>     if (foreignrel->relid == var->varno &&
>>         fkinfo->confkeys[i] == var->varattno)
>>         foundvarmask |= 1;
>
> This checks that you found a joinclause mentioning foreignrel.  But
> foreignrel need have nothing to do with the foreign key; it could be any
> table in the query.  That comparison of confkeys[] and varattno is thus
> checking for column-number equality of two columns that might be from
> different relations.  That is, if we have an FK "A.X references B.Y",
> and the query contains "A.X = C.Z", this code could match the FK to that
> clause if Y and Z happen to have the same data types and column numbers.

I don't follow. How could it have 'nothing to do with the foreign key'? 
We're explicitly checking both varno and varattno on both sides of the 
foreign key, and we only consider the column matched if 
both checks pass.

I've tried to come up with an example triggering the issue, but 
unsuccessfully ...


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tomas Vondra
Дата:
On 06/06/2016 06:34 PM, Tom Lane wrote:
> ... BTW, another thought occurred to me yesterday: it seems like the
> existing code hasn't thought through its behavior for multiple foreign
> keys very carefully.  That is, suppose we have both "A.J references B.K"
> and "A.X references B.Y", as separate FKs not a single multicolumn FK.
> The current code goes to some lengths to decide that one of these is
> better than the other and then ignore the other.  Why?  Seems to me
> that in such a case you want to behave more nearly as you would for a
> multicolumn FK, that is discard all the join quals matched to either FK
> in favor of a single selectivity estimate based on the number of rows in
> the referenced table.  Discarding only the A.J = B.K clause and then
> multiplying by the independent selectivity of A.X = B.Y is surely just as
> wrong as what we've historically done for multicolumn FKs.  (Correcting
> for nulls would take a bit of thought, but I wouldn't be surprised if it
> ends up being the same as for the multicolumn-FK case, at least to within
> the precision we can hope to get with the available stats for nulls.)

Yes, that can be improved. The plan was to improve the common case 
first, and then look at the more complicated cases. It might seem like 
hand-waving, but I'd bet 99% of tables are joined on a single FK, so this 
seems like a reasonable approach.

When it comes to improving multiple (multi-column) foreign keys, I think 
it may get way more complicated than it might seem. What if the foreign 
keys overlap, for example? Or what if the keys go in opposite directions 
(cycle). And so on ...
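
For instance, two hypothetical multi-column FKs sharing a column - it's
not obvious how the quals matched to them should be combined:

create table r1 (a int, b int, primary key (a, b));
create table r2 (b int, c int, primary key (b, c));
create table t (a int, b int, c int,
                foreign key (a, b) references r1 (a, b),
                foreign key (b, c) references r2 (b, c));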


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tom Lane
Дата:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
> On 06/06/2016 06:15 PM, Tom Lane wrote:
>> This checks that you found a joinclause mentioning foreignrel.  But
>> foreignrel need have nothing to do with the foreign key; it could be any
>> table in the query.

> I don't follow. How could it have 'nothing to do with the foreign key'? 

Precisely that: clauselist_join_selectivity iterates over every table in
the join as a potential foreignrel, and you explicitly refuse to check
that that table has anything to do with the foreign key's referenced side.

Here's an example:

drop table if exists t1, t2, t3;
create table t1(f1 int, f2 int, primary key(f1,f2));
insert into t1 select x,x from generate_series(1,100000) x;
create table t2 (f1 int, f2 int, foreign key(f1,f2) references t1);
insert into t2 select (x+10)/10,(x+10)/10 from generate_series(1,100000) x;
create table t3(f1 int, f2 int);
insert into t3 select (x+10)/10,(x+10)/10 from generate_series(1,100000) x;
analyze t1;
analyze t2;
analyze t3;
explain select * from t1 join t2 on t1.f1=t2.f1 and t1.f2=t2.f2;
explain select * from t3 join t2 on t3.f1=t2.f1 and t3.f2=t2.f2;

9.5 estimates the first query as producing 1 row, the second as producing
100 rows.  Both of those estimates suck, of course, but it's what you'd
expect from treating the joinclauses as uncorrelated.  HEAD estimates them
both at 100000 rows, which is correct for the first query but a pure
flight of fancy for the second query.  Tracing through this shows that
it's accepting t2's FK as a reason to make the estimate, even though
t1 doesn't even appear in that query!

If we made the modifications previously discussed to throw away FKs
that don't connect two tables mentioned in the query, this particular
example would stop failing.  But it would only take a three-table
query involving t1, t2, and t3 to bring the bug back to life, whether
or not the join conditions actually match the FK.
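For concreteness, one hypothetical shape of such a query (reusing the
t1/t2/t3 definitions above) would be

explain select *
from t1
join t2 on t1.f1 = t2.f1 and t1.f2 = t2.f2
join t3 on t3.f1 = t2.f1 and t3.f2 = t2.f2;

where t1 now does appear in the query, so a simple "is the referenced
table present" test passes, yet the t3 = t2 quals still have nothing to
do with the FK.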
        regards, tom lane



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tom Lane
Дата:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
> When it comes to improving multiple (multi-column) foreign keys, I think 
> it may get way more complicated that it might seem. What if the foreign 
> keys overlap, for example? Or what if the keys go in opposite directions 
> (cycle). And so on ...

I think you can group all FKs referencing the same table and discard
all their matched join clauses in favor of a single 1/N estimate
(and when I say "discard", that means you don't match those clauses
against later FKs, which should take care of the reciprocal-FK issue).
This is clearly correct if no nulls are involved.  We need to make some
estimate of how much to de-rate that figure for nulls, but I don't see
that it's any harder than what's already required for a single multicol
FK.
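A minimal sketch of the case I mean (hypothetical tables, analogous to
the earlier examples):

create table b (k int primary key, y int unique);
create table a (j int references b (k),
                x int references b (y));

-- both quals below match an FK referencing b; discard both and use a
-- single 1/N estimate based on the number of rows in b
explain select * from a join b on a.j = b.k and a.x = b.y;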
        regards, tom lane



Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tomas Vondra
Дата:
On 06/06/2016 07:40 PM, Tom Lane wrote:
> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
>> On 06/06/2016 06:15 PM, Tom Lane wrote:
>>> This checks that you found a joinclause mentioning foreignrel.  But
>>> foreignrel need have nothing to do with the foreign key; it could be any
>>> table in the query.
>
>> I don't follow. How could it have 'nothing to do with the foreign key'?
>
> Precisely that: clauselist_join_selectivity iterates over every table in
> the join as a potential foreignrel, and you explicitly refuse to check
> that that table has anything to do with the foreign key's referenced side.
>
> Here's an example:
>
> drop table if exists t1, t2, t3;
> create table t1(f1 int, f2 int, primary key(f1,f2));
> insert into t1 select x,x from generate_series(1,100000) x;
> create table t2 (f1 int, f2 int, foreign key(f1,f2) references t1);
> insert into t2 select (x+10)/10,(x+10)/10 from generate_series(1,100000) x;
> create table t3(f1 int, f2 int);
> insert into t3 select (x+10)/10,(x+10)/10 from generate_series(1,100000) x;
> analyze t1;
> analyze t2;
> analyze t3;
> explain select * from t1 join t2 on t1.f1=t2.f1 and t1.f2=t2.f2;
> explain select * from t3 join t2 on t3.f1=t2.f1 and t3.f2=t2.f2;
>
> 9.5 estimates the first query as producing 1 row, the second as producing
> 100 rows.  Both of those estimates suck, of course, but it's what you'd
> expect from treating the joinclauses as uncorrelated.  HEAD estimates them
> both at 100000 rows, which is correct for the first query but a pure
> flight of fancy for the second query.  Tracing through this shows that
> it's accepting t2's FK as a reason to make the estimate, even though
> t1 doesn't even appear in that query!

D'oh!

Clearly we need to check confrelid somewhere, not just varno/varattno. I 
think this should do the trick
rte = planner_rt_fetch(var->varno, root);

if (foreignrel->relid == var->varno &&
    fkinfo->confrelid == rte->relid &&
    fkinfo->confkeys[i] == var->varattno)
    foundvarmask |= 1;


It seems to resolve the issue (the estimate is now just 100), but 
I'm not going to claim it's 100% correct.
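
(To double-check, one can re-run the second of Tom's queries from
upthread,

explain select * from t3 join t2 on t3.f1 = t2.f1 and t3.f2 = t2.f2;

which with the confrelid check in place should drop back to the ~100-row
estimate instead of 100000.)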

In any case, thanks for pointing this out.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tom Lane
Дата:
Noah Misch <noah@leadboat.com> writes:
> On Fri, Jun 03, 2016 at 10:32:24AM -0400, Noah Misch wrote:
>> IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 9.6 open item is long past due
>> for your status update.  Please reacquaint yourself with the policy on open
>> item ownership[1] and then reply immediately.  If I do not hear from you by
>> 2016-06-04 15:00 UTC, I will transfer this item to release management team
>> ownership without further notice.
>> [1] http://www.postgresql.org/message-id/20160527025039.GA447393@tornado.leadboat.com

> This PostgreSQL 9.6 open item now needs a permanent owner.  I want PostgreSQL
> to have this planner functionality, but I cannot both give it the attention it
> needs and meet commitments predating this open item.  Would any other
> committer like to take ownership?  If this role interests you, please read
> this thread and the policy linked above, then send an initial status update
> bearing a date for your subsequent status update.  If the item does not have a
> permanent owner by 2016-06-07 22:00 UTC, I will resolve the item by reverting
> commits 68d704e and 137805f.

The state of play here seems to be that Tomas is willing to have a go at
rewriting the patch per my suggestions, but Simon has not shown any
indications of responding in a timely fashion; and time is now of the
essence.

I am willing to take ownership of this item; but if I do, I will start
by reverting the aforementioned commits and their followups.  I do not
think that very much of what's there now will survive without significant
changes, and to my taste it will be easier to review a rewritten patch
de novo.  If Tomas is able to produce a rewritten patch within a week
(by 6/14), I will undertake to review it with an eye to committing by
the end of next week.  If we are unable to produce something satisfactory
before beta2, the feature needs to be postponed into the next devel cycle.
        regards, tom lane



Re: Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Robert Haas
Дата:
On Tue, Jun 7, 2016 at 10:20 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Noah Misch <noah@leadboat.com> writes:
>> On Fri, Jun 03, 2016 at 10:32:24AM -0400, Noah Misch wrote:
>>> IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 9.6 open item is long past due
>>> for your status update.  Please reacquaint yourself with the policy on open
>>> item ownership[1] and then reply immediately.  If I do not hear from you by
>>> 2016-06-04 15:00 UTC, I will transfer this item to release management team
>>> ownership without further notice.
>>> [1] http://www.postgresql.org/message-id/20160527025039.GA447393@tornado.leadboat.com
>
>> This PostgreSQL 9.6 open item now needs a permanent owner.  I want PostgreSQL
>> to have this planner functionality, but I cannot both give it the attention it
>> needs and meet commitments predating this open item.  Would any other
>> committer like to take ownership?  If this role interests you, please read
>> this thread and the policy linked above, then send an initial status update
>> bearing a date for your subsequent status update.  If the item does not have a
>> permanent owner by 2016-06-07 22:00 UTC, I will resolve the item by reverting
>> commits 68d704e and 137805f.
>
> The state of play here seems to be that Tomas is willing to have a go at
> rewriting the patch per my suggestions, but Simon has not shown any
> indications of responding in a timely fashion; and time is now of the
> essence.
>
> I am willing to take ownership of this item; but if I do, I will start
> by reverting the aforementioned commits and their followups.  I do not
> think that very much of what's there now will survive without significant
> changes, and to my taste it will be easier to review a rewritten patch
> de novo.  If Tomas is able to produce a rewritten patch within a week
> (by 6/14), I will undertake to review it with an eye to committing by
> the end of next week.  If we are unable to produce something satisfactory
> before beta2, the feature needs to be postponed into the next devel cycle.

That sounds pretty fair to me.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Noah Misch
Дата:
On Tue, Jun 07, 2016 at 10:20:44AM -0400, Tom Lane wrote:
> Noah Misch <noah@leadboat.com> writes:
> > On Fri, Jun 03, 2016 at 10:32:24AM -0400, Noah Misch wrote:
> >> IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 9.6 open item is long past due
> >> for your status update.  Please reacquaint yourself with the policy on open
> >> item ownership[1] and then reply immediately.  If I do not hear from you by
> >> 2016-06-04 15:00 UTC, I will transfer this item to release management team
> >> ownership without further notice.
> >> [1] http://www.postgresql.org/message-id/20160527025039.GA447393@tornado.leadboat.com
> 
> > This PostgreSQL 9.6 open item now needs a permanent owner.  I want PostgreSQL
> > to have this planner functionality, but I cannot both give it the attention it
> > needs and meet commitments predating this open item.  Would any other
> > committer like to take ownership?  If this role interests you, please read
> > this thread and the policy linked above, then send an initial status update
> > bearing a date for your subsequent status update.  If the item does not have a
> > permanent owner by 2016-06-07 22:00 UTC, I will resolve the item by reverting
> > commits 68d704e and 137805f.
> 
> The state of play here seems to be that Tomas is willing to have a go at
> rewriting the patch per my suggestions, but Simon has not shown any
> indications of responding in a timely fashion; and time is now of the
> essence.
> 
> I am willing to take ownership of this item; but if I do, I will start
> by reverting the aforementioned commits and their followups.  I do not
> think that very much of what's there now will survive without significant
> changes, and to my taste it will be easier to review a rewritten patch
> de novo.  If Tomas is able to produce a rewritten patch within a week
> (by 6/14), I will undertake to review it with an eye to committing by
> the end of next week.  If we are unable to produce something satisfactory
> before beta2, the feature needs to be postponed into the next devel cycle.

Through the lens of open item procedure, I see no defects in your offer of
ownership.  I have updated the open items sheet to reflect you being the new
owner.  Thanks.

My personal opinion is that the community should not undertake a "rewrite" of
a nontrivial feature after freeze.  The fact that a progenitor was present in
the tree at freeze doesn't make the rewrite much less risky than a brand new
feature.  So, I suggest that you instead revert the patches and review that
rewrite for next CommitFest.  Even so, I am okay with your current plan.

Thanks,
nm



Re: Re: pg9.6 segfault using simple query (related to use fk for join estimates)

От
Tom Lane
Дата:
Noah Misch <noah@leadboat.com> writes:
> My personal opinion is that the community should not undertake a "rewrite" of
> a nontrivial feature after freeze.  The fact that a progenitor was present in
> the tree at freeze doesn't make the rewrite much less risky than a brand new
> feature.  So, I suggest that you instead revert the patches and review that
> rewrite for next CommitFest.  Even so, I am okay with your current plan.

TBH, I think the odds are very good that that's how it will end up being;
my standards for committing a large patch a few days before beta2 are
going to be quite high.  But I feel it's only fair to offer Tomas the
chance to get something in this year not next year.  Also, even though
this can be expected to be heavily-rewritten code, the fact that there
was a progenitor makes it less risky than a truly new patch would be.
        regards, tom lane