Re: disfavoring unparameterized nested loops

From: Peter Geoghegan
Subject: Re: disfavoring unparameterized nested loops
Date:
Msg-id: CAH2-WzmZr1VM8Cpnog-Kj5Hfcb=0LiSBoHSVPikzEUUNpWXBdg@mail.gmail.com
In response to: Re: disfavoring unparameterized nested loops  (David Rowley <dgrowleyml@gmail.com>)
List: pgsql-hackers
On Tue, Jun 15, 2021 at 6:15 PM David Rowley <dgrowleyml@gmail.com> wrote:
> On Wed, 16 Jun 2021 at 12:11, Peter Geoghegan <pg@bowt.ie> wrote:
> > Whether or not we throw the plan back at the planner or "really change
> > our minds at execution time" seems like a distinction without a
> > difference.
>
> What is "really change our minds at execution time"?  Is that switch
> to another plan without consulting the planner?

I don't know what it means. That was my point -- it all seems like
semantics to me.

The strong separation between plan time and execution time isn't
necessarily a good thing, at least as far as solving some of the
thorniest problems goes. It seems obvious to me that cardinality
estimation is the main problem, and that the most promising solutions
are all fundamentally about using execution time information to change
course. Some problems with planning just can't be solved at plan time
-- no model can ever be smart enough. Better to focus on making query
execution more robust, perhaps by totally changing the plan when it is
clearly wrong. But also by using more techniques that we've
traditionally thought of as execution time techniques (e.g. role
reversal in hash join). The distinction is blurry to me.
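
To make that concrete, here is a sketch of the kind of case I have in
mind (hypothetical tables, invented numbers -- just an illustration of
the general shape of the problem):

    -- Correlated predicates make the planner multiply selectivities
    -- that are not independent, so the join's row estimate collapses
    -- and a nested loop looks artificially cheap.
    EXPLAIN (ANALYZE, TIMING OFF)
    SELECT *
    FROM orders o
    JOIN order_lines l ON l.order_id = o.order_id
    WHERE o.status = 'shipped'
      AND o.priority = 'high';

    -- Illustrative output shape:
    --   Nested Loop (... rows=3 ...) (actual rows=1403522 loops=1)

Long before such a query finishes, the executor already knows that the
estimate was badly wrong -- but nothing in the current model lets it
act on that knowledge.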

There are no doubt practical software engineering issues with this --
separation of concerns and whatnot. But it seems premature to go into
that now.

> The new information might cause the join order to
> completely change. It might not be as simple as swapping a Nested Loop
> for a Hash Join.

I agree that it might not be that simple at all. I think that Robert
is saying that this is one case where it really does appear to be that
simple, and so we really can expect to benefit from a simple plan-time
heuristic that works within the confines of the current model. Why
wouldn't we just take that easy win, once the general idea has been
validated some more? Why let the perfect be the enemy of the good?
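
For anyone following along, the two plan shapes in question look
roughly like this (hypothetical tables, abbreviated EXPLAIN output --
a sketch of the distinction, not of Robert's actual heuristic):

    -- Parameterized: the inner index scan takes the current outer
    -- row's value, so an outer-side misestimate costs one cheap
    -- index probe per unexpected row.
    Nested Loop
      ->  Seq Scan on orders o
      ->  Index Scan using order_lines_order_id_idx on order_lines l
            Index Cond: (order_id = o.order_id)

    -- Unparameterized: the inner side doesn't depend on the outer
    -- row at all, so the same misestimate multiplies the damage.
    Nested Loop
      Join Filter: (l.order_id = o.order_id)
      ->  Seq Scan on orders o
      ->  Materialize
            ->  Seq Scan on order_lines l

The first plan degrades gracefully when the outer side turns out to be
bigger than expected; the second one does not, which is the whole point
of penalizing it up front.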

I have perhaps muddied the waters by wading into the more general
question of robust execution, the inherent uncertainty with
cardinality estimation, and so on. Robert really didn't seem to be
talking about that at all (though it is clearly related).

> > Either way we're changing our minds about the plan based
> > on information that is fundamentally execution time information, not
> > plan time information. Have I missed something?
>
> I don't really see why you think the number of rows that a given join
> might produce is execution information.

If we're 100% sure a join will produce at least n rows because we
executed it (with the express intention of actually doing real query
processing that returns rows to the client), and it already produced n
rows, then what else could it be called? Why isn't it that simple?

> It's exactly the sort of
> information the planner needs to make a good plan. It's just that
> today we get that information from statistics. Plenty of other DBMSs
> make decisions from sampling.

> FWIW, we do already have minimalist
> sampling in get_actual_variable_range().

I know, but that doesn't seem all that related -- it almost seems like
the opposite idea. It isn't the executor balking when it notices that
the plan is visibly wrong during execution, in some important way.
It's more like the planner using the executor to get information about
an index that is well within the scope of what we think of as plan
time.

To some degree the distinction gets really blurred by nodes like hash
join, where some important individual decisions are already delayed
until execution time. It's really unclear precisely where that ends
and something more like partial or wholesale replanning begins. I
don't know how to talk about it without it being confusing.
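
A mundane example of what I mean: hash join will already increase its
number of batches at execution time when the build side turns out to
be larger than predicted, and EXPLAIN ANALYZE makes that visible
(invented numbers, for illustration only):

    Hash Join
      Hash Cond: (l.order_id = o.order_id)
      ->  Seq Scan on order_lines l
      ->  Hash
            Buckets: 65536 (originally 65536)  Batches: 16 (originally 1)
            Memory Usage: 3520kB
            ->  Seq Scan on orders o

Going from 1 batch to 16 is a plan-time assumption being revised
mid-execution. Whether we call that execution, or a very small amount
of replanning, is exactly the blurry part.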

> I'm just trying to highlight that I don't think overloading nodes is a
> good path to go down.  It's not a sustainable practice. It's a path
> towards just having a single node that does everything. If your
> suggestion was not serious then there's no point in discussing it
> further.

As I said, it was a way of framing one particular issue that Robert is
concerned about.

-- 
Peter Geoghegan


