Re: Adaptive query optimization

From: Rafia Sabih
Subject: Re: Adaptive query optimization
Date:
Msg-id: CA+FpmFf9MNLBUG3LUCLgt3h5BLCZfCNer0cCzceL-1Z3yPkDzA@mail.gmail.com
In reply to: Re: Adaptive query optimization  (Kuntal Ghosh <kuntalghosh.2007@gmail.com>)
Responses: Re: Adaptive query optimization  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List: pgsql-hackers
On Thu, 13 Jun 2019 at 06:07, Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:
>
> On Thu, Jun 13, 2019 at 5:49 AM Tomas Vondra
> <tomas.vondra@2ndquadrant.com> wrote:
> >
> > For example, we might require 1000 samples for a given node (say, scan
> > with some quals), before we start using it to tweak the estimates. Once
> > we reach that number of samples, we can continue collecting more data,
> > and once in a while update the correction. This would require some care,
> > of course, because we need to know what coefficient was used to compute
> > the estimate, but that's solvable by having some sort of epoch.
> >
> > Of course, the question is what number should we use, but overall this
> > would be a much lower-overhead way to do the learning.
> >
> > Unfortunately, the learning as implemented in the patch does not allow
> > this. It pretty much requires a dedicated learning phase with a generated
> > workload, in a single process.
> >
> > But I think that's solvable, assuming we:
> >
> > 1) Store the data in shared memory, instead of a file. Collect data from
> > all backends, instead of just a single one, etc.
> >
> > 2) Make the decision for individual entries, depending on how many
> > samples we have for it.
> >
> Sounds good. I was trying to think whether we can maintain a running
> coefficient. That way, we don't have to store the samples. But
> calculating a running coefficient for more than two variables (with
> some single-pass algorithm) seems to be a hard problem. Moreover, it
> can introduce significant misestimation. Your suggested approach works
> better.
>
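Just to make sure I understand the shape of this: a rough sketch of what
such a per-node entry in shared memory might look like (purely
illustrative; the names and the 1000-sample threshold are only examples,
not taken from the patch):

/*
 * Illustrative only: one entry per plan node "signature" (say, the
 * relation plus a hash of its quals), kept in shared memory and updated
 * by all backends.
 */
typedef struct AqoCorrectionEntry
{
    uint64      node_signature; /* identifies the scan/join and its quals */
    uint64      nsamples;       /* (estimate, actual) pairs seen so far */
    double      coefficient;    /* actual/estimate correction currently used */
    uint32      epoch;          /* bumped whenever coefficient is recomputed,
                                 * so we know which coefficient produced a
                                 * given estimate */
} AqoCorrectionEntry;

#define AQO_MIN_SAMPLES 1000    /* ignore the correction until then */
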
> > I suggest we try to solve one issue at a time. I agree advising which
> > indexes to create is a very interesting (and valuable) thing, but I see
> > it as an extension of the AQO feature. That is, basic AQO (tweaking row
> > estimates) can work without it.
> >
> +1
>
> > >> Now, if someone uses this same scan in a join, like for example
> > >>
> > >>    SELECT * FROM t1 JOIN t2 ON (t1.id = t2.id)
> > >>     WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)
> > >>       AND (t2.x = ? AND t2.y = ?)
> > >>
> > >> then we can still apply the same correction to the t1 scan (I think).
> > >> But then we can also collect data for the t1-t2 join, and compute a
> > >> correction coefficient in a similar way. It requires a bit of care
> > >> because we need to compensate for misestimates of inputs, but I think
> > >> that's doable.
> > >>
> > >That'll be interesting work. For the above query, we can definitely
> > >calculate the correction coefficient of the t1-t2 join given that
> > >(t1.a = ? AND t1.b = ? AND t1.c < ?) and
> > >(t2.x = ? AND t2.y = ?) are true. But I'm not sure how we can
> > >extrapolate that value for the t1-t2 join.
> >
> > I'm not sure I see the problem? Essentially, we need to know the sizes
> > of the join inputs, i.e.
> >
> >     t1 WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)
> >
> >     t2 WHERE (t2.x = ? AND t2.y = ?)
> >
> > (which we know, and we know how to correct the estimate), and then the
> > selectivity of the join condition. Which we also know.
> >
> > Obviously, there's a chance those parts (clauses at the scan / join
> > level) are correlated, which could make this less accurate.
> This is exactly what my concern is. The base predicate selectivities
> of t1 and t2 should have an impact on the calculation of the
> correction coefficient. If those selectivities are low, the
> misestimation (which is actual/estimate) should not affect the t1-t2
> join correction coefficient much.
>
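FWIW, the compensation for the input misestimates could perhaps look
something like the function below; this is only how I read the idea, and
all the names here are made up:

/*
 * Purely illustrative: learn a correction coefficient for the t1-t2 join
 * while compensating for misestimates of its inputs.  est_t1/est_t2 are
 * the planner's original row estimates, c_t1/c_t2 the per-scan corrections
 * we have already learned, join_sel the estimated join selectivity.
 */
static double
join_correction(double est_t1, double c_t1,
                double est_t2, double c_t2,
                double join_sel, double actual_join_rows)
{
    double      corrected_t1 = c_t1 * est_t1;   /* corrected input sizes */
    double      corrected_t2 = c_t2 * est_t2;
    double      expected = corrected_t1 * corrected_t2 * join_sel;

    if (expected <= 0)
        return 1.0;             /* nothing sensible to learn from */
    return actual_join_rows / expected;
}
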
Interesting discussion. Talking of query optimization techniques and
challenges, isn't selectivity estimation the biggest challenge there?
So instead of working on optimizing a process that has been discussed
for a long time, how about skipping it altogether? This reminds me of
some work I came across a while back [1]. Basically, the idea is not
to spend any energy on estimating the selectivities, but to get on
with the execution instead. More precisely, a set of plans is chosen
a priori, one for each of a number of selectivity values, and at
execution time they are run one by one, starting from the
lowest-selectivity plan, until the query execution completes. It
might sound like too much work, but it isn't: there are theoretical
guarantees that bound the worst-case execution time. The trick is in
choosing the plan set and in switching at execution time. Another
nice property is that it works smoothly for join predicates as well.

Since we are talking about this problem here, I thought it might be a
good idea to shed some light on such an approach and see if there is
some interesting trick we might use.
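
To make the idea a bit more concrete, here is a very rough sketch of the
execution-time switching loop as I understand it from the paper; the
structures and the executor hook are entirely made up for illustration:

/*
 * "Plan bouquet" execution sketch: plans are kept ordered by the row count
 * (selectivity) they were optimized for.  Each plan gets a row budget; if
 * execution crosses that budget, the plan is abandoned and the next,
 * higher-selectivity plan is tried.
 */
typedef struct BouquetPlan
{
    PlannedStmt *plan;          /* plan optimized assuming assumed_rows */
    double      assumed_rows;   /* selectivity step this plan covers */
} BouquetPlan;

static void
run_bouquet(BouquetPlan *bouquet, int nplans)
{
    int         i;

    for (i = 0; i < nplans; i++)
    {
        /*
         * Hypothetical executor entry point that stops once the row budget
         * is crossed and reports whether the query finished.
         */
        if (execute_with_row_limit(bouquet[i].plan, bouquet[i].assumed_rows))
            return;             /* completed within the budget, we are done */

        /*
         * Otherwise the actual selectivity is above this step; discard the
         * partial work and fall through to the next plan.
         */
    }
}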

[1] https://dsl.cds.iisc.ac.in/publications/conference/bouquet.pdf

-- 
Regards,
Rafia Sabih


