Re: Adaptive query optimization

From: Tomas Vondra
Subject: Re: Adaptive query optimization
Date:
Msg-id: 20190613001950.2h6dnjugnu3rlmqh@development
In response to: Re: Adaptive query optimization  (Kuntal Ghosh <kuntalghosh.2007@gmail.com>)
Responses: Re: Adaptive query optimization  (Kuntal Ghosh <kuntalghosh.2007@gmail.com>)
List: pgsql-hackers
On Wed, Jun 12, 2019 at 06:14:41PM +0530, Kuntal Ghosh wrote:
>Hello,
>
>On Wed, Jun 12, 2019 at 5:06 PM Konstantin Knizhnik
><k.knizhnik@postgrespro.ru> wrote:
>> On 12.06.2019 0:43, Tomas Vondra wrote:
>> I don't think "learning phase" is an issue, in fact I think that's
>> something we need to do - it ensures we have enough data to make good
>> decisions.
>>
>> What is wrong with the learning phase is that it requires some DBA
>> assistance: somebody should determine when to start learning, provide a
>> relevant workload and determine when learning is finished.
>> One of the most recent trends in DBMSes is autonomous databases with zero
>> administration effort. It is especially important for clouds. And one of
>> the main advantages of AQO is that it allows optimizing queries without
>> human interaction.
>>
>> But unfortunately I really do not know how to avoid the learning phase,
>> especially if we want to run queries at a replica.
>>
>Avoiding a learning phase in an AQO implementation sounds like an oxymoron. :-)
>Perhaps, you meant how we can minimize the effort in learning phase. A
>learning phase has its own complications - like
>a. deciding the number of iterations needed to achieve a certain
>kind of confidence
>b. which parameters to tune (are the existing parameters enough?)
>c. deciding the cost model
>Coming up with answers for these things is pretty hard.
>

I kinda agree with both of you - the learning phase may be a significant
burden. But I don't think we can get rid of it entirely - we simply need
to collect the data to learn from somehow. But we should make it as
unobtrusive and easy to perform as possible.

My plan was to allow continuous learning during regular operation, i.e.
from workload generated by the application. So instead of requiring a
separate learning phase, we'd require a certain number of samples for a
given node before we start using it to correct estimates.

For example, we might require 1000 samples for a given node (say, a scan
with some quals) before we start using it to tweak the estimates. Once we
reach that number of samples, we can continue collecting more data, and
once in a while update the correction. This would require some care,
of course, because we need to know what coefficient was used to compute
the estimate, but that's solvable by having some sort of epoch.

Of course, the question is what number should we use, but overall this
would be a much lower-overhead way to do the learning.
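
Just to illustrate the kind of bookkeeping I have in mind (all the names
below are made up, this is not what the patch currently does), the per-node
feedback and the sample threshold could look roughly like this:

   -- one row per (node signature, epoch) of collected feedback
   CREATE TABLE aqo_node_feedback (
       node_hash    bigint,            -- hash of node type + quals
       epoch        int,               -- which correction was in effect
       nsamples     bigint,            -- number of executions observed
       log_err_sum  double precision,  -- sum of ln(actual / estimate)
       PRIMARY KEY (node_hash, epoch)
   );

   -- trust the correction only once we have enough samples; the geometric
   -- mean of (actual / estimate) is exp() of the average log error
   SELECT node_hash, exp(log_err_sum / nsamples) AS correction
     FROM aqo_node_feedback
    WHERE epoch = 42               -- whatever epoch is currently active
      AND nsamples >= 1000;        -- the sample threshold discussed above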

Unfortunately, the learning as implemented in the patch does not allow
this. It pretty much requires a dedicated learning phase with a generated
workload, in a single process.

But I think that's solvable, assuming we:

1) Store the data in shared memory, instead of a file. Collect data from
all backends, instead of just a single one, etc.

2) Make the decision for individual entries, depending on how many
samples we have for it.

>>
>> I think that depends - some machine learning approaches are not that
>> bad. But I think there's a more serious issue - explainability. We need
>> a solution where we can explain/justify why it makes some decisions. I
>> really don't want a black box that produces numbers that you just need
>> to take at face value.
>>
>> The good thing is that the simpler the method, the less expensive and
>> more explainable it is.
>+1
>
>>
>> I tried to create a much simpler version of AQO based on the auto_explain extension.
>> This extension provides all the necessary infrastructure to analyze statements with long execution time.
>> I have added two new modes to auto_explain:
>> 1. Auto generation of multicolumn statistics for variables used in clauses with large estimation error.
>>
>>
>> Interesting! I probably wouldn't consider this part of adaptive query
>> optimization, but it probably makes sense to make it part of this. I
>> wonder if we might improve this to also suggest "missing" indexes?
>>
>I like this part of the implementation. I also agree that this can be
>used to come up with good hypothetical index suggestions. But, it
>needs some additional algorithms. For example, after analysing a set
>of queries, we can come up with a minimal set of indexes that need to
>be created to minimize the total cost. I've not checked the internal
>implementation of hypopg. Probably, I should do that.
>

I suggest we try to solve one issue at a time. I agree advising which
indexes to create is a very interesting (and valuable) thing, but I see
it as an extension of the AQO feature. That is, basic AQO (tweaking row
estimates) can work without it.

>>
>> I think that it should be the next step of adaptive query optimization:
>> - autogeneration of indexes
>> - auto adjustment of optimizer cost parameters (cpu cost, random/sequential page access cost,...)
>AFAIK, the need for adjustment of cost parameters is highly dominated
>by solving the selectivity estimation errors. But of course, you can
>argue with that.

That's probably true. But more to the point, it makes little sense to
tune cost parameters until the row estimates are fairly accurate. So I
think we should focus on getting that part working first, and then maybe
look into tuning cost parameters when this part works well enough.

Furthermore, how would we even tune cost parameters? I mean, it
seems much harder than correcting row estimates, because the feedback
seems much less reliable. For row estimates we know the actual row
count, but for cost parameters we only have the total query runtime.
Which is somehow correlated, but it seems to be rather noisy (e.g., due to
sharing resources with other stuff on the same system), and it's unclear
how to map the duration to individual nodes (which may be using very
different costing formulas).

>
>>
>> Right. But I think I might have an idea how to address (some of) this.
>>
>> As I already mentioned, I was experimenting with something similar,
>> maybe two or three years ago (I remember chatting about it with Teodor
>> at pgcon last week). I was facing the same issues, and my approach was
>> based on hooks too.
>>
>> But my idea was not to track stats for a plan as a whole, but instead
>> decompose it into individual nodes, categorized into three basic groups -
>> scans, joins and aggregations. And then apply this extracted information
>> to other plans, with "matching" nodes.
>>
>> For example, let's consider a simple "single-scan" query
>>
>>    SELECT * FROM t1 WHERE a = ? AND b = ? AND c < ?;
>>
>> Now, if you execute this enough times (say, 100x or 1000x), tracking
>> the estimates and actual row counts, you may then compute the average
>> misestimate (maybe a geometric mean would be more appropriate here?):
>>
>>    AVG(actual/estimate)
>>
>>
>> Certainly stats should be collected for each plan node, not for the whole plan.
>> And it is done now in Oleg's and my implementation.
>> Oleg is using a gradient descent method. I first tried to calculate the
>> average, but then switched to building something like a "histogram",
>> where the bin is determined as log10 of the estimated number of rows.
>>
>I think maintaining a "histogram" sounds good. I've read a paper
>called "Self-tuning Histograms: Building Histograms Without
>Looking at Data" which tries to do something similar[1].
>

Yeah. As long as we know how to compute the correction coefficient, it
does not matter how exactly we store the data (array of values,
histogram, something else).

But I think we should keep this simple, so the self-tuning histograms
may be overkill here.
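
FWIW the log10-binned variant Konstantin describes could be fairly simple
too - something like this sketch (table and column names are again made up,
assuming per-execution samples of estimated vs. actual row counts):

   SELECT node_hash,
          floor(log(greatest(est_rows, 1)))::int AS est_magnitude,
          exp(avg(ln(greatest(act_rows, 1)::float8
                     / greatest(est_rows, 1))))  AS correction,
          count(*)                               AS nsamples
     FROM aqo_node_samples
    GROUP BY 1, 2;

The bucketing only changes how many coefficients we keep per node; the
correction formula itself stays the same.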

>>
>> and if this is significantly different from 1.0, then we can say there's
>> a systemic misestimate, and we can use this as a correction coefficient
>> when computing the scan estimate. (And we need to be careful about
>> collecting new data, because the estimates will include this correction.
>> But that can be done by tracking the "epoch" of the plan.)
>>
>> Now, if someone uses this same scan in a join, like for example
>>
>>    SELECT * FROM t1 JOIN t2 ON (t1.id = t2.id)
>>     WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)
>>       AND (t2.x = ? AND t2.y = ?)
>>
>> then we can still apply the same correction to the t1 scan (I think).
>> But then we can also collect data for the t1-t2 join, and compute a
>> correction coefficient in a similar way. It requires a bit of care
>> because we need to compensate for misestimates of inputs, but I think
>> that's doable.
>>
>That'll be interesting work. For the above query, we can definitely
>calculate the correction coefficient of the t1-t2 join given (t1.a = ? AND
>t1.b = ? AND t1.c < ?) and
>(t2.x = ? AND t2.y = ?) are true. But, I'm not sure how we can
>extrapolate that value for the t1-t2 join.

I'm not sure I see the problem? Essentially, we need to know the sizes
of the join inputs, i.e.

    t1 WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)

    t2 WHERE (t2.x = ? AND t2.y = ?)

(which we know, and we know how to correct the estimate), and then the
selectivity of the join condition. Which we also know.

Obviously, there's a chance those parts (clauses at the scan / join
level) are correlated, which could make this less accurate. But again,
this is about systemic estimation errors - if all queries are affected
by this, then the correction will reflect that.
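
To make that concrete with made-up numbers: say the t1 scan is typically
under-estimated 4x, the t2 scan over-estimated 2x, and the join condition
has selectivity 0.001. Then the corrected pieces compose like this:

   SELECT (1000 * 4.0)     -- t1 scan: planner estimate * learned correction
        * (500 * 0.5)      -- t2 scan: planner estimate * learned correction
        * 0.001            -- selectivity of (t1.id = t2.id)
          AS corrected_join_estimate;    -- => 1000 rows

Any correction we then learn for the t1-t2 join itself is relative to these
already-corrected inputs, which is the "compensate for misestimates of
inputs" bit mentioned above.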


regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 


