Re: What about utility to calculate planner cost constants?

From: Greg Stark
Subject: Re: What about utility to calculate planner cost constants?
Date:
Msg-id: 87acowcmut.fsf@stark.xeocode.com
In reply to: Re: What about utility to calculate planner cost constants?  (Josh Berkus <josh@agliodbs.com>)
Responses: Re: What about utility to calculate planner cost constants?
List: pgsql-performance
Josh Berkus <josh@agliodbs.com> writes:

> > Otherwise it could just collect statements, run EXPLAIN ANALYZE for all
> > of them and then play with planner cost constants to get the estimated
> > values as close as possible to actual values. Something like Goal Seek
> > in Excel, if you pardon my reference to MS :).
>
> That's not really practical.   There are currently 5 major query tuning
> parameters, not counting the memory adjustments which really can't be left
> out.  You can't realistically test all combinations of 6 variables.

I don't think it would be very hard at all, actually.

It's just a linear algebra problem: a bunch of independent variables and a
system of equations. Solving for values of all of them is a straightforward
problem.

Of course in reality these variables aren't truly independent, because the
costing model isn't perfect. But that wouldn't be a problem; it would just
reduce the accuracy of the results.

What's needed is for the EXPLAIN plan to total up the cost penalties
independently. So the result would be something like

1000 * random_page_cost + 101 * sequential_page_cost + 2000 * index_tuple_cost
+ ...

In other words, a tuple like <1000, 101, 2000, ...>

And EXPLAIN ANALYZE would produce the above tuple along with the observed
execution time.
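
Restating that in matrix form: stack one such tuple per query as the rows of
a matrix A, and you get an overdetermined linear system

    A x ~= t

where x is the vector of cost constants being solved for and t is the vector
of observed runtimes.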

Some program would then have to collect these tuples from the log or stats
data, assemble them into a large linear system, and solve for the values that
minimize the divergence from the observed times.
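
A minimal sketch of that fitting step in Python with numpy, assuming the cost
tuples and timings have already been extracted from the logs (every name and
number below is made up for illustration; this is an ordinary least-squares
fit, not a finished tool):

    import numpy as np

    # One row per EXPLAIN ANALYZE run: the cost tuple
    # <random_page, sequential_page, index_tuple> for that query.
    # All values here are hypothetical.
    A = np.array([
        [1000.0,  101.0, 2000.0],
        [  50.0,  900.0,  120.0],
        [ 300.0,   40.0, 5000.0],
        [  10.0, 2500.0,   80.0],
    ])

    # Observed execution time of each query, in milliseconds.
    t = np.array([1450.0, 980.0, 2100.0, 2550.0])

    # Least-squares solve: per-unit costs (in ms) that minimize the
    # squared divergence between predicted and observed times.
    costs, residuals, rank, _ = np.linalg.lstsq(A, t, rcond=None)

    # Normalize so sequential_page_cost comes out as 1, matching the
    # planner's current convention.
    print(costs / costs[1])

With enough distinct query shapes the system is overdetermined and the fit
also yields a residual, which is a useful sanity check on how well the cost
model explains the observed times.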



(Cost penalties are currently normalized so that sequential_page_cost is 1.
That convention could be kept, or it could be changed to normalize against an
expected 1ms.)

(Also, EXPLAIN ANALYZE currently has timing overhead that makes this
impractical. Ideally it could subtract out its own overhead so the solutions
would be accurate enough to be useful.)

--
greg
