Re: A costing analysis tool

From: Kevin Grittner
Subject: Re: A costing analysis tool
Date:
Msg-id: s34e6690.082@gwmta.wicourts.gov
In reply to: A costing analysis tool  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Replies: Re: A costing analysis tool  (Martijn van Oosterhout <kleptog@svana.org>)
List: pgsql-hackers
Thanks, Josh, for the feedback.

It sounds as though you are more focused on picking up costing
problems which happen during production -- which is clearly
valuable, but addresses a somewhat different set of needs than
I was looking at.  That said, it seems like there is potential to share
significant code between the two techniques.  We'll have to see if
we can work that out.

I didn't want to broach the subject of the programming language
for this at the early conceptual stages, but if we're talking about
code sharing, it can't wait too long, so I'll jump in with it now.  I was
considering using python to program the tool I was discussing.  If
python is used, I don't care whether there is any change to EXPLAIN
ANALYZE -- it only takes a few lines of code to pull out what I need
in the current form.  My concern is whether python is supported on
all of the target platforms.  Python does well running queries directly
against PostgreSQL, and is fine for shelling out to run commands
(such as those needed to stop the back end, flush cache, and
start the back end again).  I think I will be significantly more
productive at this in python than if I used C or perl, but if it's not
accessible to the PostgreSQL community as a whole, I'll cope.
Comments, anyone?
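To illustrate what I mean about only needing a few lines, here is a
rough sketch of the sort of extraction I have in mind, assuming the
current text format of EXPLAIN ANALYZE node lines (the field names in
the result dicts are just my own working names):

```python
import re

# Matches the cost and actual-time figures on a plan node line, e.g.:
#   Seq Scan on t  (cost=0.00..458.00 rows=10000 width=244)
#     (actual time=0.012..3.145 rows=10000 loops=1)
NODE_RE = re.compile(
    r'\(cost=(?P<startup>\d+\.\d+)\.\.(?P<total>\d+\.\d+) '
    r'rows=(?P<est_rows>\d+) width=\d+\) '
    r'\(actual time=(?P<a_start>\d+\.\d+)\.\.(?P<a_total>\d+\.\d+) '
    r'rows=(?P<act_rows>\d+) loops=(?P<loops>\d+)\)'
)

def extract_nodes(explain_output):
    """Pull (estimated cost, actual ms, estimated rows, actual rows)
    out of each node line of EXPLAIN ANALYZE text output."""
    nodes = []
    for line in explain_output.splitlines():
        m = NODE_RE.search(line)
        if m:
            nodes.append({
                'cost': float(m.group('total')),
                'actual_ms': float(m.group('a_total')),
                'est_rows': int(m.group('est_rows')),
                # total rows produced = rows per loop * loops
                'act_rows': int(m.group('act_rows')) * int(m.group('loops')),
            })
    return nodes
```

Each dict would then become a row in the results database, keyed by
test case and plan node type.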

Perhaps the place we can share code is starting at the point where
EXPLAIN ANALYZE results have been inserted into a database.
The analysis and reporting from that point might be something which
could be common code.
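For instance, once each node's estimated cost and actual time are rows
in a table, the shared analysis step largely reduces to grouping by
node type and computing the ratio statistics.  A rough sketch (the
tuple layout is just an assumption for illustration):

```python
import statistics
from collections import defaultdict

def ratio_summary(rows):
    """rows: iterable of (node_type, actual_ms, cost) tuples.
    Returns per-node-type count, mean, and standard deviation of
    actual time divided by cost, so outliers can be flagged."""
    by_type = defaultdict(list)
    for node_type, actual_ms, cost in rows:
        if cost > 0:  # skip degenerate nodes to avoid division by zero
            by_type[node_type].append(actual_ms / cost)
    summary = {}
    for node_type, ratios in by_type.items():
        summary[node_type] = {
            'n': len(ratios),
            'mean': statistics.mean(ratios),
            'stdev': statistics.stdev(ratios) if len(ratios) > 1 else 0.0,
        }
    return summary
```

The same routine run without the grouping gives the overall figures,
and nodes whose ratio falls several deviations from the mean are the
ones the report would call out.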

I'm not yet familiar with DBT-OSDL, Jan-TPCW, OSDBB and eDB,
but I'll look them up -- that's exactly the sort of suggestion I was hoping
to get, so that I don't need to start from scratch in generating the test
data.  Anyone want to point out something else I should consider?

I need to have somewhere for the work to live, and I quite frankly
would just as soon dodge the overhead of setting up and maintaining
something, so if no one has objections or other suggestions, I'm
inclined to take you up on your offer to use your testperf project.
Does anyone think some other location would be more appropriate?

How much time is a question I'll have to discuss with my client after
the concept has been firmed up and I work out a design from which
I can estimate.  My off-the-cuff guess is that it will require, and I can
get approval for, about three FTE weeks.  Mixed in with other things
which require my attention, that's probably spread over two to three
calendar months.  If we run into critical optimization problems, this
could get a boost in priority, which would shorten the timeline.  It's
also possible I might have to set it aside to work on some issue
which comes out of the blue -- I never know for sure, so I don't
want anyone to count on this for anything with a hard deliverable
date until we actually have the working tool.

If we get into much more detail, I assume we should take this
off-list.

-Kevin


>>> Josh Berkus <josh@agliodbs.com> 10/13/05 12:25 PM >>>
Kevin,

> I'm looking at trying to fix some clear flaws in costing which cause
> some of our real-world queries to choose sub-optimal plans under PostgreSQL.
> It's clear that there needs to be a tool to analyze the accuracy of
> costing for a variety of queries, both to direct any efforts to fix
> problems and to test for possible costing regressions.  As far as I can
> tell, no such tool currently exists.  If I've missed something, please
> let me know, even if it's ad hoc or incomplete.

Actually, this is pretty completely what I've been thinking about for
the last year.   I'm very happy that someone else is interested in
working on it.

> (2)  A large database must be created for these tests, since many
> issues don't show up in small tables.  The same data must be generated
> in every database, so results are comparable and reproducible.
>
> (3)  Developers should be able to easily add test cases, either for
> their own use or contributed to the community.

Sure.  However, I think it's important to separate the test cases from
the cost collection tool.  Our *best* test cases will be real production
applications.   For synthetic test cases, we can look to improving
DBT-OSDL, Jan-TPCW, OSDBB and eDB's test (if they ever publish it).  The
only thing that mess of tests is lacking is easy setup and portability.


> (7)  I envision a process to create a test database, populate it, run a
> series of test cases with EXPLAIN ANALYZE, capture the results, parse
> the results and store them in a database, analyze the results to find
> means and standard deviations both overall and for each type of plan,
> and report summaries and outliers -- with references to the test cases.
> The primary statistic of interest is actual time divided by cost.  This
> seems like it would be of interest overall, and within the permutations
> mentioned above for a single query.

I would actually like to do this differently.   I think an asynchronous
logging mechanism is more useful, because there are cost estimation
problems which don't show up except under conditions of concurrency and
heavy server load.  For this reason, it's very important that this kind
of cost collection could be performed on a production application.

What that would mean is some process whereby the system could sample,
say, 5% of the queries being run (at random) and run EXPLAIN ANALYZEs
against them, logging the results in a way that could be tabularized.
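Concretely, such a sampler might look like the following minimal
sketch (the wrapper function, the log_plan sink, and the 5% rate are
all hypothetical, purely to show the shape of the idea):

```python
import random

SAMPLE_RATE = 0.05  # hypothetical fraction of queries to profile

def maybe_explain(cursor, sql, params=None):
    """Run the query; for a random sample of calls, first capture
    its EXPLAIN ANALYZE output for later tabulation."""
    if random.random() < SAMPLE_RATE:
        cursor.execute("EXPLAIN ANALYZE " + sql, params)
        plan_lines = [row[0] for row in cursor.fetchall()]
        log_plan(sql, plan_lines)  # hypothetical logging sink
    cursor.execute(sql, params)

def log_plan(sql, plan_lines):
    """Placeholder: append the captured plan to whatever store
    the analysis step reads from."""
    pass
```

Note that this doubles the work for sampled queries, since the
statement runs once under EXPLAIN ANALYZE and once for real; that
cost is part of why only a small fraction would be sampled.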

Speaking of which, I think you're missing an important first step:
tabular output for EXPLAIN ANALYZE.   A whole host of query testing
tools could be developed if it were easy to shove EA results into a
format where statistics could be run on them.  Without it, it's pretty
hard to do the rest of the testing.

> So, what do you think?

How much time do you have to spend on this?

I'd like to offer you the TestPerf project on pgfoundry
(www.pgfoundry.org/projects/testperf) as a container for your work on
this idea.   I also have access to a variety of test machines for
performance tests.

--Josh



