On Fri, 2006-10-13 at 12:48, Jim C. Nasby wrote:
> On Thu, Oct 12, 2006 at 05:39:20PM -0500, Scott Marlowe wrote:
> > > > It seems to me the first logical step would be having the ability to
> > > > flip a switch and when the postmaster hits a slow query, it saves both
> > > > the query that ran long, as well as the output of explain or explain
> > > > analyze or some bastardized version missing some of the inner timing
> > > > info. Even just saving the parts of the plan where the planner thought
> > > > it would get 1 row and got instead 350,000 and was using a nested loop
> > > > to join would be VERY useful. I could see something like that
> > > > eventually evolving into a self tuning system.
> > >
> > > Saves it and then... does what? That's the whole key...
> >
> > It's meant as a first step. I could certainly use a daily report on
> > which queries had bad plans so I'd know which ones to investigate
> > without having to run them each myself in explain analyze. Again, my
> > point was to do it incrementally. This is something someone could do
> > now, and someone could build on later.
> >
> > To start with, it does nothing. Just saves it for the DBA to look at.
> > Later, it could feed any number of the different hinting systems people
> > have been proposing.
> >
> > It may well be that by first looking at the data collected from problem
> > queries, the solution for how to adjust the planner becomes more
> > obvious.
>
> Yeah, that would be useful to have. The problem I see is storing that
> info in a format that's actually useful... and I'm thinking that a
> logfile doesn't qualify since you can't really query it.
grep / sed / awk can do amazing things with a text file.

I'd actually recommend URL encoding (or something like that) so each
saved entry would be a single line; then you could grep for certain
things and feed the matching lines to a simple decoder.

We do this with our log files at work and can search through some
fairly large files for the exact entry we need quite quickly.
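To make the single-line idea concrete, here's a minimal sketch in
Python (an assumption on my part -- any language with a URL-encoding
routine would do): a multi-line entry such as an EXPLAIN plan gets
percent-encoded so newlines become %0A, the whole thing lands on one
grep-able line, and a match decodes back to the original text.

```python
# Sketch: URL-encode a multi-line log entry (e.g. an EXPLAIN plan)
# so it becomes a single line, then decode it back after grepping.
from urllib.parse import quote, unquote

plan = (
    "Nested Loop  (cost=0.00..4.28 rows=1 width=4)\n"
    "  ->  Seq Scan on foo  (cost=0.00..1.01 rows=1 width=4)"
)

# Encode everything, including newlines (%0A), into one line.
encoded = quote(plan, safe="")
assert "\n" not in encoded          # safe to grep line-by-line now

# After something like: grep 'Nested%20Loop' query.log
# each matching line decodes back to the original multi-line plan.
decoded = unquote(encoded)
assert decoded == plan
```

The decoder is the whole "de-encoder" step: one `unquote()` call per
matched line.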