Interesting - I haven't seen that tool before. I'll have to check it
out when I get a chance. Thanks!
On Wed, 17 Jan 2007 20:32:37 +0100, "Tomas Vondra" <tv@fuzzy.cz> said:
> > That's about 32% dead rows. It might be worth scheduling a VACUUM FULL,
> > but it's not as bad as I feared it might be. It looks to me like you
> > could probably use a faster I/O subsystem in that machine, though.
> >
> > If lowering random_page_cost fixes your issues, then I'd just run
> > with the lower value for now. Note that while lowering it may fix one
> > query, it may break another. Tuning pgsql, like any database, is as
> > much art as science...
>
> A nice feature of PostgreSQL is the ability to log 'slow queries'
> (those exceeding some time limit) - you can use it to compare the
> performance of various settings. We're using it to detect badly
> written SQL, etc.
>
> Just set the threshold to something reasonable (the right value depends
> on your workload); for example, we started at about 500 ms and, after
> about two months of improvements, lowered it to about 100 ms.
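> The setting in question is log_min_duration_statement; a minimal
> postgresql.conf sketch (the 500 ms value matches the starting point
> above, the log_line_prefix value is just an illustration):
>
>     # log any statement running longer than 500 ms
>     # (-1 disables the feature, 0 logs every statement)
>     log_min_duration_statement = 500
>
>     # include a timestamp and the backend PID so log lines
>     # can be correlated during later analysis
>     log_line_prefix = '%t [%p] '
>
> A configuration reload (pg_ctl reload) is enough to pick up the change;
> no server restart is needed.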
>
> You can analyze the log by hand, but about a year ago I wrote a tool
> that parses it and builds a set of HTML reports (an overview plus
> details for each query) along with graphs and example queries.
>
> You can get it here: http://opensource.pearshealthcyber.cz/
>
> Just beware, it's written in PHP and it definitely is not perfect:
>
> (1) memory requirements (about 4x the size of the log)
> (2) not too fast (about 20 minutes on a P4 @ 3GHz for a 200MB log)
> (3) it requires a certain log format (see the page)
>
> I did some improvements to the script recently, but forgot to upload it.
> I'll do that tomorrow.
>
> Tomas
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Have you searched our list archives?
>
> http://archives.postgresql.org