Mike Mascari's idea (er... his assembling of the other ideas) still
sounds like the Best Solution though.
:-)
+ Justin
+++
I like the idea of updating shared memory with the performance statistics,
current query execution information, etc., providing a function to fetch
those statistics, and perhaps providing a system view (e.g. pg_performance)
based upon such functions which can be queried by the administrator.
FWIW,
Mike Mascari
mascarm@mascari.com
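[For illustration, the interface sketched above might look something like
this in SQL. The function name, view name, and column list are purely
hypothetical; the point is a set-returning function over the shared-memory
stats, wrapped in a system view:]

```sql
-- Hypothetical sketch only: pg_stat_get_backend_info() would be a
-- set-returning function that reads the per-backend statistics out
-- of shared memory, one row per backend.
--
--   pg_stat_get_backend_info()
--     RETURNS SETOF (pid integer, current_query text, ...)
--
-- A system view then gives the administrator plain SQL access:
CREATE VIEW pg_performance AS
    SELECT * FROM pg_stat_get_backend_info();

-- The admin could then monitor backends with an ordinary query:
--   SELECT pid, current_query FROM pg_performance;
```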
+++
Bruce Momjian wrote:
>
> > I think Bruce wants per-backend data, and this approach would seem to only
> > get the data for the current backend.
> >
> > Also, I really don't like the proposal to write files to /tmp. If we want a
> > perf tool, then we need to have something like 'top', which will
> > continuously update. With 40 backends, the idea of writing 40 files to /tmp
> > every second seems a little excessive to me.
>
> My idea was to use 'ps' to gather most of the information, and just use
> the internal stats when someone clicked on a backend and wanted more
> information.
>
> --
> Bruce Momjian | http://candle.pha.pa.us
> pgman@candle.pha.pa.us | (610) 853-3000
> + If your life is a hard drive, | 830 Blythe Avenue
> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026