Decibel! <decibel@decibel.org> wrote:
> How hard would it be to dump this information to a table, or some
> other more-permanent form of storage? Of course there would need to
> be some means of cleaning that up over time, but if it's a simple
> table you can DELETE from, we could put the burden on the users to do
> that on occasion (I believe Oracle does something similar). It would
> also be good to periodically save everything to the table so that
> data wasn't completely lost on a crash.
I had tried using a normal table to store the stats information,
but several acrobatic hacks were needed to keep performance acceptable.
We need to avoid normal UPDATEs to increment the counters,
because they require row-level exclusive locks and kill concurrency.
My idea was to modify heap tuples directly in their pages:

    buffer = ReadBuffer(stats_rel, blknum);
    LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
    htup = (HeapTupleHeader) PageGetItem(BufferGetPage(buffer), itemid);
    statobj = (StatEntry *) ((char *) htup + htup->t_hoff);
    statobj->calls++;
    LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
    ReleaseBuffer(buffer);
It seemed to work in my tests, but of course it is too dangerous.
(If we supported an ISAM-like storage engine, we could use it here.)
Another idea is to save the stats information into a file when the
server stops. Server crashes would still be a problem, but the
statistics would survive a normal restart.
However, there is no shutdown callback for plugins:
_PG_fini is not called at shutdown; it is only called on re-LOAD.
Hmmm... can I use on_shmem_exit() for the purpose?
i.e., on_shmem_exit( save_stats_to_file_if_i_am_postmaster )
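A minimal sketch of that idea, registering the callback from _PG_init
(save_stats_to_file is a hypothetical helper that would serialize the
shared-memory stats; the names are illustrative, not existing code):

    #include "postgres.h"
    #include "miscadmin.h"
    #include "storage/ipc.h"

    static void save_stats_to_file(void);    /* hypothetical serializer */

    /* on_shmem_exit callbacks take (int code, Datum arg) */
    static void
    save_stats_on_exit(int code, Datum arg)
    {
        /* only the postmaster should write the file, once, at shutdown */
        if (!IsUnderPostmaster)
            save_stats_to_file();
    }

    void
    _PG_init(void)
    {
        on_shmem_exit(save_stats_on_exit, (Datum) 0);
    }

The check on IsUnderPostmaster is meant to keep every exiting backend
from rewriting the file; whether on_shmem_exit actually fires in the
postmaster at the right moment is exactly the open question above.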
> I'm concerned because ISTM that in a high velocity environment you'd
> over-run shared memory pretty quickly if you had a lot of different
> queries you were running.
It might be good to ignore queries using the simple protocol, but I'm
not sure how to distinguish them from extended or prepared queries.
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center