Thread: Performance tuning

From: Jean-Christophe Boggio
Date:
Hi,

I'm trying to write a log analyzer for postgresql, mainly in order to
detect "expensive" queries and optimize them.

The simple extractor I have come up with works well, but it's limited
by PostgreSQL's log system:

StartTransactionCommand
query: select * from identity where login='cat';
ProcessQuery
! system usage stats:
!       0.012632 elapsed 0.010000 user 0.000000 system sec
!       [0.030000 user 0.010000 sys total]
!       0/0 [0/0] filesystem blocks in/out
!       49/55 [309/285] page faults/reclaims, 0 [0] swaps
!       0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
!       0/0 [0/0] voluntary/involuntary context switches
! postgres usage stats:
!       Shared blocks:         21 read,          0 written, buffer hit rate = 93.27%
!       Local  blocks:          0 read,          0 written, buffer hit rate = 0.00%
!       Direct blocks:          0 read,          0 written
CommitTransactionCommand
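The pairing logic such an extractor relies on can be sketched roughly as below. This is a hypothetical illustration, not the author's actual code: it matches each "query:" line with the "elapsed" figure in the stats block that follows it, and only works if a single backend writes to the log sequentially (which is exactly the assumption that breaks, as explained next). The 0.01 s threshold is an arbitrary example.

```python
import re

# Matches the elapsed-seconds figure in a "! system usage stats" line,
# e.g. "!       0.012632 elapsed 0.010000 user 0.000000 system sec"
ELAPSED = re.compile(r'!\s+([\d.]+) elapsed')

def expensive_queries(lines, threshold=0.01):
    """Yield (elapsed, query) pairs for queries slower than threshold.

    Assumes the stats block for a query always follows its "query:"
    line with no interleaving from other backends (hypothetical).
    """
    current = None
    for line in lines:
        if line.startswith('query: '):
            current = line[len('query: '):].strip()
        else:
            m = ELAPSED.search(line)
            if m and current is not None:
                elapsed = float(m.group(1))
                if elapsed >= threshold:
                    yield elapsed, current
                current = None

log = """\
StartTransactionCommand
query: select * from identity where login='cat';
ProcessQuery
! system usage stats:
!       0.012632 elapsed 0.010000 user 0.000000 system sec
CommitTransactionCommand
"""
print(list(expensive_queries(log.splitlines())))
```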

Would it be possible to have a unique identifier on the "query:" line
and another one at the beginning of the stats?

When there are many queries per second, the lines get mixed up and
you can't tell which result comes from which query, so the stats are
useless.

Maybe the backend's PID, if the backend both echoes the query and
displays the stats (but I guess the stats come from the postmaster)...

If you can do anything, I'd love to try!

--
Jean-Christophe Boggio
cat@thefreecat.org
Independent Consultant and Developer
Delphi, Linux, Perl, PostgreSQL