Discussion: Getting time-dependent load statistics

Getting time-dependent load statistics

From
Torsten Bronger
Date:
Hi there!

Yesterday I ported a web app to PG.  Every 10 minutes, a cron job
scanned the log files of MySQL and generated a plot showing the
queries/sec for the last 24h.  (Admittedly queries/sec is not the
holy grail of DB statistics.)

But I'd still like to have something like this.  At the moment I just
do the same with PG's log file, with

    log_statement_stats = on

But generating these plots is costly (e.g. I don't need all the
lines starting with !), and interpreting them is equally costly.  Do
you have a suggestion for a better approach?

Cheers,
Torsten.

--
Torsten Bronger, aquisgrana, europa vetus
                   Jabber ID: torsten.bronger@jabber.rwth-aachen.de

Re: Getting time-dependent load statistics

From
Bill Moran
Date:
In response to Torsten Bronger <bronger@physik.rwth-aachen.de>:

> Hi there!
>
> Yesterday I ported a web app to PG.  Every 10 minutes, a cron job
> scanned the log files of MySQL and generated a plot showing the
> queries/sec for the last 24h.  (Admittedly queries/sec is not the
> holy grail of DB statistics.)
>
> But I'd still like to have something like this.  At the moment I just
> do the same with PG's log file, with
>
>     log_statement_stats = on
>
> But generating these plots is costly (e.g. I don't need all the
> lines starting with !), and interpreting them is equally costly.  Do
> you have a suggestion for a better approach?

Turn on stats collection and have a look at the various pg_stat* tables.
They'll have stats that you can quickly access with considerably lower
overhead.
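
For instance, something like this gives a quick overview (a sketch: it
assumes a release new enough to have the row counters in
pg_stat_database, and that track_counts, the stats-collection setting
on recent releases, hasn't been turned off):

    SELECT datname, xact_commit, xact_rollback,
           tup_returned, tup_fetched, tup_inserted,
           tup_updated, tup_deleted
    FROM pg_stat_database;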

Doing it the way you're doing it is like driving from Pittsburgh to
Maine to get to Ohio.

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

Re: Getting time-dependent load statistics

From
"Joshua D. Drake"
Date:
On Fri, 2009-02-20 at 17:11 +0100, Torsten Bronger wrote:
> Hi there!
>
> Yesterday I ported a web app to PG.  Every 10 minutes, a cron job
> scanned the log files of MySQL and generated a plot showing the
> queries/sec for the last 24h.  (Admittedly queries/sec is not the
> holy grail of DB statistics.)
>
> But I'd still like to have something like this.  At the moment I just
> do the same with PG's log file, with
>
>     log_statement_stats = on
>
> But generating these plots is costly (e.g. I don't need all the
> lines starting with !), and interpreting them is equally costly.  Do
> you have a suggestion for a better approach?
>

Do you want queries, or transactions? If you want transactions you
already have that in pg_stat_database. Just do this every 10 minutes:

psql -U <user> -d <database> -c "select now() as time, sum(xact_commit)
as transactions from pg_stat_database"
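
Note that xact_commit is cumulative since the statistics were last
reset, so for a plot you want the difference between consecutive
samples.  A sketch (tx_samples(t timestamptz, n bigint) is a
hypothetical table holding the sampled output):

    -- rate between each sample and the next one
    SELECT b.t,
           (b.n - a.n) / extract(epoch FROM b.t - a.t) AS tx_per_sec
    FROM tx_samples a
    JOIN tx_samples b
      ON b.t = (SELECT min(t) FROM tx_samples WHERE t > a.t);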

Joshua D. Drake


> Cheers,
> Torsten.
>
> --
> Torsten Bronger, aquisgrana, europa vetus
>                    Jabber ID: torsten.bronger@jabber.rwth-aachen.de
>
>
--
PostgreSQL - XMPP: jdrake@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


Re: Getting time-dependent load statistics

From
Scott Marlowe
Date:
On Fri, Feb 20, 2009 at 9:11 AM, Torsten Bronger
<bronger@physik.rwth-aachen.de> wrote:
> Hi there!
>
> Yesterday I ported a web app to PG.  Every 10 minutes, a cron job
> scanned the log files of MySQL and generated a plot showing the
> queries/sec for the last 24h.  (Admittedly queries/sec is not the
> holy grail of DB statistics.)
>
> But I'd still like to have something like this.  At the moment I just
> do the same with PG's log file, with
>
>    log_statement_stats = on
>
> But generating these plots is costly (e.g. I don't need all the
> lines starting with !), and interpreting them is equally costly.  Do
> you have a suggestion for a better approach?

You can turn on log_duration, which will log just the duration of each
query.  That's a handy little metric to have; every so often I turn it
on and chart average query run times, etc., along with the actual
queries.  I also turn on logging of long-running queries, say ones
taking 5 or 10 seconds or more.
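
In postgresql.conf that looks something like this (a sketch; the
threshold is in milliseconds):

    log_duration = on                  # log the duration of every completed statement
    log_min_duration_statement = 5000  # also log the text of statements taking >= 5s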

Re: Getting time-dependent load statistics

From
Torsten Bronger
Date:
Hi there!

Joshua D. Drake writes:

> On Fri, 2009-02-20 at 17:11 +0100, Torsten Bronger wrote:
>
>> Yesterday I ported a web app to PG.  Every 10 minutes, a cron job
>> scanned the log files of MySQL and generated a plot showing the
>> queries/sec for the last 24h.  (Admittedly queries/sec is not the
>> holy grail of DB statistics.)
>>
>> But I'd still like to have something like this.  [...]
>>
>
> Do you want queries, or transactions? If you want transactions you
> already have that in pg_stat_database. Just do this every 10
> minutes:
>
> psql -U <user> -d <database> -c "select now() as time, sum(xact_commit)
> as transactions from pg_stat_database"

Well, I'm afraid transactions are too different from each other.
Currently, I'm experimenting with

    SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
           tup_deleted
    FROM pg_stat_database WHERE datname = 'chantal';

though I'm not sure whether this makes sense at all.  ;-)  For example,
does "tup_fetched" imply "tup_returned"?

Cheers,
Torsten.

--
Torsten Bronger, aquisgrana, europa vetus
                   Jabber ID: torsten.bronger@jabber.rwth-aachen.de

Re: Getting time-dependent load statistics

From
Torsten Bronger
Date:
Hi there!

Torsten Bronger writes:

> [...]  Currently, I'm experimenting with
>
>     SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
>            tup_deleted
>     FROM pg_stat_database WHERE datname = 'chantal';

Strangely, the statistics coming out of it are extremely high.  I just
dumped my database with my web framework's built-in tool and got
approximately 50 times as many row accesses from the command above as
there are objects in my database.  The dump routine of my web framework
may do redundant things, but not to this extent ...
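
Perhaps relevant: these counters are cumulative since the statistics
were last reset, and they also count internal row accesses such as
system catalog lookups, so the absolute numbers can dwarf the table
sizes.  To measure the dump alone, something like this might work (a
sketch; pg_stat_reset() zeroes the counters for the current database
and needs appropriate privileges):

    SELECT pg_stat_reset();
    -- ... run the dump ...
    SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
           tup_deleted
    FROM pg_stat_database WHERE datname = 'chantal';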

Cheers,
Torsten.

--
Torsten Bronger, aquisgrana, europa vetus
                   Jabber ID: torsten.bronger@jabber.rwth-aachen.de