Re: Monitoring Object access

From: Craig Ringer
Subject: Re: Monitoring Object access
Date:
Msg-id: 4C8D855C.2060303@postnewspapers.com.au
In reply to: Re: Monitoring Object access  (adi hirschtein <adiso75@gmail.com>)
Responses: Re: Monitoring Object access  (adi hirschtein <adiso75@gmail.com>)
List: pgsql-general
On 09/12/2010 10:02 PM, adi hirschtein wrote:
> Hi Craig,
>
> Thanks a lot for the quick response!
> I'm coming from the Oracle side of the house, and in Oracle, for instance,
> you use a shared buffer as well, but you are still able to see which
> session is waiting for which blocks, and if one session is doing the
> "real" I/O then the other one waits on a "wait for other session"
> event, so you are able to know who did the actual I/O.

There's nothing like that in PostgreSQL. There's some lock monitoring
support for seeing what transactions hold locks and which other
transactions are waiting on those locks, but AFAIK nothing like that for
I/O. PostgreSQL does have DTrace hooks, so if you're on Solaris or some
BSDs you might be able to use those to get the data you want.
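For the lock side, a query along these lines shows who is waiting on whom.
It's only a rough sketch - the pg_stat_activity column names here assume a
reasonably recent release (older ones spell them procpid and current_query):

-- sessions waiting on a lock, and the sessions that currently hold it
SELECT waiting.pid         AS waiting_pid,
       waiting_act.query   AS waiting_query,
       holder.pid          AS holding_pid,
       holder_act.query    AS holding_query
FROM pg_locks waiting
JOIN pg_locks holder
  ON  holder.locktype = waiting.locktype
  AND holder.database      IS NOT DISTINCT FROM waiting.database
  AND holder.relation      IS NOT DISTINCT FROM waiting.relation
  AND holder.page          IS NOT DISTINCT FROM waiting.page
  AND holder.tuple         IS NOT DISTINCT FROM waiting.tuple
  AND holder.transactionid IS NOT DISTINCT FROM waiting.transactionid
  AND holder.pid <> waiting.pid
JOIN pg_stat_activity waiting_act ON waiting_act.pid = waiting.pid
JOIN pg_stat_activity holder_act  ON holder_act.pid  = holder.pid
WHERE NOT waiting.granted
  AND holder.granted;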

It'd be a pretty significant job to add a decent I/O monitoring system
to PostgreSQL. Personally, if I needed something like that, I'd want to
base it on an existing system-level tracing toolkit like Solaris's
DTrace or Linux's "perf". I'd want to add some additional
instrumentation hooks - some of which already exist in Pg for DTrace -
to permit the tools to beaware of transactions, statements, the current
database, which tables are which, which indexes are associated with
which tables, etc. Then I'd use the data collected by the performance
monitoring tools to report on load associated with particular users,
indexes, tables, queries, etc. That way I'd be able to handle things
like whether a request was satisfied from the OS buffer cache or had to go
to real disk, report on disk queue depth, etc., as part of the whole
system. It'd be a big job even with the use of existing trace tools to help.

Currently there are some DTrace hooks, but I don't think there's any
kind of integrated toolset like I've described to use the monitoring
hooks plus the existing system hooks to do detailed reporting of
load/user, load/tablespace, etc.

> the reason behind it is that you want to check which objects are being
> heavily hit by which business processes or users and then tier your
> storage accordingly.

At the moment, all you can really do is turn up the logging levels to
log queries, logins, etc. Then watch pg_stat_activity and use
system-level tools like iostat, vmstat, top, perf/dtrace, etc. If you
see backends that're hogging resources you can look their pid up in
pg_stat_activity or the logs, see what they were doing, and run
controlled tests to see what can be improved.

It's somewhat clumsy, but seems to work pretty well most of the time.
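Looking a pid up in pg_stat_activity is just something like this (a sketch;
again, older releases call these columns procpid and current_query):

-- 12345 is a made-up pid spotted in top/iostat output
SELECT pid, usename, datname, query_start, query
FROM pg_stat_activity
WHERE pid = 12345;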

Nobody has stepped up to build a comprehensive tracing and performance
framework - and even if they did, they'd have to make it lightweight
enough that it didn't slow PostgreSQL down when it wasn't in use, show
that it wouldn't add an excessive maintenance burden for the developers,
show that it wouldn't break or produce incorrect results the first time
something changed, etc. The Linux kernel demonstrates just how hard
getting this right can be. So does the amount of effort Sun put into
DTrace. Sure, PostgreSQL isn't an OS kernel, but it's far from simple.

I guess that's why Oracle charges the big bucks - because of all the
extras they include that round the database out into the kitchen-sink
monster that it is.

> is there any place other than pg_stat_activity that you think I should
> take a look at?

System-level tools and the PostgreSQL logs, especially after proper
configuration. There are some tools on pgfoundry that help a little with
log analysis.
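By "proper configuration" I mostly mean the logging GUCs. Something like
this shows the current settings - values are normally changed in
postgresql.conf and need a reload:

SHOW log_min_duration_statement;  -- log statements slower than this
SHOW log_connections;             -- log logins
SHOW log_line_prefix;             -- include %u and %d to tag log lines
                                  -- with user and database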

--
Craig Ringer
