Thread: Profiling custom datatypes

Profiling custom datatypes

From:
William Harrower
Date:
Hi,

I'm trying to profile the memory usage and CPU time of some code I've
written as part of a custom datatype. I've tried using valgrind's
cachegrind tool, but it doesn't seem to work as expected. This is the
command I'm using:

valgrind --tool=cachegrind --trace-children=yes ./postgres -D ../data

Running this and then invoking a SQL query that causes my code to
execute doesn't seem to result in any output relating to my datatype,
even though its code is taking the majority of the CPU time.
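
In case it helps, the rest of the workflow looks roughly like this (the
table, column and operator names are placeholders, not the real ones):

# in a second terminal, note which backend serves the session and run
# the query that exercises the datatype
psql -c "SELECT pg_backend_pid();"
psql -c "SELECT * FROM test_table WHERE val ~~~ 'some pattern';"

# cachegrind only writes its cachegrind.out.<pid> files when each
# process exits, so the server has to be shut down before annotating
# the backend's file
cg_annotate cachegrind.out.<pid>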

Does anyone know what I'm doing wrong -- do I have to do something
special for valgrind to inspect shared libraries? I have debug symbols
compiled in everywhere.

Ignoring valgrind specifically, does anyone know of any other tools that
can be used to profile the memory usage and CPU time/load of a custom
datatype library? Recent changes I made to client-side code resulted in
an increase in the size of each instance of the type it uploads to the
database, which, for reasons unknown, has caused the search time (using
a custom 'match' operator) to go through the roof. My suspicion is that
the cache is no longer large enough to hold the entire table (though
perhaps it was before the change), so far more disk reads are needed.
Hopefully a decent profiler will make this clear.

Many thanks for any help,
Will.

Re: Profiling custom datatypes

From:
Tom Lane
Date:
William Harrower <wjh105@doc.ic.ac.uk> writes:
> Ignoring valgrind specifically, does anyone know of any other tools that
> can be used to profile the memory usage and CPU time/load of a custom
> datatype library?

oprofile on recent Fedora (and probably other Linux distros) pretty much
"just works" for shared libraries, though it only tells you about CPU
profile not memory usage.  I've never been able to get gprof to do
anything useful with shlibs, on any platform :-(
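
The opcontrol recipe is roughly like this (needs root; the .so path
below is just an example -- point opreport at wherever your library
actually got installed):

# one-off setup; skip kernel profiling unless you care about it
opcontrol --init
opcontrol --no-vmlinux

# collect samples while the slow query runs, then flush and stop
opcontrol --start
#   ... run the query in psql ...
opcontrol --dump
opcontrol --shutdown

# per-symbol breakdown restricted to the datatype's shared library
opreport -l /usr/lib/pgsql/my_datatype.so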

> Recent changes I made to client-side code resulted in
> an increase in the size of each instance of the type it uploads to the
> database, which, for reasons unknown, has caused the search time (using
> a custom 'match' operator) to go through the roof. My suspicion is that
> the cache is no longer large enough to hold the entire table (though
> perhaps it was before the change), so far more disk reads are needed.
> Hopefully a decent profiler will make this clear.

Surely just watching iostat or vmstat would prove or disprove that
theory.  Keep in mind also that CPU profilers aren't going to tell
you much about I/O costs anyway.
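
For instance, something along these lines (table name is a placeholder)
will show whether the table has outgrown shared_buffers and whether the
query is actually generating disk reads:

# watch disk traffic and memory while the slow query runs
vmstat 1
iostat -x 1

# compare the table's on-disk size against shared_buffers
psql -c "SELECT pg_size_pretty(pg_total_relation_size('test_table'));"
psql -c "SHOW shared_buffers;"

# blocks found in shared buffers vs. blocks that had to be read in
psql -c "SELECT heap_blks_read, heap_blks_hit
         FROM pg_statio_user_tables WHERE relname = 'test_table';"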

            regards, tom lane