Discussion: Analyze tool?

Analyze tool?

From:
Bjørn T Johansen
Date:
We are using both DB2 and PostgreSQL at work, and DB2 has a nice tool, i5 Navigator, where one can enable logging of SQL
statements and then it will recommend indexes that should/could be created to increase speed...
Does a similar tool exist for PostgreSQL?


Regards,

BTJ

--
-----------------------------------------------------------------------------------------------
Bjørn T Johansen

btj@havleik.no
-----------------------------------------------------------------------------------------------
Someone wrote:
"I understand that if you play a Windows CD backwards you hear strange Satanic messages"
To which someone replied:
"It's even worse than that; play it forwards and it installs Windows"
-----------------------------------------------------------------------------------------------

Re: Analyze tool?

From:
Thom Brown
Date:
2010/10/1 Bjørn T Johansen <btj@havleik.no>:
> We are using both DB2 and PostgreSQL at work, and DB2 has a nice tool, i5 Navigator, where one can enable logging of SQL statements and then it will
> recommend indexes that should/could be created to increase speed...
> Does a similar tool exist for PostgreSQL?

You can set log_min_duration_statement to log statements which take
over a certain amount of time, and then use pgFouine to read the log
files and identify the most frequently run queries, and the longest
queries.
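
For example, something along these lines in postgresql.conf (the
threshold value here is only illustrative):

    # Log every statement slower than 250 ms, together with its duration.
    log_min_duration_statement = 250ms
    # Give each log line a timestamp/PID prefix; check the pgFouine docs
    # for the exact prefix format it expects.
    log_line_prefix = '%t [%p]: [%l-1] '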

You can also use the auto_explain contrib module
(http://www.postgresql.org/docs/9.0/static/auto-explain.html)  to log
the plans of queries which take too long.  However, I don't think
pgFouine can use those outputs.. at least not yet.
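
If you want to experiment with it in a single session first (as a
superuser), rather than via postgresql.conf, a minimal sketch:

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = '3s'; -- log plans of queries over 3s
    SET auto_explain.log_analyze = true;      -- include actual times and row
                                              -- counts (adds some overhead)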

But to find out what indexes you'll need, getting used to reading
query plans will help as it will show you more than just where
sequential scans are taking place.  It will also show you what the
planner believes a query will cost compared to how much it actually
costs, which can provide insight into tables which require vacuuming,
indexes which might need clustering, or table stats which require
modifying to match your data.
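
For example (the table and column names here are made up):

    -- Compare the planner's estimated row counts ("rows=...") with the
    -- actual counts reported by ANALYZE; a large mismatch usually points
    -- at stale stats, and a Seq Scan returning few rows hints at a
    -- missing index.
    EXPLAIN ANALYZE
    SELECT * FROM orders WHERE customer_id = 42;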

There might be a tool out there for PostgreSQL like you describe,
although I'm not personally aware of it.

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

Re: Analyze tool?

From:
ioguix
Date:

On 01/10/2010 11:53, Bjørn T Johansen wrote:
> We are using both DB2 and PostgreSQL at work, and DB2 has a nice tool, i5 Navigator, where one can enable logging of SQL statements and then it will
> recommend indexes that should/could be created to increase speed...
> Does a similar tool exist for PostgreSQL?

There is some work on this kind of feature, but nothing officially
released AFAIK.

See http://www.pgcon.org/2010/schedule/events/233.en.html

>
>
> Regards,
>
> BTJ
>

--
Jehan-Guillaume (ioguix) de Rorthais
www.dalibo.com

Re: Analyze tool?

From:
Bjørn T Johansen
Date:
On Fri, 1 Oct 2010 11:30:59 +0100
Thom Brown <thom@linux.com> wrote:

> 2010/10/1 Bjørn T Johansen <btj@havleik.no>:
> > We are using both DB2 and PostgreSQL at work, and DB2 has a nice tool, i5 Navigator, where one can enable logging of SQL statements and then it
> > will recommend indexes that should/could be created to increase speed...
> > Does a similar tool exist for PostgreSQL?
>
> You can set log_min_duration_statement to log statements which take
> over a certain amount of time, and then use pgFouine to read the log
> files and identify the most frequently run queries, and the longest
> queries.

Sounds like something that should be tried...

>
> You can also use the auto_explain contrib module
> (http://www.postgresql.org/docs/9.0/static/auto-explain.html)  to log
> the plans of queries which take too long.  However, I don't think
> pgFouine can use those outputs.. at least not yet.

Ok, plan B...

>
> But to find out what indexes you'll need, getting used to reading
> query plans will help as it will show you more than just where
> sequential scans are taking place.  It will also show you what the
> planner believes a query will cost compared to how much it actually
> costs, which can provide insight into tables which require vacuuming,
> indexes which might need clustering, or table stats which require
> modifying to match your data.

Yes, but it would be nice to be pointed in the right direction first, and it seems like log_min_duration_statement can
be used for that...
Also, running EXPLAIN involves manual work; it would have been nice to have some automatic procedure...

Thx... :)

BTJ

Re: Analyze tool?

From:
Thom Brown
Date:
2010/10/1 Bjørn T Johansen <btj@havleik.no>:
> On Fri, 1 Oct 2010 11:30:59 +0100
> Thom Brown <thom@linux.com> wrote:
>
>> 2010/10/1 Bjørn T Johansen <btj@havleik.no>:
>> > We are using both DB2 and PostgreSQL at work, and DB2 has a nice tool, i5 Navigator, where one can enable logging of SQL statements and then it
>> > will recommend indexes that should/could be created to increase speed...
>> > Does a similar tool exist for PostgreSQL?
>>
>> You can set log_min_duration_statement to log statements which take
>> over a certain amount of time, and then use pgFouine to read the log
>> files and identify the most frequently run queries, and the longest
>> queries.
>
> Sounds like something that should be tried...
>
>>
>> You can also use the auto_explain contrib module
>> (http://www.postgresql.org/docs/9.0/static/auto-explain.html)  to log
>> the plans of queries which take too long.  However, I don't think
>> pgFouine can use those outputs.. at least not yet.
>
> Ok, plan B...
>
>>
>> But to find out what indexes you'll need, getting used to reading
>> query plans will help as it will show you more than just where
>> sequential scans are taking place.  It will also show you what the
>> planner believes a query will cost compared to how much it actually
>> costs, which can provide insight into tables which require vacuuming,
>> indexes which might need clustering, or table stats which require
>> modifying to match your data.
>
> Yes, but it would be nice to be pointed in the right direction first, and it seems like log_min_duration_statement
> can be used for that...
> Also, running EXPLAIN involves manual work; it would have been nice to have some automatic procedure...

I agree.  I believe MSSQL has a similar tool too, so I'm surprised a third
party hasn't already come up with one.

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

Re: Analyze tool?

From:
Rob Sargent
Date:
Then, to get all statements, would one simply set log_min_duration_statement
to some arbitrarily small value?

On 10/01/2010 04:30 AM, Thom Brown wrote:
> 2010/10/1 Bjørn T Johansen <btj@havleik.no>:
>> We are using both DB2 and PostgreSQL at work, and DB2 has a nice tool, i5 Navigator, where one can enable logging of SQL statements and then it will
>> recommend indexes that should/could be created to increase speed...
>> Does a similar tool exist for PostgreSQL?
>
> You can set log_min_duration_statement to log statements which take
> over a certain amount of time, and then use pgFouine to read the log
> files and identify the most frequently run queries, and the longest
> queries.
>
> You can also use the auto_explain contrib module
> (http://www.postgresql.org/docs/9.0/static/auto-explain.html)  to log
> the plans of queries which take too long.  However, I don't think
> pgFouine can use those outputs.. at least not yet.
>
> But to find out what indexes you'll need, getting used to reading
> query plans will help as it will show you more than just where
> sequential scans are taking place.  It will also show you what the
> planner believes a query will cost compared to how much it actually
> costs, which can provide insight into tables which require vacuuming,
> indexes which might need clustering, or table stats which require
> modifying to match your data.
>
> There might be a tool out there for PostgreSQL like you describe,
> although I'm not personally aware of it.
>

Re: Analyze tool?

From:
Thom Brown
Date:
On 1 October 2010 14:43, Rob Sargent <robjsargent@gmail.com> wrote:
> Then, to get all statements, would one simply set log_min_duration_statement
> to some arbitrarily small value?

Well, 0 will do, although if you've got a busy cluster, this will slow
it down.  Its default is -1, which means it's disabled.
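
That is, as a sketch in postgresql.conf:

    log_min_duration_statement = 0    # log every statement and its duration
    #log_min_duration_statement = -1  # the default: duration logging off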

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

Re: Analyze tool?

From:
Robert Gravsjö
Date:

On 2010-10-01 15.43, Rob Sargent wrote:
> Then, to get all statements, would one simply set log_min_duration_statement
> to some arbitrarily small value?

From the default postgresql.conf comments:

-1 is disabled, 0 logs all statements and their durations, > 0 logs only
statements running at least this number of milliseconds

Also see 18.7.2 in docs:
http://www.postgresql.org/docs/current/interactive/runtime-config-logging.html

Regards
/roppert

>
> On 10/01/2010 04:30 AM, Thom Brown wrote:
>> 2010/10/1 Bjørn T Johansen <btj@havleik.no>:
>>> We are using both DB2 and PostgreSQL at work, and DB2 has a nice tool, i5 Navigator, where one can enable logging of SQL statements and then it will
>>> recommend indexes that should/could be created to increase speed...
>>> Does a similar tool exist for PostgreSQL?
>>
>> You can set log_min_duration_statement to log statements which take
>> over a certain amount of time, and then use pgFouine to read the log
>> files and identify the most frequently run queries, and the longest
>> queries.
>>
>> You can also use the auto_explain contrib module
>> (http://www.postgresql.org/docs/9.0/static/auto-explain.html)  to log
>> the plans of queries which take too long.  However, I don't think
>> pgFouine can use those outputs.. at least not yet.
>>
>> But to find out what indexes you'll need, getting used to reading
>> query plans will help as it will show you more than just where
>> sequential scans are taking place.  It will also show you what the
>> planner believes a query will cost compared to how much it actually
>> costs, which can provide insight into tables which require vacuuming,
>> indexes which might need clustering, or table stats which require
>> modifying to match your data.
>>
>> There might be a tool out there for PostgreSQL like you describe,
>> although I'm not personally aware of it.
>>
>

Re: Analyze tool?

From:
bricklen
Date:
2010/10/1 Thom Brown <thom@linux.com>:
> But to find out what indexes you'll need, getting used to reading
> query plans will help as it will show you more than just where
> sequential scans are taking place.  It will also show you what the
> planner believes a query will cost compared to how much it actually
> costs, which can provide insight into tables which require vacuuming,
> indexes which might need clustering, or table stats which require
> modifying to match your data.

A couple of related tools to help with explain plans are the query
analyzer in pgAdmin
(http://www.pgadmin.org/images/screenshots/pgadmin3_macosx.png) and
the online viewer at http://explain.depesz.com/
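
For instance, the plain-text output of something like this (table name
made up) is what you'd paste into explain.depesz.com:

    EXPLAIN ANALYZE SELECT count(*) FROM orders;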

Re: Analyze tool?

From:
Thom Brown
Date:
2010/10/1 bricklen <bricklen@gmail.com>:
> 2010/10/1 Thom Brown <thom@linux.com>:
>> But to find out what indexes you'll need, getting used to reading
>> query plans will help as it will show you more than just where
>> sequential scans are taking place.  It will also show you what the
>> planner believes a query will cost compared to how much it actually
>> costs, which can provide insight into tables which require vacuuming,
>> indexes which might need clustering, or table stats which require
>> modifying to match your data.
>
> A couple of related tools to help with explain plans are the query
> analyzer in pgAdmin
> (http://www.pgadmin.org/images/screenshots/pgadmin3_macosx.png) and
> the online viewer at http://explain.depesz.com/


Believe it or not, I still prefer to read the plain text query plan,
although pgAdmin's graphical representation is quite nice.

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935