Thread: Inconsistent query plan


Inconsistent query plan

From: "Daniel Gish"
Date:
Hi,
We are running PostgreSQL 8.1 and getting dramatically inconsistent results
after running VACUUM ANALYZE.  Sometimes after analyzing the database, the
query planner chooses a very efficient plan (15 rows, 4.744 ms), and
sometimes a terrible one (24 rows, 3536.995 ms).  Here's the abbreviated
query:

SELECT * FROM t1
  INNER JOIN (t2 INNER JOIN (t3 INNER JOIN t4 ON t3.gid = t4.gid)
              ON t3.gid = t2.gid)
  ON t2.eid = t1.eid
WHERE ...

In the efficient plan, t2 is joined to t3 & t4 before being joined to t1.
The inefficient plan joins t1 to t2 before joining to the other tables.

We've experimented with different settings, such as shared_buffers &
max_fsm_pages, to no avail.  Anybody have a suggestion for getting the
efficient plan to execute consistently?  If you'd like to see the actual
query & query plans let me know.

Best Regards,
Dan



Re: Inconsistent query plan

From: Scott Marlowe
Date:
On Tue, 2006-01-24 at 17:15, Daniel Gish wrote:
> Hi,
> We are running PostgreSQL 8.1 and getting dramatically inconsistent results
> after running VACUUM ANALYZE.  Sometimes after analyzing the database, the
> query planner chooses a very efficient plan (15 rows, 4.744 ms), and
> sometimes a terrible one (24 rows, 3536.995 ms).  Here's the abbreviated
> query:
>
> SELECT * FROM t1 INNER JOIN (t2 INNER JOIN (t3 INNER JOIN t4 ON t3.gid =
> t4.gid) ON t3.gid = t2.gid) ON t2.eid = t1.eid WHERE ...
>
> In the efficient plan, t2 is joined to t3 & t4 before being joined to t1.
> The inefficient plan joins t1 to t2 before joining to the other tables.
>
> We've experimented with different settings, such as shared_buffers &
> max_fsm_pages, to no avail.  Anybody have a suggestion for getting the
> efficient plan to execute consistently?  If you'd like to see the actual
> query & query plans let me know.

Have you adjusted the stats target for that column?  See \h alter table
in psql for the syntax.  Then run ANALYZE again.
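
For example, something like this (just a sketch; substitute the join
column whose row estimate is off, using the names from your abbreviated
query):

# ALTER TABLE t2 ALTER COLUMN eid SET STATISTICS 100;
# ANALYZE t2;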

Re: Inconsistent query plan

From: Michael Fuhr
Date:
On Tue, Jan 24, 2006 at 04:15:57PM -0700, Daniel Gish wrote:
> We are running PostgreSQL 8.1 and getting dramatically inconsistent results
> after running VACUUM ANALYZE.  Sometimes after analyzing the database, the
> query planner chooses a very efficient plan (15 rows, 4.744 ms), and
> sometimes a terrible one (24 rows, 3536.995 ms).  Here's the abbreviated
> query:
>
> SELECT * FROM t1 INNER JOIN (t2 INNER JOIN (t3 INNER JOIN t4 ON t3.gid =
> t4.gid) ON t3.gid = t2.gid) ON t2.eid = t1.eid WHERE ...

How abbreviated is that example?  Are you actually joining more
tables than that?  In another recent thread varying plans were
attributed to exceeding geqo_threshold:

http://archives.postgresql.org/pgsql-performance/2006-01/msg00132.php

Does your situation look similar?
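
If so, it's cheap to check the setting and raise it above the number of
relations being joined (a sketch; 12 is the default):

# SHOW geqo_threshold;
# SET geqo_threshold = 16;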

--
Michael Fuhr

Re: Inconsistent query plan

From: "Daniel Gish"
Date:
Hi,
Thanks for your response.  The actual query is below; the joins are only 4
deep.  Adjusting the stats target did help, but not dramatically.


EFFICIENT PLAN:

# explain analyze SELECT ev.eid FROM events ev
    INNER JOIN (events_join ej
        INNER JOIN (groups_join gj INNER JOIN groups g ON gj.gid = g.gid)
        ON ej.gid = gj.gid)
    ON ev.eid = ej.eid
    WHERE ev.status > 0 AND ej.type_id = 1 AND g.deleted = 'f'
      AND g.deactivated != 't' AND ev.type_id >= 0 AND gj.uid = 3
      AND ev.timestart BETWEEN '01/23/2006'::timestamp
          AND '02/23/2006'::timestamp + '1 day - 1 minute';

                                   QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..8370.41 rows=25 width=4) (actual time=4.510..4.510 rows=0 loops=1)
   ->  Nested Loop  (cost=0.00..6124.63 rows=673 width=4) (actual time=0.132..3.116 rows=92 loops=1)
         ->  Nested Loop  (cost=0.00..70.95 rows=8 width=8) (actual time=0.080..2.226 rows=19 loops=1)
               ->  Index Scan using groups_join_uid_idx on groups_join gj  (cost=0.00..16.27 rows=11 width=4) (actual time=0.019..0.471 rows=196 loops=1)
                     Index Cond: (uid = 3)
               ->  Index Scan using groups_pkey on groups g  (cost=0.00..4.96 rows=1 width=4) (actual time=0.005..0.006 rows=0 loops=196)
                     Index Cond: ("outer".gid = g.gid)
                     Filter: ((NOT deleted) AND (deactivated <> true))
         ->  Index Scan using events_join_gid_idx on events_join ej  (cost=0.00..752.45 rows=341 width=8) (actual time=0.010..0.027 rows=5 loops=19)
               Index Cond: (ej.gid = "outer".gid)
               Filter: (type_id = 1)
   ->  Index Scan using events_pkey on events ev  (cost=0.00..3.32 rows=1 width=4) (actual time=0.012..0.012 rows=0 loops=92)
         Index Cond: (ev.eid = "outer".eid)
         Filter: ((status > 0) AND (type_id >= 0) AND (timestart >= '2006-01-23 00:00:00'::timestamp without time zone) AND (timestart <= '2006-02-23 23:59:00'::timestamp without time zone))
 Total runtime: 4.744 ms
(15 rows)


INEFFICIENT PLAN:

# explain analyze SELECT ev.eid FROM events ev
    INNER JOIN (events_join ej
        INNER JOIN (groups_join gj INNER JOIN groups g ON gj.gid = g.gid)
        ON ej.gid = g.gid)
    ON ev.eid = ej.eid
    WHERE ev.status > 0 AND ej.type_id = 1 AND g.deleted = 'f'
      AND g.deactivated != 't' AND ev.type_id >= 0 AND gj.uid = 3
      AND ev.timestart BETWEEN '01/23/2006'::timestamp
          AND '02/23/2006'::timestamp + '1 day - 1 minute';

                                   QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=978.19..37161.81 rows=133 width=4) (actual time=2511.676..2511.676 rows=0 loops=1)
   ->  Merge Join  (cost=978.19..22854.00 rows=4244 width=4) (actual time=1718.420..2510.128 rows=92 loops=1)
         Merge Cond: ("outer".gid = "inner".gid)
         ->  Index Scan using events_join_gid_idx on events_join ej  (cost=0.00..23452.59 rows=740598 width=8) (actual time=0.014..1532.447 rows=626651 loops=1)
               Filter: (type_id = 1)
         ->  Sort  (cost=978.19..978.47 rows=113 width=8) (actual time=2.371..2.540 rows=101 loops=1)
               Sort Key: g.gid
               ->  Nested Loop  (cost=0.00..974.33 rows=113 width=8) (actual time=0.078..2.305 rows=19 loops=1)
                     ->  Index Scan using groups_join_uid_idx on groups_join gj  (cost=0.00..182.65 rows=159 width=4) (actual time=0.017..0.485 rows=196 loops=1)
                           Index Cond: (uid = 3)
                     ->  Index Scan using groups_pkey on groups g  (cost=0.00..4.97 rows=1 width=4) (actual time=0.006..0.006 rows=0 loops=196)
                           Index Cond: ("outer".gid = g.gid)
                           Filter: ((NOT deleted) AND (deactivated <> true))
   ->  Index Scan using events_pkey on events ev  (cost=0.00..3.36 rows=1 width=4) (actual time=0.013..0.013 rows=0 loops=92)
         Index Cond: (ev.eid = "outer".eid)
         Filter: ((status > 0) AND (type_id >= 0) AND (timestart >= '2006-01-23 00:00:00'::timestamp without time zone) AND (timestart <= '2006-02-23 23:59:00'::timestamp without time zone))
 Total runtime: 2511.920 ms
(17 rows)

Regards,
Dan




Re: Inconsistent query plan

From: Alessandro Baretta
Date:
Daniel Gish wrote:
> Hi,
> Thanks for your response.  The actual query is below; the joins are only 4
> deep.  Adjusting the stats target did help, but not dramatically.
>
>               QUERY PLAN
> ----------------------------------------------------------------------------
>  Nested Loop  (cost=978.19..37161.81 rows=133 width=4) (actual time=2511.676..2511.676 rows=0 loops=1)
>    ->  Merge Join  (cost=978.19..22854.00 rows=4244 width=4) (actual time=1718.420..2510.128 rows=92 loops=1)
> ...
>               ->  Nested Loop  (cost=0.00..974.33 rows=113 width=8) (actual time=0.078..2.305 rows=19 loops=1)

I had a similar problem recently.  An important diagnostic tool for these
issues is the pg_stats view.  Let me suggest that you post the relevant lines
from pg_stats, so that with some help you can discover what statistics lead
the query planner to overestimate the cardinality of some joins and
underestimate that of others.
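
For example, something like this (a sketch; the table and column names
come from the query posted earlier in the thread):

# SELECT tablename, attname, null_frac, n_distinct, most_common_vals
  FROM pg_stats
  WHERE tablename IN ('groups_join', 'events_join', 'groups', 'events')
    AND attname IN ('gid', 'uid', 'eid');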


Alex


--
*********************************************************************
http://www.barettadeit.com/
Baretta DE&IT
A division of Baretta SRL

tel. +39 02 370 111 55
fax. +39 02 370 111 54

Our technology:

The Application System/Xcaml (AS/Xcaml)
<http://www.asxcaml.org/>

The FreerP Project
<http://www.freerp.org/>

DB response during DB dump

From: Evgeny Gridasov
Date:
Hi, everybody!

I'm having problems backing up one of my PostgreSQL 8.1.2 installations.
The problem is that when I run a DB backup, all queries begin to run very slowly =(
The database keeps growing (~20 GB today), and the number of transactions increases every month.
A year ago such a slowdown was OK, but today it is unacceptable.

I found out that pg_dump dramatically increases disk I/O, and because of this
most queries begin to run slower.  My application using this DB server is
time-critical, so any kind of slowdown is critical.

I've written a perl script to limit pg_dump's output bandwidth, a simple traffic shaper,
which runs as: pg_dumpall -c -U postgres | limit_bandwidth.pl | bzip2 > pgsql_dump.bz2
The limit_bandwidth.pl script limits the pipe output to a 4 MB/sec rate, which seems to be OK.

Is there any other solution to avoid this problem?


--
Evgeny Gridasov
Software Engineer
I-Free, Russia

Re: DB response during DB dump

From: Richard Huxton
Date:
Evgeny Gridasov wrote:
> Hi, everybody!
>
> I'm having problems backing up one of my PostgreSQL 8.1.2 installations.
> The problem is that when I run a DB backup, all queries begin to run very slowly =(
> The database keeps growing (~20 GB today), and the number of transactions increases every month.
> A year ago such a slowdown was OK, but today it is unacceptable.
>
> I found out that pg_dump dramatically increases disk I/O, and because of this
> most queries begin to run slower.  My application using this DB server is
> time-critical, so any kind of slowdown is critical.
>
> I've written a perl script to limit pg_dump's output bandwidth, a simple traffic shaper,
> which runs as: pg_dumpall -c -U postgres | limit_bandwidth.pl | bzip2 > pgsql_dump.bz2
> The limit_bandwidth.pl script limits the pipe output to a 4 MB/sec rate, which seems to be OK.
>
> Is there any other solution to avoid this problem?

That's an interesting solution, and I'd guess people might like to see
it posted to the list if it's not too big.

Also, there's no reason you have to dump from the same machine; you can
do so over the network, which should reduce activity a little bit.
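
For example (the hostname and database name here are made up):

pg_dump -h dbhost -U postgres yourdb | bzip2 > pgsql_dump.bz2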

Basically though, it sounds like you either need more disk I/O or a
different approach.

Have you looked into using PITR log-shipping or replication (e.g. slony)
to have an off-machine backup?

--
   Richard Huxton
   Archonet Ltd

Re: DB response during DB dump

From: Tom Lane
Date:
Richard Huxton <dev@archonet.com> writes:
> Evgeny Gridasov wrote:
>> I've written a perl script to limit pg_dump output bandwidth,
>> ...
>> Is there any other solution to avoid this problem?

> That's an interesting solution, and I'd guess people might like to see
> it posted to the list if it's not too big.

Years ago there was some experimentation with dump-rate throttling logic
inside pg_dump itself --- there's still a comment about it in pg_dump.c.
The experiment didn't seem very successful, which is why it never got to
be a permanent feature.  I'm curious to know why this perl script is
doing a better job than we were able to do inside pg_dump.

            regards, tom lane

Re: DB response during DB dump

From: Evgeny Gridasov
Date:
All I was trying to achieve is to limit the I/O rate generated by pg_dump.
The script is a very simple pipe rate limiter and nothing more:
it reads its input, but writes it out at no more than the specified rate.

I guess it helps because even if pg_dump can produce data at 20 MB/sec,
the script won't read it at a rate higher than its own output rate.  The pipe
buffer is not infinite, so pg_dump's output rate, and hence its hard disk reads,
settle to almost exactly the rate of my perl script.
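
In outline the script does something like this (a simplified sketch of the
approach, not the actual code):

#!/usr/bin/perl
# Copy stdin to stdout at no more than $limit bytes/sec.
# Simplified sketch only -- not the real limit_bandwidth.pl.
use strict;
use warnings;
use Time::HiRes qw(time sleep);

my $limit = 4 * 1024 * 1024;   # 4 MB/sec
my $chunk = 64 * 1024;         # read in 64 kB blocks
binmode(STDIN);
binmode(STDOUT);

my $start = time();
my $sent  = 0;
while (my $n = read(STDIN, my $buf, $chunk)) {
    print STDOUT $buf;
    $sent += $n;
    # If we are ahead of schedule, sleep off the difference.
    my $ahead = $sent / $limit - (time() - $start);
    sleep($ahead) if $ahead > 0;
}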

On Wed, 25 Jan 2006 11:21:58 -0500
Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Years ago there was some experimentation with dump-rate throttling logic
> inside pg_dump itself --- there's still a comment about it in pg_dump.c.
> The experiment didn't seem very successful, which is why it never got to
> be a permanent feature.  I'm curious to know why this perl script is
> doing a better job than we were able to do inside pg_dump.

--
Evgeny Gridasov
Software Engineer
I-Free, Russia

Re: DB response during DB dump

From: Evgeny Gridasov
Date:
Ok, it's VERY simple =) here:
http://deepcore.i-free.ru/simple_shaper.pl

I could dump it to a spare machine, but I don't have one.
The current DB server is 2x Xeon / 4 GB RAM / RAID 10 (4 SCSI HDDs).  Performance is excellent, except during backups.

I wanted to set up some kind of replication, but it's useless - I don't have a spare machine now; maybe in the future...


On Wed, 25 Jan 2006 12:44:45 +0000
Richard Huxton <dev@archonet.com> wrote:

>
> That's an interesting solution, and I'd guess people might like to see
> it posted to the list if it's not too big.
>
> Also, there's no reason you have to dump from the same machine; you can
> do so over the network, which should reduce activity a little bit.
>
> Basically though, it sounds like you either need more disk I/O or a
> different approach.
>
> Have you looked into using PITR log-shipping or replication (e.g. slony)
> to have an off-machine backup?

--
Evgeny Gridasov
Software Engineer
I-Free, Russia