Discussion: 'Interesting' prepared statement slowdown on large table join


'Interesting' prepared statement slowdown on large table join

From:
"Prodan, Andrei"
Date:
  Hello everyone,

  I have the following scenario:
There's a web service that updates some information in two tables
every 5 minutes.
  In order to do this it will issue a select on the tables, get some
data, think about it, and then update it if necessary.

  Sometimes - about once every two weeks, I think - it will start using
an extremely inefficient plan, looping over many rows from the large
table instead of getting the few rows from the small table and looping
over those.
  The difference in performance is devastating - from 18 ms to 10-20
seconds - and of course it drags everything down.
  The situation will usually not resolve itself, but it will resolve
after I run "ANALYZE party; ANALYZE big_table" about... 3-5 times.
Interesting.

  When the problem is occurring, it is completely reproducible using
local psql - so it's probably not a connector issue.
  I have tried reconnecting and re-preparing the statement to let it
choose a new plan after the 'first' ANALYZE, but that didn't help.
  I have tried increasing the ANALYZE statistics target on party_id
(the join column) to 300 on both tables, but it doesn't appear to help
(not even with the frequency of incidents).
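
For reference, the statistics bump was done along these lines (the same
SET STATISTICS statements appear in the DDL dump below):

ALTER TABLE party ALTER COLUMN party_id SET STATISTICS 300;
ALTER TABLE big_table ALTER COLUMN party_id SET STATISTICS 300;
ANALYZE party;
ANALYZE big_table;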


The select is as follows:
prepare ps(varchar,varchar,varchar) as select party.party_id from party,
big_table where external_id = $1 and party.party_id = big_table.party_id
and attr_name = $2 and attr_value = $3;
PREPARE
execute ps('13','GroupId','testshop');
party_id
----------
 659178

The query will always return exactly one row.

I hope this is enough information to start a discussion on how to avoid
this. The only reliable solution we've come up with so far is to split
the selects and do the join in Java (sketched below), but this seems
like a very unorthodox solution and could cause other trouble down the
road.
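
Roughly (a sketch of the approach, not our actual code):

select party_id from party where external_id = $1;
-- then, for each party_id returned:
select party_id from big_table
 where party_id = $1 and attr_name = $2 and attr_value = $3;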

Thank you in advance,
Andrei Prodan
Systems Administrator


testdb=# select count(1) from party where external_id='13';
count
-------
     4
(1 row)
testdb=# select count(1) from big_table where attr_name='GroupId';
  count
---------
 1025867
(1 row)

testdb=# select count(1) from big_table where attr_value='testshop';
 count
--------
 917704
(1 row)

Table party:
Rows: 1.8M
Table size: 163 MB
Indexes size: 465 MB

Table big_table:
- Frequently updated
Rows: 7.2M
Table size: 672 MB
Indexes size: 1731 MB

GOOD PLAN:
testdb=# explain analyze execute ps('13','GroupId','testshop');
                                                             QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..19.11 rows=1 width=7) (actual time=2.662..18.388 rows=1 loops=1)
   ->  Index Scan using partyext_id_idx on party  (cost=0.00..8.47 rows=1 width=7) (actual time=2.439..2.495 rows=4 loops=1)
         Index Cond: ((external_id)::text = ($1)::text)
   ->  Index Scan using pk_big_table on big_table  (cost=0.00..10.62 rows=1 width=7) (actual time=3.972..3.972 rows=0 loops=4)
         Index Cond: (((big_table.party_id)::text = (party.party_id)::text) AND ((big_table.attr_name)::text = ($2)::text))
         Filter: ((big_table.attr_value)::text = ($3)::text)
 Total runtime: 18.484 ms
(7 rows)

BAD PLAN:
testdb=# explain analyze execute ps('13','GroupId','testshop');
                                                             QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..56.83 rows=4 width=7) (actual time=355.569..9989.681 rows=1 loops=1)
   ->  Index Scan using attr_name_value on big_table  (cost=0.00..22.85 rows=4 width=7) (actual time=0.176..757.646 rows=914786 loops=1)
         Index Cond: (((attr_name)::text = ($2)::text) AND ((attr_value)::text = ($3)::text))
   ->  Index Scan using pk_party on party  (cost=0.00..8.48 rows=1 width=7) (actual time=0.010..0.010 rows=0 loops=914786)
         Index Cond: ((party.party_id)::text = (big_table.party_id)::text)
         Filter: ((party.external_id)::text = ($1)::text)
 Total runtime: 9989.749 ms
(7 rows)


              name               |                                current_setting
---------------------------------+--------------------------------------------------------------------------------
 version                         | PostgreSQL 8.4.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-48), 64-bit
 autovacuum_analyze_scale_factor | 0.05
 autovacuum_max_workers          | 9
 autovacuum_vacuum_scale_factor  | 0.1
 checkpoint_segments             | 30
 effective_cache_size            | 6GB
 effective_io_concurrency        | 6
 lc_collate                      | en_US.UTF-8
 lc_ctype                        | en_US.UTF-8
 listen_addresses                | *
 log_autovacuum_min_duration     | 1s
 log_checkpoints                 | on
 log_destination                 | stderr
 log_directory                   | /home.san/pg_log
 log_line_prefix                 | %r PID:%p - %t - Tx: %v %l -
 log_lock_waits                  | on
 log_min_duration_statement      | 1s
 logging_collector               | on
 maintenance_work_mem            | 512MB
 max_connections                 | 1000
 max_stack_depth                 | 2MB
 server_encoding                 | UTF8
 shared_buffers                  | 3GB
 TimeZone                        | Europe/Berlin
 vacuum_cost_delay               | 100ms
 vacuum_cost_limit               | 200
 vacuum_cost_page_dirty          | 40
 vacuum_cost_page_miss           | 20
 wal_buffers                     | 2MB
 work_mem                        | 8MB
(30 rows)

CREATE DATABASE testdb
  WITH OWNER = testuser
       ENCODING = 'UTF8'
       TABLESPACE = pg_default
       LC_COLLATE = 'en_US.UTF-8'
       LC_CTYPE = 'en_US.UTF-8'
       CONNECTION LIMIT = -1;

-- Table: party

-- DROP TABLE party;

CREATE TABLE party
(
  party_id character varying(255) NOT NULL,
  party_type_id character varying(20),
  external_id character varying(30),
  preferred_currency_uom_id character varying(20),
  description text,
  status_id character varying(20),
  created_date timestamp with time zone,
  created_by_user_login character varying(255),
  last_modified_date timestamp with time zone,
  last_modified_by_user_login character varying(255),
  last_updated_stamp timestamp with time zone,
  last_updated_tx_stamp timestamp with time zone,
  created_stamp timestamp with time zone,
  created_tx_stamp timestamp with time zone,
  CONSTRAINT pk_party PRIMARY KEY (party_id),
  CONSTRAINT party_cul FOREIGN KEY (created_by_user_login)
      REFERENCES user_login (user_login_id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT party_lmcul FOREIGN KEY (last_modified_by_user_login)
      REFERENCES user_login (user_login_id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT party_pref_crncy FOREIGN KEY (preferred_currency_uom_id)
      REFERENCES uom (uom_id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT party_pty_typ FOREIGN KEY (party_type_id)
      REFERENCES party_type (party_type_id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT party_statusitm FOREIGN KEY (status_id)
      REFERENCES status_item (status_id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE,
  autovacuum_vacuum_scale_factor=0.002,
  autovacuum_analyze_scale_factor=0.001
);
ALTER TABLE party OWNER TO postgres;
ALTER TABLE party ALTER COLUMN party_id SET STATISTICS 300;


-- Index: mn_party_description

-- DROP INDEX mn_party_description;

CREATE INDEX mn_party_description
  ON party
  USING btree
  (description);

-- Index: party_cul

-- DROP INDEX party_cul;

CREATE INDEX party_cul
  ON party
  USING btree
  (created_by_user_login);

-- Index: party_lmcul

-- DROP INDEX party_lmcul;

CREATE INDEX party_lmcul
  ON party
  USING btree
  (last_modified_by_user_login);

-- Index: party_pref_crncy

-- DROP INDEX party_pref_crncy;

CREATE INDEX party_pref_crncy
  ON party
  USING btree
  (preferred_currency_uom_id);

-- Index: party_pty_typ

-- DROP INDEX party_pty_typ;

CREATE INDEX party_pty_typ
  ON party
  USING btree
  (party_type_id);

-- Index: party_statusitm

-- DROP INDEX party_statusitm;

CREATE INDEX party_statusitm
  ON party
  USING btree
  (status_id);

-- Index: party_txcrts

-- DROP INDEX party_txcrts;

CREATE INDEX party_txcrts
  ON party
  USING btree
  (created_tx_stamp);

-- Index: party_txstmp

-- DROP INDEX party_txstmp;

CREATE INDEX party_txstmp
  ON party
  USING btree
  (last_updated_tx_stamp);

-- Index: partyext_id_idx

-- DROP INDEX partyext_id_idx;

CREATE INDEX partyext_id_idx
  ON party
  USING btree
  (external_id);

-- Index: upper_desc_idx

-- DROP INDEX upper_desc_idx;

CREATE INDEX upper_desc_idx
  ON party
  USING btree
  (upper(btrim(description)));


-- Table: big_table

-- DROP TABLE big_table;

CREATE TABLE big_table
(
  party_id character varying(255) NOT NULL,
  attr_name character varying(60) NOT NULL,
  attr_value character varying(255),
  last_updated_stamp timestamp with time zone,
  last_updated_tx_stamp timestamp with time zone,
  created_stamp timestamp with time zone,
  created_tx_stamp timestamp with time zone,
  CONSTRAINT pk_big_table PRIMARY KEY (party_id, attr_name),
  CONSTRAINT party_attr FOREIGN KEY (party_id)
      REFERENCES party (party_id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE,
  autovacuum_vacuum_scale_factor=0.002,
  autovacuum_analyze_scale_factor=0.001
);
ALTER TABLE big_table OWNER TO postgres;
ALTER TABLE big_table ALTER COLUMN party_id SET STATISTICS 300;


-- Index: attr_name_value

-- DROP INDEX attr_name_value;

CREATE INDEX attr_name_value
  ON big_table
  USING btree
  (attr_name, attr_value);

-- Index: party_attr

-- DROP INDEX party_attr;

CREATE INDEX party_attr
  ON big_table
  USING btree
  (party_id);

-- Index: prt_attrbt_txcrts

-- DROP INDEX prt_attrbt_txcrts;

CREATE INDEX prt_attrbt_txcrts
  ON big_table
  USING btree
  (created_tx_stamp);

-- Index: prt_attrbt_txstmp

-- DROP INDEX prt_attrbt_txstmp;

CREATE INDEX prt_attrbt_txstmp
  ON big_table
  USING btree
  (last_updated_tx_stamp);


Re: 'Interesting' prepared statement slowdown on large table join

From:
Shaun Thomas
Date:
On 05/11/2011 06:08 AM, Prodan, Andrei wrote:

> Index Scan using attr_name_value on big_table  (cost=0.00..22.85
> rows=4 width=7) (actual time=0.176..757.646 rows=914786 loops=1)

Holy inaccurate statistics, Batman!

Try increasing your statistics target for attr_name and attr_value in
your big table. I know you said you set it to 300 on party_id, but what
happened here is that the optimizer thought this particular name/value
combo in your big table would return fewer rows, and it was horribly,
horribly wrong.

You might think about bumping up your default_statistics_target anyway
to prevent problems like this in general. But definitely increase it on
those two columns and reanalyze. My guess is that your big_table is big
enough that each analyze gets a different random sample of the various
attr_name and attr_value combinations, so occasionally it'll get too few
and start badly skewing query plans.
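
Something along these lines - the target of 1000 is only an example;
the point is to go well above the current value and then reanalyze:

ALTER TABLE big_table ALTER COLUMN attr_name SET STATISTICS 1000;
ALTER TABLE big_table ALTER COLUMN attr_value SET STATISTICS 1000;
ANALYZE big_table;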

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
sthomas@peak6.com


Re: 'Interesting' prepared statement slowdown on large table join

From:
Tom Lane
Date:
Shaun Thomas <sthomas@peak6.com> writes:
> On 05/11/2011 06:08 AM, Prodan, Andrei wrote:
>> Index Scan using attr_name_value on big_table  (cost=0.00..22.85
>> rows=4 width=7) (actual time=0.176..757.646 rows=914786 loops=1)

> Holy inaccurate statistics, Batman!

> Try increasing your statistics target for attr_name and attr_value in
> your big table.

Actually, the big problem here is probably not lack of statistics, but
the insistence on using a parameterized prepared plan in the first
place.  If you're going to be doing queries where the number of selected
rows varies that much, using a generic parameterized plan is just a
recipe for shooting yourself in the foot.  The planner cannot know what
the actual search values will be, and thus has no way of adapting the
plan based on how common those search values are.  Having more stats
won't help in that situation.

Forget the prepared plan and just issue the query the old-fashioned way.
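
That is, send it with the literal values inlined, so the planner can
look at the actual stats for those values:

select party.party_id
  from party, big_table
 where external_id = '13'
   and party.party_id = big_table.party_id
   and attr_name = 'GroupId'
   and attr_value = 'testshop';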

I do suspect that the reason the plan is flipping back and forth is
instability of the collected statistics, which might be improved by
increasing the stats target, or then again maybe not.  But that's really
rather irrelevant.

            regards, tom lane

Re: 'Interesting' prepared statement slowdown on large table join

From:
Jeff Janes
Date:
On Wed, May 11, 2011 at 4:08 AM, Prodan, Andrei
<Andrei.Prodan@awinta.com> wrote:
>
...
>
>
> The select is as follows:
> prepare ps(varchar,varchar,varchar) as select party.party_id from party,
> big_table where external_id = $1 and party.party_id = big_table.party_id
> and attr_name = $2 and attr_value = $3;
> PREPARE
> execute ps('13','GroupId','testshop');

>
> BAD PLAN:
> testdb=# explain analyze execute ps('13','GroupId','testshop');
>                                                                  QUERY
...
>   ->  Index Scan using attr_name_value on big_table  (cost=0.00..22.85
> rows=4 width=7) (actual time=0.176..757.646 rows=914786 loops=1)
>         Index Cond: (((attr_name)::text = ($2)::text) AND
> ((attr_value)::text = ($3)::text))

So it expects 4 rows and finds 914786 - essentially the whole table.
That is bad.  But what is it thinking during the GOOD PLAN state?

A possible way to get that information is to prepare a simpler
statement that omits the join to party, and explain analyze it with
the same params for attr_name and attr_value.  If that gives you a
full table scan rather than the index scan, you can "set
enable_seqscan=off" to try to force the index scan.
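
For example (ps2 is just a scratch name for the experiment):

prepare ps2(varchar,varchar) as
  select party_id from big_table
  where attr_name = $1 and attr_value = $2;
explain analyze execute ps2('GroupId','testshop');

-- if that comes out as a seq scan rather than the index scan,
-- re-prepare with seq scans disabled (8.4 plans at PREPARE time):
set enable_seqscan = off;
deallocate ps2;
prepare ps2(varchar,varchar) as
  select party_id from big_table
  where attr_name = $1 and attr_value = $2;
explain analyze execute ps2('GroupId','testshop');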

Cheers,

Jeff

Re: 'Interesting' prepared statement slowdown on large table join

From:
"Prodan, Andrei"
Date:
Thank you for all the leads.
I've increased stats to 1200 on everything obvious (external_id,
attr_name, attr_value, party_id) and ran ANALYZE, but it didn't help at
all - any other ideas of what else could be going wrong?

We'll disable preparation, but the thing is it works brilliantly 90% of
the time, and the other 10% should theoretically be fixable - because
it's almost certainly an edge case brought on by lack of maintenance on
something somewhere.
Is there any point in trying to rebuild the indexes involved, in case
Postgres has decided they're too bloated or something like that?
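(I.e., would something like

REINDEX TABLE party;
REINDEX TABLE big_table;

be worth a try, or is that unlikely to matter here?)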

@Shaun: I just finished trying to max out stats and sadly it doesn't
help. Thank you very much for trying anyway.

@Tom:
The planner doesn't flip between the plans by itself - it will switch to
the BAD plan at some point and never go back.
big_table does indeed have an extremely uneven distribution. But it
still usually plans correctly - and this apparently regardless of the
statistics target.

@Jeff: thank you for the clear plan interpretation - but I'm afraid I
don't really understand the second bit:
1) I provided the GOOD plan, so we already know what postgres thinks,
right? (Later edit: guess not. Doesn't work)
2) There's no full table scan in any of the plans - it scans indexes;
the problem seems to be that it scans them in the wrong order, because
it thinks there are very few WHERE matches in big_table - which is
incorrect, as for that particular pair there is a huge number of rows.

Thank you,
Andrei

Re: 'Interesting' prepared statement slowdown on large table join

From:
Jeff Janes
Date:
On Thu, May 12, 2011 at 8:53 AM, Prodan, Andrei
<Andrei.Prodan@awinta.com> wrote:
>
> @Jeff: thank you for the clear plan interpretation - but I'm afraid I
> don't really understand the second bit:
> 1) I provided the GOOD plan, so we already know what postgres thinks,
> right? (Later edit: guess not. Doesn't work)
> 2) There's no full table scan in any of the plans - it scans indexes;
> the problem seems to be that it scans them in the wrong order, because
> it thinks there are very few WHERE matches in big_table - which is
> incorrect, as for that particular pair there is a huge number of rows.

Hi Andrei,

"Explain analyze" only gives you the cost/rows for the plan components
it actually executed, it doesn't give you costs for alternative
rejected plans.  Since the GOOD PLAN doesn't include the index scan in
question, it doesn't give the estimated or actual rows for that scan
under the stats/conditions that provoke the GOOD PLAN to be adopted.
So to get that information, you have to design an experimental
prepared query that will get executed using that particular scan, that
way it will report the results I wanted to see.  My concern is that
the experimental query I proposed you use might instead decide to use
a full table scan rather than the desired index scan.   Although come
to think of it, I think the same code will be used to arrive at the
predicted number of rows regardless of whether it does a FTS or the
desired index scan.

Cheers,

Jeff