Thread: Query much slower when run from postgres function

Query much slower when run from postgres function

From:
Mario Splivalo
Date:
I have a function that looks like this:

CREATE OR REPLACE FUNCTION get_memo_display_queue_size(a_service_id integer)
   RETURNS integer AS
$BODY$
SELECT
    COUNT(*)::integer
FROM
    v_messages_memo
    LEFT JOIN messages_memo_displayed
        ON id = message_id
WHERE
    service_id = $1
    AND state = 1
    AND admin_id IS NULL;
$BODY$
   LANGUAGE 'sql' VOLATILE SECURITY DEFINER
   COST 100;

Now, when I run that function from psql, it takes around 200ms to complete:

pulitzer2=# explain analyze select get_memo_display_queue_size(1829);
                                        QUERY PLAN

----------------------------------------------------------------------------------------
  Result  (cost=0.00..0.26 rows=1 width=0) (actual time=219.728..219.730
rows=1 loops=1)
  Total runtime: 219.758 ms
(2 rows)

pulitzer2=#

And it takes around 200ms each time I run the function!


When I rewrite the query as a view, so I can see the query plan, I get this:

create view _v1 as
SELECT
    *
FROM
    v_messages_memo
    LEFT JOIN messages_memo_displayed
        ON id = message_id
WHERE
    state = 1
    AND admin_id IS NULL;

pulitzer2=# EXPLAIN ANALYZE select count(*) from _v1 WHERE service_id =
1829;

   QUERY PLAN


------------------------------------------------------------------------------------------------------------------------------------------------------------
  Aggregate  (cost=23506.14..23506.15 rows=1 width=0) (actual
time=6.001..6.002 rows=1 loops=1)
    ->  Nested Loop  (cost=150.69..23505.59 rows=216 width=0) (actual
time=5.744..5.971 rows=13 loops=1)
          ->  Hash Left Join  (cost=150.69..11035.16 rows=2104 width=4)
(actual time=5.721..5.793 rows=13 loops=1)
                Hash Cond: (messages.id =
messages_memo_displayed.message_id)
                Filter: (messages_memo_displayed.admin_id IS NULL)
                ->  Bitmap Heap Scan on messages  (cost=97.03..10955.11
rows=4209 width=4) (actual time=0.042..0.075 rows=13 loops=1)
                      Recheck Cond: (service_id = 1829)
                      ->  Bitmap Index Scan on
messages_uq__service_id__tan  (cost=0.00..95.98 rows=4209 width=0)
(actual time=0.032..0.032 rows=13 loops=1)
                            Index Cond: (service_id = 1829)
                ->  Hash  (cost=28.85..28.85 rows=1985 width=8) (actual
time=5.666..5.666 rows=1985 loops=1)
                      ->  Seq Scan on messages_memo_displayed
(cost=0.00..28.85 rows=1985 width=8) (actual time=0.009..2.697 rows=1985
loops=1)
          ->  Index Scan using messages_memo_pk on messages_memo
(cost=0.00..5.91 rows=1 width=4) (actual time=0.006..0.008 rows=1 loops=13)
                Index Cond: (messages_memo.message_id = messages.id)
                Filter: ((messages_memo.state)::integer = 1)
  Total runtime: 6.079 ms
(15 rows)


So I noticed that postgres is using a sequential scan on
messages_memo_displayed, although there is a PK (and an index) on
message_id in messages_memo_displayed (I'll post the DDL of the tables at
the end of this post).

So I tried EXPLAIN ANALYZE after forcing the planner not to use sequential
scans (SET enable_seqscan TO false):

pulitzer2=# EXPLAIN ANALYZE select count(*) from _v1 WHERE service_id =
1829;

               QUERY PLAN


-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Aggregate  (cost=25403.60..25403.61 rows=1 width=0) (actual
time=6.546..6.547 rows=1 loops=1)
    ->  Nested Loop  (cost=2048.16..25403.06 rows=216 width=0) (actual
time=6.287..6.512 rows=13 loops=1)
          ->  Hash Left Join  (cost=2048.16..12932.63 rows=2104 width=4)
(actual time=6.268..6.340 rows=13 loops=1)
                Hash Cond: (messages.id =
messages_memo_displayed.message_id)
                Filter: (messages_memo_displayed.admin_id IS NULL)
                ->  Bitmap Heap Scan on messages  (cost=97.03..10955.11
rows=4209 width=4) (actual time=0.043..0.078 rows=13 loops=1)
                      Recheck Cond: (service_id = 1829)
                      ->  Bitmap Index Scan on
messages_uq__service_id__tan  (cost=0.00..95.98 rows=4209 width=0)
(actual time=0.032..0.032 rows=13 loops=1)
                            Index Cond: (service_id = 1829)
                ->  Hash  (cost=1926.31..1926.31 rows=1985 width=8)
(actual time=6.211..6.211 rows=1985 loops=1)
                      ->  Index Scan using messages_memo_displayed_pk on
messages_memo_displayed  (cost=0.00..1926.31 rows=1985 width=8) (actual
time=0.069..3.221 rows=1985 loops=1)
          ->  Index Scan using messages_memo_pk on messages_memo
(cost=0.00..5.91 rows=1 width=4) (actual time=0.006..0.008 rows=1 loops=13)
                Index Cond: (messages_memo.message_id = messages.id)
                Filter: ((messages_memo.state)::integer = 1)
  Total runtime: 6.628 ms
(15 rows)


No sequential scan. So I 'changed' my function so that its first statement
is 'SET enable_seqscan TO false' (a sketch of the changed function follows
the timings below). After that, here are the times for the function call:

mike@som:~$ psql -U postgres pulitzer2
Welcome to psql 8.3.5, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
        \h for help with SQL commands
        \? for help with psql commands
        \g or terminate with semicolon to execute query
        \q to quit
pulitzer2=# explain analyze select get_memo_display_queue_size(1829);
                                        QUERY PLAN

----------------------------------------------------------------------------------------
  Result  (cost=0.00..0.26 rows=1 width=0) (actual time=300.448..300.450
rows=1 loops=1)
  Total runtime: 300.491 ms
(2 rows)

pulitzer2=# explain analyze select get_memo_display_queue_size(1829);
                                      QUERY PLAN

------------------------------------------------------------------------------------
  Result  (cost=0.00..0.26 rows=1 width=0) (actual time=1.940..1.941
rows=1 loops=1)
  Total runtime: 1.961 ms
(2 rows)

pulitzer2=# explain analyze select get_memo_display_queue_size(1829);
                                      QUERY PLAN

------------------------------------------------------------------------------------
  Result  (cost=0.00..0.26 rows=1 width=0) (actual time=1.946..1.947
rows=1 loops=1)
  Total runtime: 1.973 ms
(2 rows)

pulitzer2=# explain analyze select get_memo_display_queue_size(1829);
                                      QUERY PLAN

------------------------------------------------------------------------------------
  Result  (cost=0.00..0.26 rows=1 width=0) (actual time=1.936..1.937
rows=1 loops=1)
  Total runtime: 1.964 ms
(2 rows)

pulitzer2=#

So, the first call on a given connection takes 300 ms, and any subsequent
call on the same connection takes less than 2 ms. If I remove 'SET
enable_seqscan TO false' from the top of the function, every call to the
function takes around 200-300 ms.
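
For reference, the 'changed' function looks roughly like this (a sketch
reconstructed from the description above; only the added SET statement
differs from the original definition):

CREATE OR REPLACE FUNCTION get_memo_display_queue_size(a_service_id integer)
   RETURNS integer AS
$BODY$
SET enable_seqscan TO false;  -- the added first statement
SELECT
    COUNT(*)::integer
FROM
    v_messages_memo
    LEFT JOIN messages_memo_displayed
        ON id = message_id
WHERE
    service_id = $1
    AND state = 1
    AND admin_id IS NULL;
$BODY$
   LANGUAGE 'sql' VOLATILE SECURITY DEFINER
   COST 100;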


Now, as was explained to me on the pgsql-jdbc mailing list, that 'SET
enable_seqscan TO false' affects all queries on that persistent connection
from Tomcat, so it's not a good solution. So I wanted to post here to ask
what other options I have.

While writing this I realized that, even without disabling sequential
scans, I get much quicker execution times when I do:

SELECT count(*) FROM _v1 WHERE service_id = 1829

than when I do

SELECT get_memo_display_queue_size(1829),

as seen here:



mike@som:~$ psql -U postgres pulitzer2
Welcome to psql 8.3.5, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
        \h for help with SQL commands
        \? for help with psql commands
        \g or terminate with semicolon to execute query
        \q to quit

pulitzer2=# explain analyze select get_memo_display_queue_size(1829);
                                        QUERY PLAN

----------------------------------------------------------------------------------------
  Result  (cost=0.00..0.26 rows=1 width=0) (actual time=259.090..259.092
rows=1 loops=1)
  Total runtime: 259.132 ms
(2 rows)

pulitzer2=# EXPLAIN ANALYZE select count(*) from _v1 WHERE service_id =
1829;

   QUERY PLAN


------------------------------------------------------------------------------------------------------------------------------------------------------------
  Aggregate  (cost=23517.98..23517.99 rows=1 width=0) (actual
time=5.942..5.943 rows=1 loops=1)
    ->  Nested Loop  (cost=150.70..23517.44 rows=216 width=0) (actual
time=5.674..5.909 rows=13 loops=1)
          ->  Hash Left Join  (cost=150.70..11037.87 rows=2105 width=4)
(actual time=5.633..5.706 rows=13 loops=1)
                Hash Cond: (messages.id =
messages_memo_displayed.message_id)
                Filter: (messages_memo_displayed.admin_id IS NULL)
                ->  Bitmap Heap Scan on messages  (cost=97.04..10957.81
rows=4210 width=4) (actual time=0.032..0.063 rows=13 loops=1)
                      Recheck Cond: (service_id = 1829)
                      ->  Bitmap Index Scan on
messages_uq__service_id__tan  (cost=0.00..95.98 rows=4210 width=0)
(actual time=0.022..0.022 rows=13 loops=1)
                            Index Cond: (service_id = 1829)
                ->  Hash  (cost=28.85..28.85 rows=1985 width=8) (actual
time=5.588..5.588 rows=1985 loops=1)
                      ->  Seq Scan on messages_memo_displayed
(cost=0.00..28.85 rows=1985 width=8) (actual time=0.009..2.690 rows=1985
loops=1)
          ->  Index Scan using messages_memo_pk on messages_memo
(cost=0.00..5.92 rows=1 width=4) (actual time=0.008..0.010 rows=1 loops=13)
                Index Cond: (messages_memo.message_id = messages.id)
                Filter: ((messages_memo.state)::integer = 1)
  Total runtime: 6.026 ms
(15 rows)

pulitzer2=# explain analyze select get_memo_display_queue_size(1829);
                                        QUERY PLAN

----------------------------------------------------------------------------------------
  Result  (cost=0.00..0.26 rows=1 width=0) (actual time=211.712..211.714
rows=1 loops=1)
  Total runtime: 211.742 ms
(2 rows)

pulitzer2=# EXPLAIN ANALYZE select count(*) from _v1 WHERE service_id =
1829;

   QUERY PLAN


------------------------------------------------------------------------------------------------------------------------------------------------------------
  Aggregate  (cost=23517.98..23517.99 rows=1 width=0) (actual
time=5.918..5.920 rows=1 loops=1)
    ->  Nested Loop  (cost=150.70..23517.44 rows=216 width=0) (actual
time=5.659..5.885 rows=13 loops=1)
          ->  Hash Left Join  (cost=150.70..11037.87 rows=2105 width=4)
(actual time=5.638..5.711 rows=13 loops=1)
                Hash Cond: (messages.id =
messages_memo_displayed.message_id)
                Filter: (messages_memo_displayed.admin_id IS NULL)
                ->  Bitmap Heap Scan on messages  (cost=97.04..10957.81
rows=4210 width=4) (actual time=0.043..0.078 rows=13 loops=1)
                      Recheck Cond: (service_id = 1829)
                      ->  Bitmap Index Scan on
messages_uq__service_id__tan  (cost=0.00..95.98 rows=4210 width=0)
(actual time=0.033..0.033 rows=13 loops=1)
                            Index Cond: (service_id = 1829)
                ->  Hash  (cost=28.85..28.85 rows=1985 width=8) (actual
time=5.581..5.581 rows=1985 loops=1)
                      ->  Seq Scan on messages_memo_displayed
(cost=0.00..28.85 rows=1985 width=8) (actual time=0.009..2.678 rows=1985
loops=1)
          ->  Index Scan using messages_memo_pk on messages_memo
(cost=0.00..5.92 rows=1 width=4) (actual time=0.006..0.008 rows=1 loops=13)
                Index Cond: (messages_memo.message_id = messages.id)
                Filter: ((messages_memo.state)::integer = 1)
  Total runtime: 5.994 ms
(15 rows)

pulitzer2=#



Now I'm confused: why is the 'sql' function much slower than a 'direct' SELECT?

    Mike

P.S. Here are the table definitions, from psql:

pulitzer2=# \d messages
                                                Table "public.messages"
        Column       |           Type           |
        Modifiers
--------------------+--------------------------+---------------------------------------------------------------------
  id                 | integer                  | not null default
nextval(('public.message_id_seq'::text)::regclass)
  from               | character varying(15)    | not null
  to                 | character varying(10)    | not null
  receiving_time     | timestamp with time zone | not null default now()
  raw_text           | character varying        | not null
  keyword            | character varying        |
  destination_id     | integer                  | not null
  vpn_id             | integer                  |
  service_id         | integer                  |
  status             | integer                  | not null default 2
  gateway_message_id | character varying        | not null
  prize_id           | integer                  |
  tan                | character varying        |
Indexes:
     "messages_pk" PRIMARY KEY, btree (id)
     "messages_uq__gateway_message_id" UNIQUE, btree (gateway_message_id)
     "messages_uq__service_id__tan" UNIQUE, btree (service_id, tan)
     "messages_ix_from" btree ("from")
     "messages_ix_receiving_time__service_id__status" btree
(receiving_time, service_id, status)
     "messages_ix_vpn_id" btree (vpn_id)
Foreign-key constraints:
     "messages_fk__destinations_id" FOREIGN KEY (destination_id)
REFERENCES destinations(id)
     "messages_fk__service_prizes_prize_id" FOREIGN KEY (prize_id)
REFERENCES service_prizes(prize_id)
     "messages_fk__services_id" FOREIGN KEY (service_id) REFERENCES
services(id)
     "messages_fk__vpns_id" FOREIGN KEY (vpn_id) REFERENCES vpns(id)


pulitzer2=# \d messages_memo
                  Table "public.messages_memo"
          Column         |           Type           | Modifiers
------------------------+--------------------------+-----------
  message_id             | integer                  | not null
  memo                   | character varying        |
  state                  | dom_messages_memo_state  | not null
  admin_id               | integer                  |
  admin_change_timestamp | timestamp with time zone |
Indexes:
     "messages_memo_pk" PRIMARY KEY, btree (message_id)
Foreign-key constraints:
     "messages_memo_fk__messages_id" FOREIGN KEY (message_id) REFERENCES
messages(id)

pulitzer2=# \d messages_memo_displayed
Table "public.messages_memo_displayed"
    Column   |  Type   | Modifiers
------------+---------+-----------
  message_id | integer | not null
  admin_id   | integer | not null
Indexes:
     "messages_memo_displayed_pk" PRIMARY KEY, btree (message_id, admin_id)
Foreign-key constraints:
     "messages_memo_displayed_fk__admins_id" FOREIGN KEY (admin_id)
REFERENCES admins(id)
     "messages_memo_displayed_fk__messages_id" FOREIGN KEY (message_id)
REFERENCES messages(id)

pulitzer2=# \d v_messages_memo
                View "public.v_messages_memo"
        Column       |           Type           | Modifiers
--------------------+--------------------------+-----------
  id                 | integer                  |
  from               | character varying(15)    |
  to                 | character varying(10)    |
  receiving_time     | timestamp with time zone |
  raw_text           | character varying        |
  keyword            | character varying        |
  destination_id     | integer                  |
  vpn_id             | integer                  |
  service_id         | integer                  |
  status             | integer                  |
  gateway_message_id | character varying        |
  prize_id           | integer                  |
  tan                | character varying        |
  memo               | character varying        |
  state              | dom_messages_memo_state  |
  displayed          | boolean                  |
View definition:
  SELECT v_messages_full.id, v_messages_full."from",
v_messages_full."to", v_messages_full.receiving_time,
v_messages_full.raw_text, v_messages_full.keyword,
v_messages_full.destination_id, v_messages_full.vpn_id,
v_messages_full.service_id, v_messages_full.status,
v_messages_full.gateway_message_id, v_messages_full.prize_id,
v_messages_full.tan, messages_memo.memo, messages_memo.state,
NULL::boolean AS displayed
    FROM messages v_messages_full
    JOIN messages_memo ON v_messages_full.id = messages_memo.message_id;

pulitzer2=#



Re: Query much slower when run from postgres function

From:
Tom Lane
Date:
Mario Splivalo <mario.splivalo@megafon.hr> writes:
> Now I'm confused: why is the 'sql' function much slower than a 'direct' SELECT?

Usually the reason for this is that the planner chooses a different plan
when it has knowledge of the particular value you are searching for than
when it does not.  I suppose 'service_id' has a very skewed distribution
and you are looking for an uncommon value?
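
A quick way to reproduce the difference is to compare a query that uses
the literal value against a parameterized one (a sketch, using the _v1
view and the service_id value from the original post; in 8.3 a named
prepared statement is planned without knowing the parameter value, much
like the function's query):

EXPLAIN SELECT count(*) FROM _v1 WHERE service_id = 1829;

PREPARE p(int) AS
    SELECT count(*) FROM _v1 WHERE service_id = $1;
EXPLAIN EXECUTE p(1829);
DEALLOCATE p;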

            regards, tom lane

Re: Query much slower when run from postgres function

From:
Guillaume Cottenceau
Date:
Tom Lane <tgl 'at' sss.pgh.pa.us> writes:

> Mario Splivalo <mario.splivalo@megafon.hr> writes:
>> Now I'm confused: why is the 'sql' function much slower than a 'direct' SELECT?
>
> Usually the reason for this is that the planner chooses a different plan
> when it has knowledge of the particular value you are searching for than
> when it does not.

Yes, and since Mario is coming from JDBC, I'll share my part on
this: I also noticed some very wrong plans in JDBC because of the
"optimization" in prepared statements that consists of planning once
for all runs, i.e. without any parameter values to help planning.

My understanding is that, in practice, it's difficult for the
planner to opt for an index (or not), because the selectivity can
be very different depending on the actual parameter value.

Normally, the planner "thinks" that planning is so costly that
it's better to plan once for all runs, but in practice, for our
use, this is very wrong (it may be very good for some uses, though
it would be interesting to know the actual share of such uses).

Until it's possible to specifically tell the JDBC driver (and/or
PG?) to not plan once for all runs (or is there something better
to think of?), or the whole thing would be more clever (off the
top of my head, PG could try to replan with the first actual
values - or first xx actual values - and if the plan is
different, then flag that prepared statement for replanning each
time if the overall time estimate is different enough), I've
opted to tell the JDBC driver to use the protocol version 2, as
prepared statements were not so much prepared back then (IIRC
parameter interpolation is performed in driver and the whole SQL
query is passed each time, parsed, and planned) using
protocolVersion=2 in the JDBC URL. So far it worked very well for
us.

--
Guillaume Cottenceau

Re: [JDBC] Query much slower when run from postgres function

From:
Guillaume Smet
Date:
On Mon, Mar 9, 2009 at 5:51 PM, Guillaume Cottenceau <gc@mnc.ch> wrote:
> Until it's possible to specifically tell the JDBC driver (and/or
> PG?) to not plan once for all runs (or is there something better
> to think of?), or the whole thing would be more clever (off the
> top of my head, PG could try to replan with the first actual
> values - or first xx actual values - and if the plan is
> different, then flag that prepared statement for replanning each
> time if the overall time estimate is different enough), I've
> opted to tell the JDBC driver to use the protocol version 2, as
> prepared statements were not so much prepared back then (IIRC
> parameter interpolation is performed in driver and the whole SQL
> query is passed each time, parsed, and planned) using
> protocolVersion=2 in the JDBC URL. So far it worked very well for
> us.

Unnamed prepared statements are planned after binding the values,
starting with 8.3, or more precisely starting with 8.3.2 as early 8.3
versions were partially broken on this behalf.

It's not always possible to use protocol version 2 as it's quite
limited (especially considering the exceptions returned).

--
Guillaume

Re: [JDBC] Query much slower when run from postgres function

From:
Tom Lane
Date:
Guillaume Smet <guillaume.smet@gmail.com> writes:
> Unnamed prepared statements are planned after binding the values,
> starting with 8.3, or more precisely starting with 8.3.2 as early 8.3
> versions were partially broken on this behalf.

No, 8.2 did it too (otherwise we wouldn't have considered 8.3.0 to be
broken...).  The thing I'm not too clear about is what "use of an
unnamed statement" translates to for a JDBC user.

            regards, tom lane

Re: [JDBC] Query much slower when run from postgres function

From:
Andreas Wenk
Date:
Tom Lane wrote:
> Guillaume Smet <guillaume.smet@gmail.com> writes:
>> Unnamed prepared statements are planned after binding the values,
>> starting with 8.3, or more precisely starting with 8.3.2 as early 8.3
>> versions were partially broken on this behalf.
>
> No, 8.2 did it too (otherwise we wouldn't have considered 8.3.0 to be
> broken...).  The thing I'm not too clear about is what "use of an
> unnamed statement" translates to for a JDBC user.
>
>             regards, tom lane
>
I followed another post on the PHP list. Andrew McMillan was talking
about his experiences with UDFs in Oracle and PG (--> look for subject:
Re: [PHP] pl/php for windows). He wrote that, with UDFs, the planner
sometimes uses strange and poorly performing plans. So far I had
understood that using UDFs is generally a good idea - both compared with
the work I have to do when I code the same thing e.g. in PHP, and because
of the better resulting performance with UDFs. So what is your experience
with using UDFs (plpgsql)? Is there something like "use it in this case
but not in that case"?

Your answers are very welcome ...

Cheers

Andy



Re: Query much slower when run from postgres function

From:
Mario Splivalo
Date:
Tom Lane wrote:
> Mario Splivalo <mario.splivalo@megafon.hr> writes:
>> Now I'm confused: why is the 'sql' function much slower than a 'direct' SELECT?
>
> Usually the reason for this is that the planner chooses a different plan
> when it has knowledge of the particular value you are searching for than
> when it does not.  I suppose 'service_id' has a very skewed distribution
> and you are looking for an uncommon value?

I don't think so. Here is the distribution for the messages_memo_displayed
table (joined with messages, just to show how many messages of each
service_id there are in messages_memo_displayed):

pulitzer2=# select service_id, count(*) from messages join
messages_memo_displayed on id = message_id group by service_id order by
service_id;
 service_id | count
------------+-------
        504 |     2
       1790 |  1922
       1814 |     1
       1816 |    57
       1818 |     3
(5 rows)

And the sizes of other tables involved:

pulitzer2=# select count(*) from messages_memo_displayed;
 count
-------
  1985
(1 row)

Time: 0.602 ms
pulitzer2=#

pulitzer2=# select count(*) from messages;
  count
---------
 1096388
(1 row)

Time: 345.267 ms
pulitzer2=# select count(*) from messages_memo;
 count
--------
 776238
(1 row)

Time: 133.942 ms
pulitzer2=#


As I've mentioned earlier, I have created a view for the sake of this
posting:

CREATE OR REPLACE VIEW _v1 AS
 SELECT messages.id, messages."from", messages."to",
messages.receiving_time, messages.raw_text, messages.keyword,
messages.destination_id, messages.vpn_id, messages.service_id,
messages.status, messages.gateway_message_id, messages.prize_id,
messages.tan, messages_memo.memo, messages_memo.state,
messages_memo.displayed, messages_memo_displayed.admin_id
   FROM messages
   JOIN messages_memo ON messages.id = messages_memo.message_id
   LEFT JOIN messages_memo_displayed ON messages.id =
messages_memo_displayed.message_id
  WHERE messages_memo.state::integer = 1 AND
messages_memo_displayed.admin_id IS NULL;

And then I created a function:

CREATE OR REPLACE FUNCTION
__new__get_memo_display_queue_size(a_service_id integer)
  RETURNS integer AS
$BODY$
SELECT
    COUNT(*)::int4
FROM
    _v1
WHERE
    service_id = $1
$BODY$
  LANGUAGE 'sql' VOLATILE SECURITY DEFINER;


Now, here are the differences:
pulitzer2=# select count(*) from _v1 where service_id = 504;
 count
-------
     0
(1 row)

Time: 6.101 ms
pulitzer2=# select __new__get_memo_display_queue_size(504);
 __new__get_memo_display_queue_size
------------------------------------
                                  0
(1 row)

Time: 322.555 ms
pulitzer2=# select count(*) from _v1 where service_id = 1790;
 count
-------
     1
(1 row)

Time: 25.203 ms
pulitzer2=# select __new__get_memo_display_queue_size(1790);
 __new__get_memo_display_queue_size
------------------------------------
                                  1
(1 row)

Time: 225.763 ms
pulitzer2=# select count(*) from _v1 where service_id = 1814;
 count
-------
     2
(1 row)

Time: 13.662 ms
pulitzer2=# select __new__get_memo_display_queue_size(1814);
 __new__get_memo_display_queue_size
------------------------------------
                                  2
(1 row)

Time: 215.251 ms
pulitzer2=# select count(*) from _v1 where service_id = 1816;
 count
-------
     1
(1 row)

Time: 10.111 ms
pulitzer2=# select __new__get_memo_display_queue_size(1816);
 __new__get_memo_display_queue_size
------------------------------------
                                  1
(1 row)

Time: 220.457 ms
pulitzer2=# select count(*) from _v1 where service_id = 1829;
 count
-------
    13
(1 row)

Time: 2.023 ms
pulitzer2=# select __new__get_memo_display_queue_size(1829);
 __new__get_memo_display_queue_size
------------------------------------
                                 13
(1 row)

Time: 221.956 ms
pulitzer2=#


Is this difference normal? I tend to have the interface between the
database and the application go through functions, and I'd like not to
include 'SELECT COUNT(*)...' in my Java code (at least, if I don't have
to! - esp. because I'm not the Java developer on the project).

Then, this is also interesting, I think! I'm telling the planner never to
use sequential scans:

pulitzer2=# set enable_seqscan to false;
SET
Time: 0.150 ms
pulitzer2=# select __new__get_memo_display_queue_size(1829);
 __new__get_memo_display_queue_size
------------------------------------
                                 13
(1 row)

Time: 2.412 ms
pulitzer2=# select count(*) from _v1 where service_id = 1829;
 count
-------
    13
(1 row)

Time: 2.092 ms
pulitzer2=# select __new__get_memo_display_queue_size(1816);
 __new__get_memo_display_queue_size
------------------------------------
                                  1
(1 row)

Time: 2.473 ms
pulitzer2=# select count(*) from _v1 where service_id = 1816;
 count
-------
     1
(1 row)

Time: 2.117 ms
pulitzer2=#


Now the execution times are almost the same.

So, why this difference in the first place, and what should I do to get
satisfactory results when calling a postgres function?
I could rewrite the function from plain SQL to plpgsql, add 'SET
enable_seqscan TO false' before getting the count and 'SET enable_seqscan
TO true' after getting the count, but as was explained to me on the
pgsql-jdbc mailing list, that is not the way to go.

And I still don't understand why I get excellent times when I force the
planner not to use sequential scans inside the function, yet when running
the query from plain SQL (SELECT COUNT(*) FROM _v1 WHERE ...), execution
time is always around 2-4 ms, regardless of the value of the
enable_seqscan parameter.

    Mike

Re: Query much slower when run from postgres function

From:
Mario Splivalo
Date:
Guillaume Cottenceau wrote:
>>> Now I'm confused: why is the 'sql' function much slower than a 'direct' SELECT?
>> Usually the reason for this is that the planner chooses a different plan
>> when it has knowledge of the particular value you are searching for than
>> when it does not.
>
> Yes, and since Mario is coming from JDBC, I'll share my part on
> this: I also noticed some very wrong plans in JDBC because of the
> "optimization" in prepared statements consisting of planning once
> for all runs, e.g. without any parameter values to help planning.
>

For what it's worth:

When I call the postgres function via JDBC, I get almost the same execution
time as when calling the function from psql.

When I run the SELECT COUNT(*)... WHERE... query from JDBC, I again get
almost the same execution time as when executing the query from psql.

The postgres function takes around 200 ms, and the SELECT query takes
around 2-4 ms.

    Mike

Re: Query much slower when run from postgres function

From:
Tom Lane
Date:
Mario Splivalo <mario.splivalo@megafon.hr> writes:
> Is this difference normal?

It's hard to tell, because you aren't comparing apples to apples.
Try a prepared statement, like

prepare foo(int) as
SELECT
    COUNT(*)::int4
FROM
    _v1
WHERE
    service_id = $1
;

execute foo(504);

which should produce results similar to the function.  You could
then use "explain analyze execute" to probe further into what's
happening.

            regards, tom lane

Re: [JDBC] Query much slower when run from postgres function

From:
Dave Cramer
Date:


On Mon, Mar 9, 2009 at 1:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Guillaume Smet <guillaume.smet@gmail.com> writes:
> Unnamed prepared statements are planned after binding the values,
> starting with 8.3, or more precisely starting with 8.3.2 as early 8.3
> versions were partially broken on this behalf.

No, 8.2 did it too (otherwise we wouldn't have considered 8.3.0 to be
broken...).  The thing I'm not too clear about is what "use of an
unnamed statement" translates to for a JDBC user.

Tom,

The driver will use unnamed statements for all statements until it sees the same statement N times where N is 5 I believe, after that it uses a named statement.

Dave

Re: [JDBC] Query much slower when run from postgres function

From:
James Mansion
Date:
>
> The driver will use unnamed statements for all statements until it
> sees the same statement N times where N is 5 I believe, after that it
> uses a named statement.
>
>
Shame there's no syntax for it to pass a table of the parameters to
the server, as planner hints, when it creates the named statement.

James



Re: [JDBC] Query much slower when run from postgres function

From:
Scott Carey
Date:

  1. And how do you do that from JDBC?  There is no standard concept of ‘unnamed’ prepared statements in most database APIs, and if there were the behavior would be db specific.  Telling PG to plan after binding should be more flexible than unnamed prepared statements — or at least more transparent to standard APIs.  E.g. SET plan_prepared_postbind=’true’.
  2. How do you use those on a granularity other than global from jdbc?    ( — I tried setting max_prepared_transactions to 0 but this didn’t seem to work either, and it would be global if it did).  Prepared statements are still great for things like selecting off a primary key, or especially inserts.  Controls at the connection or user level would be significantly more valuable than global ones.
  3. Is it possible to test them from psql? (documentation is weak, PREPARE requires a name, functions require names, etc .. C api has docs but that’s not of use for most).

I’d love to know if there were answers to the above that were workable.

In the end, we had to write our own client-side code to deal with SQL injection safely and avoid JDBC prepared statements to get acceptable performance in many cases (all cases involving partitioned tables, and a few others).  At least dollar-quotes are powerful and useful for dealing with this.  Since the most important benefits of prepared statements are code clarity and SQL injection protection, it's sad to see weakness in control/configuration over prepared statement behavior at the parse/plan level get in the way of using them for those benefits.



On 3/9/09 9:04 AM, "Guillaume Smet" <guillaume.smet@gmail.com> wrote:

On Mon, Mar 9, 2009 at 5:51 PM, Guillaume Cottenceau <gc@mnc.ch> wrote:
> Until it's possible to specifically tell the JDBC driver (and/or
> PG?) to not plan once for all runs (or is there something better
> to think of?), or the whole thing would be more clever (off the
> top of my head, PG could try to replan with the first actual
> values - or first xx actual values - and if the plan is
> different, then flag that prepared statement for replanning each
> time if the overall time estimate is different enough), I've
> opted to tell the JDBC driver to use the protocol version 2, as
> prepared statements were not so much prepared back then (IIRC
> parameter interpolation is performed in driver and the whole SQL
> query is passed each time, parsed, and planned) using
> protocolVersion=2 in the JDBC URL. So far it worked very well for
> us.

Unnamed prepared statements are planned after binding the values,
starting with 8.3, or more precisely starting with 8.3.2 as early 8.3
versions were partially broken on this behalf.

It's not always possible to use protocol version 2 as it's quite
limited (especially considering the exceptions returned).

--
Guillaume


Re: Query much slower when run from postgres function

From:
Mario Splivalo
Date:
Tom Lane wrote:
> Mario Splivalo <mario.splivalo@megafon.hr> writes:
>> Is this difference normal?
>
> It's hard to tell, because you aren't comparing apples to apples.
> Try a prepared statement, like
[...cut...]
> which should produce results similar to the function.  You could
> then use "explain analyze execute" to probe further into what's
> happening.

Huh, thnx! :) This got me even more confused:

pulitzer2=# prepare foo(int) as select count(*) from _v1 where
service_id = $1;
PREPARE
Time: 4.425 ms
pulitzer2=# execute foo(1816);
 count
-------
     1
(1 row)

Time: 248.301 ms
pulitzer2=# select __new__get_memo_display_queue_size(1816);
 __new__get_memo_display_queue_size
------------------------------------
                                  1
(1 row)

Time: 218.914 ms
pulitzer2=#

So, it is the same. When I do EXPLAIN ANALYZE EXECUTE I get a completely
different execution plan:

pulitzer2=# explain analyze execute foo(1816);

     QUERY PLAN


--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=40713.22..40713.23 rows=1 width=0) (actual
time=475.649..475.650 rows=1 loops=1)
   ->  Hash Join  (cost=21406.91..40711.65 rows=626 width=0) (actual
time=183.004..475.629 rows=1 loops=1)
         Hash Cond: (messages_memo.message_id = messages.id)
         ->  Seq Scan on messages_memo  (cost=0.00..18630.83 rows=106825
width=4) (actual time=0.083..324.607 rows=107608 loops=1)
               Filter: ((state)::integer = 1)
         ->  Hash  (cost=21326.61..21326.61 rows=6424 width=4) (actual
time=5.868..5.868 rows=5 loops=1)
               ->  Hash Left Join  (cost=341.64..21326.61 rows=6424
width=4) (actual time=5.650..5.855 rows=5 loops=1)
                     Hash Cond: (messages.id =
messages_memo_displayed.message_id)
                     Filter: (messages_memo_displayed.admin_id IS NULL)
                     ->  Bitmap Heap Scan on messages
(cost=287.98..21192.42 rows=12848 width=4) (actual time=0.049..0.169
rows=62 loops=1)
                           Recheck Cond: (service_id = $1)
                           ->  Bitmap Index Scan on
messages_uq__service_id__tan  (cost=0.00..284.77 rows=12848 width=0)
(actual time=0.038..0.038 rows=62 loops=1)
                                 Index Cond: (service_id = $1)
                     ->  Hash  (cost=28.85..28.85 rows=1985 width=8)
(actual time=5.564..5.564 rows=1985 loops=1)
                           ->  Seq Scan on messages_memo_displayed
(cost=0.00..28.85 rows=1985 width=8) (actual time=0.008..2.674 rows=1985
loops=1)
 Total runtime: 475.761 ms
(16 rows)

Time: 476.280 ms
pulitzer2=#


There is a sequential scan on messages_memo, a scan that doesn't show up
when I just do 'SELECT COUNT(*)...'.

When I do 'set enable_seqscan to false' before I do PREPARE, here is the
execution plan:

pulitzer2=# explain analyze execute foo(1816);

             QUERY PLAN


-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=55624.91..55624.92 rows=1 width=0) (actual
time=7.122..7.123 rows=1 loops=1)
   ->  Nested Loop  (cost=2239.11..55623.34 rows=626 width=0) (actual
time=7.098..7.108 rows=1 loops=1)
         ->  Hash Left Join  (cost=2239.11..23224.07 rows=6424 width=4)
(actual time=6.663..6.962 rows=5 loops=1)
               Hash Cond: (messages.id = messages_memo_displayed.message_id)
               Filter: (messages_memo_displayed.admin_id IS NULL)
               ->  Bitmap Heap Scan on messages  (cost=287.98..21192.42
rows=12848 width=4) (actual time=0.138..0.373 rows=62 loops=1)
                     Recheck Cond: (service_id = $1)
                     ->  Bitmap Index Scan on
messages_uq__service_id__tan  (cost=0.00..284.77 rows=12848 width=0)
(actual time=0.121..0.121 rows=62 loops=1)
                           Index Cond: (service_id = $1)
               ->  Hash  (cost=1926.31..1926.31 rows=1985 width=8)
(actual time=6.430..6.430 rows=1985 loops=1)
                     ->  Index Scan using messages_memo_displayed_pk on
messages_memo_displayed  (cost=0.00..1926.31 rows=1985 width=8) (actual
time=0.063..3.320 rows=1985 loops=1)
         ->  Index Scan using messages_memo_pk on messages_memo
(cost=0.00..5.03 rows=1 width=4) (actual time=0.025..0.025 rows=0 loops=5)
               Index Cond: (messages_memo.message_id = messages.id)
               Filter: ((messages_memo.state)::integer = 1)
 Total runtime: 7.260 ms
(15 rows)

Time: 7.786 ms

I have no idea why postgres chooses a sequential scan on messages_memo
when I PREPARE the select. For now I'll go with a plpgsql function
(instead of an sql function), because there I can set enable_seqscan to
false at the top and back to true just before RETURNing the value (a
sketch is below). That way, when I call the function via JDBC I get short
execution times.
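
Roughly, that plpgsql version could look like this (a sketch based on the
description; it uses the _v1 view and the __new__ function name defined
earlier in the thread, and note that if the SELECT fails the setting stays
off for the rest of the session):

CREATE OR REPLACE FUNCTION __new__get_memo_display_queue_size(a_service_id integer)
  RETURNS integer AS
$BODY$
DECLARE
    v_count integer;
BEGIN
    SET enable_seqscan TO false;   -- force the index-based plan
    SELECT COUNT(*)::integer INTO v_count
      FROM _v1
     WHERE service_id = a_service_id;
    SET enable_seqscan TO true;    -- put the default back before returning
    RETURN v_count;
END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE SECURITY DEFINER;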

    Mike



Re: Query much slower when run from postgres function

From:
Tom Lane
Date:
Mario Splivalo <mario.splivalo@megafon.hr> writes:
> So, it is the same. When I do EXPLAIN ANALYZE EXECUTE I get a completely
> different execution plan:
> ...
>                      ->  Bitmap Heap Scan on messages
> (cost=287.98..21192.42 rows=12848 width=4) (actual time=0.049..0.169
> rows=62 loops=1)
>                            Recheck Cond: (service_id = $1)
>                            ->  Bitmap Index Scan on
> messages_uq__service_id__tan  (cost=0.00..284.77 rows=12848 width=0)
> (actual time=0.038..0.038 rows=62 loops=1)
>                                  Index Cond: (service_id = $1)

Well, there's the problem: without knowing what the specific service_id
is, the planner is estimating 12848 matching rows, which is evidently
off by a couple of orders of magnitude.  And that's pushing it to adopt
a hash join instead of a nestloop.  Are you sure the stats on this
table are up to date?  Maybe you need to increase the stats target?
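
Concretely, refreshing the statistics and raising the per-column target
could look like this (a sketch; 1000 is just an illustrative target value):

ALTER TABLE messages ALTER COLUMN service_id SET STATISTICS 1000;
ANALYZE messages;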

            regards, tom lane

Re: Query much slower when run from postgres function

From:
fche@redhat.com (Frank Ch. Eigler)
Date:
Tom Lane <tgl@sss.pgh.pa.us> writes:

> Mario Splivalo <mario.splivalo@megafon.hr> writes:
>> Now I'm confused: why is the 'sql' function much slower than a 'direct' SELECT?
>
> Usually the reason for this is that the planner chooses a different plan
> when it has knowledge of the particular value you are searching for than
> when it does not.  I suppose 'service_id' has a very skewed distribution
> and you are looking for an uncommon value?

For a prepared statement, could the planner produce *several* plans,
if it guesses great sensitivity to the parameter values?  Then it
could choose amongst them at run time.

- FChE

Re: Query much slower when run from postgres function

From:
Tom Lane
Date:
fche@redhat.com (Frank Ch. Eigler) writes:
> For a prepared statement, could the planner produce *several* plans,
> if it guesses great sensitivity to the parameter values?  Then it
> could choose amongst them at run time.

We've discussed that in the past.  "Choose at runtime" is a bit more
easily said than done though --- you can't readily flip between plan
choices part way through, if you've already emitted some result rows.

            regards, tom lane

Re: Query much slower when run from postgres function

From:
decibel
Date:
On Mar 9, 2009, at 8:36 AM, Mario Splivalo wrote:
> Now, as was explained to me on the pgsql-jdbc mailing list, that 'SET
> enable_seqscan TO false' affects all queries on that persistent
> connection from Tomcat, so it's not a good solution. So I wanted to
> post here to ask what other options I have.


FWIW, you can avoid that with SET LOCAL (though it would still affect
the rest of the transaction).

You could also store whatever enable_seqscan was set to in a variable
before setting it to false and then set it back when you're done.
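
A sketch of the SET LOCAL approach, plus the per-function setting that 8.3
allows (the function and view names are the ones from earlier in the
thread):

-- SET LOCAL reverts automatically at the end of the transaction:
BEGIN;
SET LOCAL enable_seqscan TO false;
SELECT count(*) FROM _v1 WHERE service_id = 1829;
COMMIT;

-- Or attach the setting to the function itself, so it only applies while
-- the function runs:
ALTER FUNCTION get_memo_display_queue_size(integer) SET enable_seqscan = off;
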
--
Decibel!, aka Jim C. Nasby, Database Architect  decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828



Re: Query much slower when run from postgres function

From:
decibel
Date:
On Mar 10, 2009, at 12:20 PM, Tom Lane wrote:
> fche@redhat.com (Frank Ch. Eigler) writes:
>> For a prepared statement, could the planner produce *several* plans,
>> if it guesses great sensitivity to the parameter values?  Then it
>> could choose amongst them at run time.
>
> We've discussed that in the past.  "Choose at runtime" is a bit more
> easily said than done though --- you can't readily flip between plan
> choices part way through, if you've already emitted some result rows.

True, but what if we planned for both high and low cardinality cases,
assuming that pg_stats indicated both were a possibility? We would
have to store multiple plans for one prepared statement, which
wouldn't work well for more complex queries (if you did high and low
cardinality estimates for each table you'd end up with 2^r plans,
where r is the number of relations), so we'd need a way to cap it
somehow. Of course, whether that's easier than having the ability to
throw out a current result set and start over with a different plan
is up for debate...

On a related note, I wish there was a way to tell plpgsql not to pre-
plan a query. Sure, you can use EXECUTE, but building the query plan
is a serious pain in the rear.
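
A sketch of the EXECUTE workaround in 8.3 (the function name here is
hypothetical; the dynamic statement is planned at run time with the actual
value visible to the planner, at the cost of assembling the query string
yourself):

CREATE OR REPLACE FUNCTION get_queue_size_dynamic(a_service_id integer)
  RETURNS integer AS
$BODY$
DECLARE
    v_count integer;
BEGIN
    -- Concatenation is safe here because the parameter is an integer.
    EXECUTE 'SELECT count(*) FROM _v1 WHERE service_id = ' || a_service_id
       INTO v_count;
    RETURN v_count;
END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE SECURITY DEFINER;
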
--
Decibel!, aka Jim C. Nasby, Database Architect  decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828



Re: Query much slower when run from postgres function

From:
Віталій Тимчишин
Date:


2009/3/14 decibel <decibel@decibel.org>
On Mar 10, 2009, at 12:20 PM, Tom Lane wrote:
fche@redhat.com (Frank Ch. Eigler) writes:
For a prepared statement, could the planner produce *several* plans,
if it guesses great sensitivity to the parameter values?  Then it
could choose amongst them at run time.

We've discussed that in the past.  "Choose at runtime" is a bit more
easily said than done though --- you can't readily flip between plan
choices part way through, if you've already emitted some result rows.

True, but what if we planned for both high and low cardinality cases, assuming that pg_stats indicated both were a possibility? We would have to store multiple plans for one prepared statement, which wouldn't work well for more complex queries (if you did high and low cardinality estimates for each table you'd end up with 2^r plans, where r is the number of relations), so we'd need a way to cap it somehow. Of course, whether that's easier than having the ability to throw out a current result set and start over with a different plan is up for debate...

On a related note, I wish there was a way to tell plpgsql not to pre-plan a query. Sure, you can use EXECUTE, but building the query plan is a serious pain in the rear.

I'd say it would be great for PostgreSQL to replan each execution of a query automatically if the execution plan says it would take some factor (say, x100, configurable) more time to execute the query than to plan it. In this case it would not spend much time planning small queries, but would use the most efficient plan possible for long queries. And even if a query can't be run any better, it would spend only 1/factor more time (1% more time for a factor of 100).