Re: sub select performance due to seq scans

From H Hale
Subject Re: sub select performance due to seq scans
Date
Msg-id 20060731141427.87706.qmail@web88002.mail.re2.yahoo.com
In reply to Re: sub select performance due to seq scans  (Richard Huxton <dev@archonet.com>)
Responses Re: sub select performance due to seq scans
List pgsql-performance


Look at that second seq-scan (on flatommemberrelation) - it's looping
5844 times (once for each row in flatmfilesysentry). I'd expect PG to
materialise the seq-scan once and then join (unless I'm missing
something, the subselect just involves the one test against a constant).
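For illustration, a sketch of the IN-subselect written as an explicit join, using only the table and column names from your query. Note it is not strictly equivalent: the join form can return a row per matching member relation if dstobj repeats for a given srcobj, whereas IN deduplicates.

-- sketch: explicit-join form of the same filter
SELECT f.name
  FROM capsa.flatomfilesysentry f
  JOIN capsa.flatommemberrelation m ON m.dstobj = f.objectid
 WHERE m.srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409';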

I'm guessing something in your configuration is pushing your cost
estimates far away from reality. Could you try issuing a "set
enable_seqscan=off" and then running explain-analyse again. That will
show us alternatives.

Also, what performance-related configuration values have you changed?
Could you post them with a brief description of your hardware?

--
Richard Huxton
Archonet Ltd

The hardware is a 3 GHz Xeon P4 with 2 GB of memory and an 80 GB SATA drive.
kernel.shmmax = 128 MB
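For reference, shared_buffers = 8000 at 8 KB each is roughly 64 MB of shared memory, which fits under the 128 MB SHMMAX. If shared_buffers were raised much further, kernel.shmmax would need to grow with it, e.g. something like this in /etc/sysctl.conf (value purely illustrative):

kernel.shmmax = 268435456    # 256 MB in bytes; reload with "sysctl -p"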

The following config changes have been made from the defaults...

shared_buffers = 8000            # min 16 or max_connections*2, 8KB each
max_fsm_pages = 50000            # min max_fsm_relations*16, 6 bytes each
vacuum_cost_delay = 10            # 0-1000 milliseconds
stats_start_collector = on
stats_row_level = on
autovacuum = on                # enable autovacuum subprocess?
autovacuum_naptime = 20        # time between autovacuum runs, in secs
autovacuum_vacuum_threshold = 500    # min # of tuple updates before vacuum
autovacuum_analyze_threshold = 250    # min # of tuple updates before analyze

Here is the query plan...

capsa=# set enable_seqscan=off;
SET
Time: 0.478 ms
capsa=# explain analyze select name from capsa.flatomfilesysentry where objectid in ( select dstobj from capsa.flatommemberrelation where srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409');
                                                                              QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=873.32..1017581.78 rows=6476 width=14) (actual time=80.402..241.881 rows=6473 loops=1)
   ->  Unique  (cost=871.32..903.68 rows=3229 width=16) (actual time=80.315..113.282 rows=6473 loops=1)
         ->  Sort  (cost=871.32..887.50 rows=6473 width=16) (actual time=80.310..94.279 rows=6473 loops=1)
               Sort Key: flatommemberrelation.dstobj
               ->  Bitmap Heap Scan on flatommemberrelation  (cost=56.66..461.57 rows=6473 width=16) (actual time=2.613..14.229 rows=6473 loops=1)
                     Recheck Cond: (srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409'::capsa_sys.uuid)
                     ->  Bitmap Index Scan on capsa_flatommemberrelation_srcobj_idx  (cost=0.00..56.66 rows=6473 width=0) (actual time=2.344..2.344 rows=6473 loops=1)
                           Index Cond: (srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409'::capsa_sys.uuid)
   ->  Bitmap Heap Scan on flatomfilesysentry  (cost=2.00..274.38 rows=3238 width=30) (actual time=0.011..0.013 rows=1 loops=6473)
         Recheck Cond: (flatomfilesysentry.objectid = "outer".dstobj)
         ->  Bitmap Index Scan on flatomfilesysentry_pkey  (cost=0.00..2.00 rows=3238 width=0) (actual time=0.007..0.007 rows=1 loops=6473)
               Index Cond: (flatomfilesysentry.objectid = "outer".dstobj)
 Total runtime: 251.611 ms
(13 rows)

Time: 252.825 ms

I went back to the stock conf settings, did a VACUUM FULL ANALYZE, and still get the same results.
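For reference, the per-table form of that maintenance, if it ever needs to be limited to just the two hot tables, would be along the lines of:

VACUUM FULL ANALYZE capsa.flatommemberrelation;
VACUUM FULL ANALYZE capsa.flatomfilesysentry;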

Background...

We have spikes of activity where both tables get rows inserted and have many updates. During this time performance drops.
I have been experimenting with autovacuum settings, as vacuuming was helping, although query performance
did not return to normal until after the activity spike.
In this case (and I am not sure why yet) vacuuming made no difference.
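A cheap experiment for the next spike might be a plain ANALYZE of just those two tables between batches (much lighter than VACUUM FULL), so the planner's row estimates keep up with the churn. A sketch:

ANALYZE capsa.flatommemberrelation;
ANALYZE capsa.flatomfilesysentry;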





