Discussion: Excessive rows/tuples seriously degrading query performance

Excessive rows/tuples seriously degrading query performance

From: "Chadwick, Russell"
Date:
 
Hello everyone.
Can anyone explain why this table, which has never had more than a couple of rows in it, shows > 500k in the query planner even after running vacuum full?  It's terribly slow to return 2 rows of data.  The 2 rows in it are being updated a lot, but I couldn't find any explanation for this behavior.  Anything I could try besides dropping the db and recreating?
Thanks - Russ
 
toolshed=# explain analyze select * from stock_log_positions ;
                                                       QUERY PLAN                                                       
-------------------------------------------------------------------------------------------------------------------------
 Seq Scan on stock_log_positions  (cost=0.00..10907.77 rows=613577 width=22) (actual time=701.39..701.41 rows=2 loops=1)
 Total runtime: 701.54 msec
(2 rows)
 
toolshed=# vacuum full analyze verbose stock_log_positions;
INFO:  --Relation public.stock_log_positions--
INFO:  Pages 4773: Changed 1, reaped 767, Empty 0, New 0; Tup 613737: Vac 57620, Keep/VTL 613735/613713, UnUsed 20652, MinLen 52, MaxLen 52; Re-using: Free/Avail. Space 4322596/4322596; EndEmpty/Avail. Pages 0/4773.
        CPU 9.11s/13.68u sec elapsed 22.94 sec.
INFO:  Index idx_stock_log_positions_when_log_filename: Pages 9465; Tuples 613737: Deleted 57620.
        CPU 1.55s/1.27u sec elapsed 6.69 sec.
INFO:  Rel stock_log_positions: Pages: 4773 --> 4620; Tuple(s) moved: 59022.
        CPU 1.00s/4.45u sec elapsed 8.83 sec.
INFO:  Index idx_stock_log_positions_when_log_filename: Pages 9778; Tuples 613737: Deleted 2897.
        CPU 1.32s/0.44u sec elapsed 6.23 sec.
INFO:  Analyzing public.stock_log_positions
VACUUM
 
toolshed=# explain analyze select * from stock_log_positions ;
                                                       QUERY PLAN                                                       
-------------------------------------------------------------------------------------------------------------------------
 Seq Scan on stock_log_positions  (cost=0.00..10757.37 rows=613737 width=22) (actual time=789.21..789.24 rows=2 loops=1)
 Total runtime: 789.40 msec
(2 rows)
 
toolshed=# select * from stock_log_positions ;
  when_log  |   filename   | position
------------+--------------+----------
 2003-12-11 | ActiveTrader |        0
 2003-12-11 | Headlines    |        0
(2 rows)

Re: Excessive rows/tuples seriously degrading query performance

From: Hannu Krosing
Date:
Chadwick, Russell wrote on Sat, 13.12.2003 at 00:40:
>
> Hello everyone.
> Can anyone explain why this table, which has never had more than a
> couple of rows in it, shows > 500k in the query planner even after
> running vacuum full?  It's terribly slow to return 2 rows of data.
> The 2 rows in it are being updated a lot, but I couldn't find any
> explanation for this behavior.

It may be that there is an idle transaction somewhere that has locked a
lot of rows (i.e. all your updates have been running inside the same
transaction for hours or days).

try:
$ ps ax | grep post

on my Linux box this gives:

 1683 ?      S    0:00 /usr/bin/postmaster -p 5432
 1704 ?      S    0:00 postgres: stats buffer process
 1705 ?      S    0:00 postgres: stats collector process
 5520 ?      S    0:00 postgres: hu hannu [local] idle in transaction
 5524 pts/2  S    0:00 grep post

where backend 5520 seems to be the culprit.
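
You can also look from inside the database; as a rough sketch (this
assumes stats_command_string is turned on in postgresql.conf, otherwise
current_query stays empty):

-- list backends and what they are doing; look for ones whose
-- current_query shows "<IDLE> in transaction"
select procpid, usename, current_query
from pg_stat_activity;

A backend sitting "<IDLE> in transaction" for hours is holding an old
snapshot, which is exactly what keeps VACUUM from reclaiming dead row
versions.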

>  Anything I could try besides dropping the db and recreating?

Make sure that no other backend is connected to the db, and then run:
vacuum full; analyze;


Or, if something unidentifiable seems to be making your table unusable,
then just recreate that table:

begin;
create table stock_log_positions_tmp
    as select * from stock_log_positions;
drop table stock_log_positions;
alter table stock_log_positions_tmp
    rename to stock_log_positions;
-- if you have any constraints, indexes or foreign keys
-- then recreate them here as well
commit;

> Thanks - Russ
>
---------------
hannu


Re: Excessive rows/tuples seriously degrading query performance

From: Tom Lane
Date:
Hannu Krosing <hannu@tm.ee> writes:
> Chadwick, Russell wrote on Sat, 13.12.2003 at 00:40:
>> Can anyone explain why this table, which has never had more than a
>> couple of rows in it, shows > 500k in the query planner even after
>> running vacuum full?

> It may be that there is an idle transaction somewhere that has locked a
> lot of rows (i.e. all your updates have been running inside the same
> transaction for hours or days)

In fact an old open transaction is surely the issue, given that the
VACUUM report shows a huge number of "kept" tuples:

>> INFO:  Pages 4773: Changed 1, reaped 767, Empty 0, New 0; Tup 613737: Vac 57620, Keep/VTL 613735/613713, UnUsed 20652, MinLen 52, MaxLen 52; Re-using: Free/Avail. Space 4322596/4322596; EndEmpty/Avail. Pages 0/4773.
>>         CPU 9.11s/13.68u sec elapsed 22.94 sec.

"Keep" is the number of tuples that are committed dead but can't be
removed yet because there is some other open transaction that is old
enough that it should be able to see them if it looks.
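
The mechanism is easy to see in isolation on a throwaway table
(hypothetical name, purely for illustration):

-- every UPDATE writes a new row version and leaves the old one behind
create table keep_demo (id int, val int);
insert into keep_demo values (1, 0);
update keep_demo set val = val + 1;
update keep_demo set val = val + 1;
update keep_demo set val = val + 1;
-- with no old transactions open, the dead versions are counted as
-- removable ("Vac"); with an old open transaction they land in "Keep"
vacuum verbose keep_demo;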

Apparently the access pattern on this table is constant updates of the
two logical rows, leaving lots and lots of dead versions.  You need to
vacuum it more often to keep down the amount of deadwood, and you need
to avoid having very-long-running transactions open when you vacuum.
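
A plain (non-FULL) vacuum takes only a lightweight lock, so it can run
while the updates continue; something along these lines every few
minutes (from cron via psql, say) should keep the table small, and if
you are on 7.4 you could look at contrib/pg_autovacuum to schedule it
for you:

-- cheap enough to run every few minutes on a hot two-row table
vacuum analyze stock_log_positions;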

            regards, tom lane