Discussion: Odd behaviour -- Index scan vs. seq. scan


Odd behaviour -- Index scan vs. seq. scan

From: Carlos Moreno
Date:
I can't find a reasonable explanation for this.

I have a table game, with primary key gameid (an int).

If I use a where involving gameid and <, or >, or <=,
or >=, then I get a sequential scan.  If I use =, then
of course I get an Index scan.

More surprising to me is the fact that using BETWEEN,
I get an Index scan!!

The following is a copy-n-paste of the EXPLAIN outputs
(with the useless lines removed):

     explain delete from game where gameid = 1000;
     Index Scan using game_pkey on game  (cost=0.00..3.14 rows=1 width=6)

     explain delete from game where gameid < 1000;
     Seq Scan on game  (cost=0.00..4779.50 rows=200420 width=6)

     explain delete from game where gameid between 1000 and 2000;
     Index Scan using game_pkey on game  (cost=0.00..3.15 rows=1 width=6)


How's that possible?  Is it purposely done like this, or
is it a bug?  (BTW, Postgres version is 7.2.3)

Thanks,

Carlos
--


Re: Odd behaviour -- Index scan vs. seq. scan

From: Adam Kavan
Date:
>
>     explain delete from game where gameid = 1000;
>     Index Scan using game_pkey on game  (cost=0.00..3.14 rows=1 width=6)
>
>     explain delete from game where gameid < 1000;
>     Seq Scan on game  (cost=0.00..4779.50 rows=200420 width=6)
>
>     explain delete from game where gameid between 1000 and 2000;
>     Index Scan using game_pkey on game  (cost=0.00..3.15 rows=1 width=6)
>
>
>How's that possible?  Is it purposely done like this, or
>is it a bug?  (BTW, Postgres version is 7.2.3)


Postgres thinks that for the = line there will only be 1 row, so it uses an
index scan.  Same thing for the BETWEEN.  However, it thinks that there are
200420 rows below 1000 and decides a seq scan would be faster.  You can run
EXPLAIN ANALYZE to see if its guesses are correct.  You can also try SET
enable_seqscan = FALSE; to see if it is faster doing an index scan.  If it
is faster to do an index scan, edit your postgresql.conf file and lower
random_page_cost, etc.
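Put together, the checks above might look like the following psql session (a sketch; wrapping the EXPLAIN ANALYZE in a transaction is my addition, since EXPLAIN ANALYZE actually executes the DELETE):

```sql
-- Compare the planner's estimate with reality; ROLLBACK so no rows
-- are really deleted while experimenting:
BEGIN;
EXPLAIN ANALYZE DELETE FROM game WHERE gameid < 1000;
ROLLBACK;

-- Force the planner away from seq scans for this session only,
-- to time the index-scan alternative:
SET enable_seqscan = FALSE;
BEGIN;
EXPLAIN ANALYZE DELETE FROM game WHERE gameid < 1000;
ROLLBACK;
SET enable_seqscan = TRUE;

-- If the index scan wins, lower the planner's price for random I/O
-- in postgresql.conf (the default is 4.0; 2.0 here is just an example):
--   random_page_cost = 2.0
```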

--- Adam Kavan
--- akavan@cox.net



Re: Odd behaviour -- Index scan vs. seq. scan

From: "scott.marlowe"
Date:
On Mon, 15 Sep 2003, Adam Kavan wrote:

>
> >
> >     explain delete from game where gameid = 1000;
> >     Index Scan using game_pkey on game  (cost=0.00..3.14 rows=1 width=6)
> >
> >     explain delete from game where gameid < 1000;
> >     Seq Scan on game  (cost=0.00..4779.50 rows=200420 width=6)
> >
> >     explain delete from game where gameid between 1000 and 2000;
> >     Index Scan using game_pkey on game  (cost=0.00..3.15 rows=1 width=6)
> >
> >
> >How's that possible?  Is it purposely done like this, or
> >is it a bug?  (BTW, Postgres version is 7.2.3)
>
>
> Postgres thinks that for the = line there will only be 1 row, so it uses an
> index scan.  Same thing for the BETWEEN.  However, it thinks that there are
> 200420 rows below 1000 and decides a seq scan would be faster.  You can run
> EXPLAIN ANALYZE to see if its guesses are correct.  You can also try SET
> enable_seqscan = FALSE; to see if it is faster doing an index scan.  If it
> is faster to do an index scan, edit your postgresql.conf file and lower
> random_page_cost, etc.

Before you do that you might wanna issue this command:

alter table game alter column gameid set statistics 100;
analyze game;

and see what you get.
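After raising the statistics target and re-analyzing, one way to check whether the larger sample changed the planner's picture is to look at the standard pg_stats view (a sketch, assuming the column names as they appear in the system view):

```sql
-- Raise the per-column statistics target, then refresh the stats:
ALTER TABLE game ALTER COLUMN gameid SET STATISTICS 100;
ANALYZE game;

-- Inspect what the planner now believes about the column:
SELECT n_distinct, most_common_vals, histogram_bounds
FROM pg_stats
WHERE tablename = 'game' AND attname = 'gameid';
```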


Re: Odd behaviour -- Index scan vs. seq. scan

From: Christopher Browne
Date:
A long time ago, in a galaxy far, far away, moreno@mochima.com (Carlos Moreno) wrote:
> I can't find a reasonable explanation for this.
>
> I have a table game, with primary key gameid (an int).
>
> If I use a where involving gameid and <, or >, or <=,
> or >=, then I get a sequential scan.  If I use =, then
> of course I get an Index scan.
>
> More surprising to me is the fact that using BETWEEN,
> I get an Index scan!!
>
> The following is a copy-n-paste of the EXPLAIN outputs
> (with the useless lines removed):
>
>      explain delete from game where gameid = 1000;
>      Index Scan using game_pkey on game  (cost=0.00..3.14 rows=1 width=6)
>
>      explain delete from game where gameid < 1000;
>      Seq Scan on game  (cost=0.00..4779.50 rows=200420 width=6)
>
>      explain delete from game where gameid between 1000 and 2000;
>      Index Scan using game_pkey on game  (cost=0.00..3.15 rows=1 width=6)
>
>
> How's that possible?  Is it purposely done like this, or
> is it a bug?  (BTW, Postgres version is 7.2.3)

It would appear that the statistics may be a bit stale or skewed, such
that the second query is being estimated wrongly.

#1 and #3 are pretty clear...

 - In #1, it's using the index, correctly estimating that there are
   only a few rows with "gameid = 1000"

 - In #3, it's using the index, correctly estimating that there are
   few rows with gameid between 1000 and 2000.

It seems surprising that the optimizer is estimating that there are
200420 rows with gameid < 1000.

Is it possible that you did an ANALYZE a long while back, back when
you had an enormous number of rows with gameid < 1000?

Try running ANALYZE again on the table, and see if the estimates
change.
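To see whether the refreshed statistics fixed the estimate, you can compare the planner's guess against the real count (a sketch):

```sql
-- Refresh the statistics for the table:
ANALYZE game;

-- The "rows=" estimate in the plan should now track the actual count:
EXPLAIN DELETE FROM game WHERE gameid < 1000;
SELECT count(*) FROM game WHERE gameid < 1000;
```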
--
select 'aa454' || '@' || 'freenet.carleton.ca';
http://cbbrowne.com/info/advocacy.html
Why isn't phonetic spelled the way it sounds?