Discussion: Indexes?


Indexes?

From
Mike Leahy
Date:
Hello all,

I have a question related to the use of indexes.  One of my tables is part of
a census, with over 2.5 million records in it.  Clearly, just about any query
takes a rather long time to complete.  I was hoping to reduce the amount of
time by using indexes.  However, no matter what I do, PostgreSQL never seems
to use them.  It would seem to make sense that if I calculate something
grouped by a set of columns that have an index created on them, the index
should be used rather than a sequential scan, but it isn't.  I have a table
with an index such as the one created below:

CREATE INDEX tbl_censo_poblacion_1993_manzana_idx ON tbl_censo_poblacion_1993
( dubicacion, zona, manzana );

Then I try a query such as:

select count(*) as POBLACION, (dubicacion || zona || manzana) as COD_MANZANA
from tbl_censo_poblacion_1993 group by dubicacion, zona, manzana;

The results from EXPLAIN indicate that a sequential scan is used (as far as I
can tell).  I tried adding a where clause:

select count(*) as POBLACION, (dubicacion || zona || manzana) as COD_MANZANA
from tbl_censo_poblacion_1993 where dubicacion <> '' and zona <> '' and
manzana <> '' group by dubicacion, zona, manzana;

The EXPLAIN ANALYZE results appear as follows:

Aggregate  (cost=847587.90..879024.28 rows=251491 width=27) (actual time=272782.00..279458.00 rows=21459 loops=1)
   ->  Group  (cost=847587.90..872737.01 rows=2514911 width=27) (actual time=272782.00..278546.00 rows=2553015 loops=1)
         ->  Sort  (cost=847587.90..853875.18 rows=2514911 width=27) (actual time=272782.00..274533.00 rows=2553015 loops=1)
               Sort Key: dubicacion, zona, manzana
               ->  Seq Scan on tbl_censo_poblacion_1993  (cost=0.00..328346.76 rows=2514911 width=27) (actual time=0.00..189570.00 rows=2553015 loops=1)
                     Filter: ((dubicacion <> ''::character varying) AND (zona <> ''::character varying) AND (manzana <> ''::character varying))
 Total runtime: 279494.00 msec
(7 rows)


So...can anyone suggest to me what I could do?  I'm using PostgreSQL 7.3.4-2
in Cygwin on a WinXP platform.

As an aside, is there any way to increase the amount of memory allocated to
the postmaster.exe process?  It seems to me that if I could give it more than
the 4MB it has by default, that might improve performance.

Any help is appreciated.

Kind regards,
Mike





Re: Indexes?

From
Martijn van Oosterhout
Date:
In general, the <> operator can't use an index, since it typically matches
most of the rows, in which case you're better off scanning the table anyway.

If you really run that query often, perhaps the answer is a partial index.
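
Something along these lines, perhaps (just a sketch - the index name is made
up, and the WHERE clause simply mirrors the filter in your query; whether it
pays off depends on how selective that condition actually is):

CREATE INDEX tbl_censo_poblacion_1993_nonblank_idx
    ON tbl_censo_poblacion_1993 (dubicacion, zona, manzana)
    WHERE dubicacion <> '' AND zona <> '' AND manzana <> '';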

On Tue, Oct 14, 2003 at 09:57:05PM -0400, Mike Leahy wrote:
> Hello all,
>
> I have a question related to the use of indexes.  One of my tables is part of
> a census, with over 2.5 million records in it.  Clearly, just about any query
> takes a rather long time to complete.  I was hoping to reduce the amount of
> time by using indexes.  However, no matter what I do, PostgreSQL never seems
> to use them.  It would seem to make sense that if I calculate something
> grouped by a set of variables that have an index created for them, that the
> index should be used rather than a sequential search, but it doesn't.  I have
> a table with an index such as the one created below:
>
> CREATE INDEX tbl_censo_poblacion_1993_manzana_idx ON tbl_censo_poblacion_1993
> ( dubicacion, zona, manzana );
>
> Then I try a query such as:
>
> select count(*) as POBLACION, (dubicacion || zona || manzana) as COD_MANZANA
> from tbl_censo_poblacion_1993 group by dubicacion, zona, manzana;
>
> The results from explain indicate that a sequential scan is used (as far as I
> can tell).  I tried adding where statement:
>
> select count(*) as POBLACION, (dubicacion || zona || manzana) as COD_MANZANA
> from tbl_censo_poblacion_1993 where dubicacion <> '' and zona <> '' and
> manzana <> '' group by dubicacion, zona, manzana;
>
> The EXPLAIN analysis results appear as follows:
>
> Aggregate  (cost=847587.90..879024.28 rows=251491 width=27) (actual time=272782.00..279458.00 rows=21459 loops=1)
>    ->  Group  (cost=847587.90..872737.01 rows=2514911 width=27) (actual time=272782.00..278546.00 rows=2553015 loops=1)
>          ->  Sort  (cost=847587.90..853875.18 rows=2514911 width=27) (actual time=272782.00..274533.00 rows=2553015 loops=1)
>                Sort Key: dubicacion, zona, manzana
>                ->  Seq Scan on tbl_censo_poblacion_1993  (cost=0.00..328346.76 rows=2514911 width=27) (actual time=0.00..189570.00 rows=2553015 loops=1)
>                      Filter: ((dubicacion <> ''::character varying) AND (zona <> ''::character varying) AND (manzana <> ''::character varying))
>  Total runtime: 279494.00 msec
> (7 rows)
>
>
> So...can anyone suggest to me what I could do?  I'm using PostgreSQL 7.3.4-2
> in Cygwin on a WinXP platform.
>
> As an aside, is there any way to increase the amount of memory allocated to
> the postmaster.exe process?  It seems to me that if I could add more than 4MB
> that it has by default, then maybe that could increase the performance.
>
> Any help is appreciated.
>
> Kind regards,
> Mike
>
>
>
>

--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> "All that is needed for the forces of evil to triumph is for enough good
> men to do nothing." - Edmond Burke
> "The penalty good people pay for not being interested in politics is to be
> governed by people worse than themselves." - Plato

Attachments

Re: Indexes?

From
Stephan Szabo
Date:
On Tue, 14 Oct 2003, Mike Leahy wrote:

> CREATE INDEX tbl_censo_poblacion_1993_manzana_idx ON tbl_censo_poblacion_1993
> ( dubicacion, zona, manzana );
>
> Then I try a query such as:
>
> select count(*) as POBLACION, (dubicacion || zona || manzana) as COD_MANZANA
> from tbl_censo_poblacion_1993 group by dubicacion, zona, manzana;
>
> The results from explain indicate that a sequential scan is used (as far as I
> can tell).  I tried adding where statement:
>
> select count(*) as POBLACION, (dubicacion || zona || manzana) as COD_MANZANA
> from tbl_censo_poblacion_1993 where dubicacion <> '' and zona <> '' and
> manzana <> '' group by dubicacion, zona, manzana;
>
> The EXPLAIN analysis results appear as follows:
>
> Aggregate  (cost=847587.90..879024.28 rows=251491 width=27) (actual time=272782.00..279458.00 rows=21459 loops=1)
>    ->  Group  (cost=847587.90..872737.01 rows=2514911 width=27) (actual time=272782.00..278546.00 rows=2553015 loops=1)
>          ->  Sort  (cost=847587.90..853875.18 rows=2514911 width=27) (actual time=272782.00..274533.00 rows=2553015 loops=1)
>                Sort Key: dubicacion, zona, manzana
>                ->  Seq Scan on tbl_censo_poblacion_1993  (cost=0.00..328346.76 rows=2514911 width=27) (actual time=0.00..189570.00 rows=2553015 loops=1)

Note how many rows it's estimating (and actually) getting matches for.
It appears to be matching nearly every row, so an index isn't going to
help much here: PostgreSQL still has to go back to the actual table to
find out whether or not each row is visible to your transaction.

What does vacuum verbose tbl_censo_poblacion_1993; say?  Specifically, how
many pages does the table have?

> As an aside, is there any way to increase the amount of memory allocated to
> the postmaster.exe process?  It seems to me that if I could add more than 4MB
> that it has by default, then maybe that could increase the performance.

You might want to raise both shared_buffers and sort_mem from their
default values in postgresql.conf.

For more information you should probably read:
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html
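
For instance (illustrative numbers only - the right values depend on how much
RAM the machine has, and shared_buffers only takes effect after restarting the
postmaster):

shared_buffers = 4096        # 8 kB pages, i.e. roughly 32 MB of buffer cache
sort_mem = 16384             # in kB, i.e. roughly 16 MB per sort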

Re: Indexes?

From
Mike Leahy
Date:
Stephan, thanks for the response.

I increased the variables you suggested, and that seems to have increased
the memory allocated to the process in Windows.

Also, I tried the same query I was using, but with some actual values
specified in the where clause - that got it to use the indexes.  The only
thing is, I would normally be joining such a statement to another table, in
which case there wouldn't be a where clause.  I don't think it uses indexes
in that case, even if the number of rows involved is a fraction of what's in
the table.

Regarding the vacuum results, here they are:

INFO:  --Relation public.tbl_censo_poblacion_1993--
INFO:  Pages 283669: Changed 0, Empty 0; Tup 2553015: Vac 0, Keep 0, UnUsed 150106.
        Total CPU 5.89s/2.90u sec elapsed 56.52 sec.
VACUUM

What do you make of these results?

Thanks for your help.

Mike

-----Original Message-----
From: Stephan Szabo [mailto:sszabo@megazone.bigpanda.com]
Sent: October 15, 2003 12:11 AM
To: Mike Leahy
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Indexes?

On Tue, 14 Oct 2003, Mike Leahy wrote:

> CREATE INDEX tbl_censo_poblacion_1993_manzana_idx ON tbl_censo_poblacion_1993
> ( dubicacion, zona, manzana );
>
> Then I try a query such as:
>
> select count(*) as POBLACION, (dubicacion || zona || manzana) as COD_MANZANA
> from tbl_censo_poblacion_1993 group by dubicacion, zona, manzana;
>
> The results from explain indicate that a sequential scan is used (as far as I
> can tell).  I tried adding where statement:
>
> select count(*) as POBLACION, (dubicacion || zona || manzana) as COD_MANZANA
> from tbl_censo_poblacion_1993 where dubicacion <> '' and zona <> '' and
> manzana <> '' group by dubicacion, zona, manzana;
>
> The EXPLAIN analysis results appear as follows:
>
> Aggregate  (cost=847587.90..879024.28 rows=251491 width=27) (actual time=272782.00..279458.00 rows=21459 loops=1)
>    ->  Group  (cost=847587.90..872737.01 rows=2514911 width=27) (actual time=272782.00..278546.00 rows=2553015 loops=1)
>          ->  Sort  (cost=847587.90..853875.18 rows=2514911 width=27) (actual time=272782.00..274533.00 rows=2553015 loops=1)
>                Sort Key: dubicacion, zona, manzana
>                ->  Seq Scan on tbl_censo_poblacion_1993  (cost=0.00..328346.76 rows=2514911 width=27) (actual time=0.00..189570.00 rows=2553015 loops=1)

Note how many rows it's estimating (and actually) getting matches for.
It appears to be matching nearly every row so indexes aren't going to
really help much here for PostgreSQL since it still has to go back to the
actual table to find out whether or not the row is visible to your
transaction.

What does vacuum verbose tbl_censo_poblacion_1993; say?  Specifically, how
many pages does the table have?

> As an aside, is there any way to increase the amount of memory allocated to
> the postmaster.exe process?  It seems to me that if I could add more than 4MB
> that it has by default, then maybe that could increase the performance.

You might want to raise both shared_buffers and sort_mem from their
default values in postgresql.conf.

For more information you should probably read:
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html



Re: Indexes?

From
Stephan Szabo
Date:
On Wed, 15 Oct 2003, Mike Leahy wrote:

> I increased those variables you suggested, and that seems to have increased
> the memory allocated to the process in windows.
>
> Also, I tried the same query I was using, but with some actual values
> specified in the where statement - that got it to use the indexes.  The only
> thing is, I would normally be joining such a statement to another table, in
> which case there wouldn't be a where statement.  I don't think that it uses
> indexes in that case, even if the number of rows being used are a fraction of
> what's in the table.

You'll need to try it and post the EXPLAIN ANALYZE results; joins give
their own chances for indexes to be used, so I wouldn't want to
speculate without more info.

One other thing to try is to run SET enable_seqscan = off; before the
EXPLAIN ANALYZE and compare the results to a run without it.
That puts a large cost penalty on choosing a seqscan.  For the original
query it's entirely possible that the forced plan will be more expensive.
If it isn't (or it's really close), you may also want to look into lowering
random_page_cost in the postgresql.conf file.
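
A quick sketch of that experiment, reusing the query from earlier in the
thread:

SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT count(*) AS poblacion, (dubicacion || zona || manzana) AS cod_manzana
FROM tbl_censo_poblacion_1993
GROUP BY dubicacion, zona, manzana;
SET enable_seqscan = on;    -- turn seqscans back on afterwards

If the forced plan comes out cheaper (or close to it), lowering
random_page_cost from its default of 4 in postgresql.conf is the gentler
way to steer the planner toward index scans.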

> Regarding the vacuum results, here they are:
>
> INFO:  --Relation public.tbl_censo_poblacion_1993--
> INFO:  Pages 283669: Changed 0, Empty 0; Tup 2553015: Vac 0, Keep 0, UnUsed 150106.
>         Total CPU 5.89s/2.90u sec elapsed 56.52 sec.
> VACUUM
>
> What do you make of these results?

Well, that'd imply that the records are about 1k apiece on average, or
you have dead space in the table.  I'm still thinking that 189s to read
284k pages is a bit much (about 1.5x the time for an equivalent number of
pages on my unoptimized dev box), but I don't know how good your hardware
is, and you're running under Cygwin, which probably doesn't help.
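
For what it's worth, the arithmetic behind the "about 1k apiece" estimate
(PostgreSQL's default page size is 8 kB):

-- 283669 pages * 8192 bytes/page is roughly 2.3 GB of heap
-- 2.3 GB / 2553015 tuples is roughly 910 bytes per tuple
SELECT 283669 * 8192.0 / 2553015 AS approx_bytes_per_tuple;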