Re: Large # of rows in query extremely slow, not using index

From: Markus Schaber
Subject: Re: Large # of rows in query extremely slow, not using index
Date:
Msg-id: 20040914184358.08e271fe@kingfisher.intern.logi-track.com
In response to: Large # of rows in query extremely slow, not using index  (Stephen Crowley <stephen.crowley@gmail.com>)
Responses: Re: Large # of rows in query extremely slow, not using index  (Pierre-Frédéric Caillaud <lists@boutiquenumerique.com>)
List: pgsql-performance
Hi, Stephen,

On Mon, 13 Sep 2004 19:51:22 -0500
Stephen Crowley <stephen.crowley@gmail.com> wrote:

> Does postgres cache the entire result set before it begins returning
> data to the client?
>
> I have a table with ~8 million rows and I am executing a query which
> should return about ~800,000 rows. The problem is that as soon as I
> execute the query it absolutely kills my machine and begins swapping
> for 5 or 6 minutes before it begins returning results. Is postgres
> trying to load the whole query into memory before returning anything?
> Also, why would it choose not to use the index? It is properly
> estimating the # of rows returned. If I set enable_seqscan to off it
> is just as slow.
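
One common way to avoid having the client buffer the entire result set (libpq normally retrieves the whole result before handing any rows to the application) is a server-side cursor, so that rows are pulled in batches. A minimal sketch, with made-up table, column and cursor names:

    BEGIN;
    -- "trades" and the WHERE clause stand in for the real query
    DECLARE big_cur CURSOR FOR
        SELECT * FROM trades WHERE symbol = 'ABC';
    FETCH 10000 FROM big_cur;   -- repeat until FETCH returns no more rows
    CLOSE big_cur;
    COMMIT;

This keeps client memory bounded by the FETCH size instead of the full ~800,000 rows.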

Since you are fetching about 10% of all rows in the table, the query will
most likely touch nearly every page of the table, so the planner sees
little benefit in using the index.

It may help to CLUSTER the table on the index that matches your query
parameters, and then set enable_seqscan to off.

But beware that you have to re-CLUSTER after modifications, as the
physical ordering is not maintained automatically.
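
For concreteness, the full sequence might look like this; "trades", "trades_symbol_idx" and the WHERE clause are placeholder names, not taken from your schema:

    -- order the table physically by the index on the query parameter
    CLUSTER trades_symbol_idx ON trades;   -- 7.4/8.0-era syntax
    -- (newer releases spell it: CLUSTER trades USING trades_symbol_idx;)
    ANALYZE trades;

    SET enable_seqscan = off;              -- affects only this session
    SELECT * FROM trades WHERE symbol = 'ABC';
    RESET enable_seqscan;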

HTH,
Markus



--
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:schabios@logi-track.com | www.logi-track.com
