Re: Large # of rows in query extremely slow, not using

From: Pierre-Frédéric Caillaud
Subject: Re: Large # of rows in query extremely slow, not using
Date:
Msg-id: opseb0ctjmcq72hf@musicbox
In response to: Re: Large # of rows in query extremely slow, not using (Markus Schaber <schabios@logi-track.com>)
Responses: Re: Large # of rows in query extremely slow, not using (Stephen Crowley <stephen.crowley@gmail.com>)
List: pgsql-performance
>> I have a table with ~8 million rows and I am executing a query which
>> should return about ~800,000 rows. The problem is that as soon as I
>> execute the query it absolutely kills my machine and begins swapping
>> for 5 or 6 minutes before it begins returning results. Is postgres
>> trying to load the whole query into memory before returning anything?
>> Also, why would it choose not to use the index? It is properly
>> estimating the # of rows returned. If I set enable_seqscan to off it
>> is just as slow.

    1) Run EXPLAIN ANALYZE on the query.

    Note the time it takes. It should not swap, just read the data from
disk (and not kill the machine).
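
    For instance, a minimal sketch in psql (the table and column names
here are hypothetical stand-ins for your own):

        EXPLAIN ANALYZE
        SELECT * FROM big_table WHERE status = 'open';

    The "actual time" figures on each plan node show where the minutes go,
and comparing the planner's estimated row counts against the actual ones
tells you whether its statistics are sane.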

    2) Run the query from your application.

    Note the time it takes, and watch RAM usage. If it takes vastly longer
and the machine is swimming in virtual memory, postgres is not the culprit:
the client library is buffering the whole result set in memory before
handing it to you. Use a cursor instead, to fetch the huge result set bit
by bit.
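
    As a sketch of the cursor approach (again with hypothetical names),
run inside a transaction:

        BEGIN;
        DECLARE big_cur CURSOR FOR
            SELECT * FROM big_table WHERE status = 'open';
        FETCH 1000 FROM big_cur;  -- repeat until it returns no rows
        CLOSE big_cur;
        COMMIT;

    Each FETCH sends only one batch of rows to the client, so it never has
to hold all ~800,000 rows in memory at once.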

    Tell us what you find?

    Regards.
