Re: Slow query with a lot of data

From: Merlin Moncure
Subject: Re: Slow query with a lot of data
Date:
Msg-id: b42b73150808211008g43ac1fcer26e5eaa2420ab6ca@mail.gmail.com
In reply to: Re: Slow query with a lot of data (Moritz Onken <onken@houseofdesign.de>)
Responses: Re: Slow query with a lot of data
List: pgsql-performance
On Thu, Aug 21, 2008 at 11:07 AM, Moritz Onken <onken@houseofdesign.de> wrote:
>
> Am 21.08.2008 um 16:39 schrieb Scott Carey:
>
>> It looks to me like the work_mem did have an effect.
>>
>> Your earlier queries had a sort followed by group aggregate at the top,
>> and now its a hash-aggregate.  So the query plan DID change.  That is likely
>> where the first 10x performance gain came from.
>
> But it didn't change as I added the sub select.
> Thank you guys very much. The speed is now OK and I hope I can finish this
> work soon.
>
> But there is another problem. If I run this query without the limitation of
> the user id, postgres consumes about 150GB of disk space and dies with
>
> ERROR:  could not write block 25305351 of temporary file: No space left on
> device
>
> After that the available disk space is back to normal.
>
> Is this normal? The resulting table (setup1) is not bigger than 1.5 GB.

Maybe the result is simply too big.  If you EXPLAIN the query, you should
get an estimate of the number of rows returned.  If that estimate is huge,
you need to rethink your query, or use something like a cursor to browse
the result incrementally.
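The two suggestions above can be sketched in SQL. This is a minimal illustration, not the query from the thread (which is not shown here): the table and column names (`setup1`, `"user"`) are hypothetical placeholders.

```sql
-- 1. Check the planner's row estimate before running the full query.
--    The "rows=" figure in the top plan node is the estimate referred to
--    above; if it is enormous, the materialized result will be too.
EXPLAIN
SELECT "user", count(*)
FROM setup1            -- hypothetical table name
GROUP BY "user";

-- 2. Browse a huge result with a cursor instead of materializing it all
--    at once. Cursors must live inside a transaction block.
BEGIN;

DECLARE big_cur CURSOR FOR
    SELECT "user", count(*)
    FROM setup1
    GROUP BY "user";

FETCH 1000 FROM big_cur;   -- repeat until FETCH returns no rows
CLOSE big_cur;

COMMIT;
```

Note that a cursor helps with client-side memory and lets the application stop early, but the server may still need temporary disk space for sorts or hash tables produced by the plan itself.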

merlin

In the pgsql-performance list, by send date:

Previous
From: Moritz Onken
Date:
Message: Re: Slow query with a lot of data
Next
From: Dan Harris
Date:
Message: The state of PG replication in 2008/Q2?