Re: [SQL] How to avoid "Out of memory" using aggregate functions?

From: Tom Lane
Subject: Re: [SQL] How to avoid "Out of memory" using aggregate functions?
Date:
Msg-id: 18807.944497407@sss.pgh.pa.us
In reply to: Re: [SQL] How to avoid "Out of memory" using aggregate functions?  (Frank Joerdens <frank@x9media.com>)
Responses: Re: [SQL] How to avoid "Out of memory" using aggregate functions?  (Frank Joerdens <frank@x9media.com>)
List: pgsql-sql
Frank Joerdens <frank@x9media.com> writes:
> [ complex query ]
> which stopped working after the table grew over the 1 million mark
> (approximately) - I get the error

> FATAL 1:  Memory exhausted in AllocSetAlloc()

> This sounds to me like the problem described below . . . my question is:
> Would the changes you already made to the current sources fix it for me?

Afraid not.  I think what's killing you is all those date_part() and
date_trunc() operations --- the resultant datetime or float8 value from
each one occupies memory that won't get reclaimed till end of statement
:-(.

Silly as it sounds, you might be able to put off the problem by
rearranging the order of the WHERE clauses, remembering that AND stops
evaluating its subclauses as soon as it finds a FALSE.  Presumably the
day check eliminates many more rows than the time-of-day checks, so

WHERE
domains.internet = stunde_test.destination 
AND
date_trunc('day', stunde_test.date) = 'yesterday' 
AND
date_part('hour', stunde_test.date) > 6 
AND 
date_part('hour', stunde_test.date) < 23 

would probably about halve the number of date_part+date_trunc
calculations done.
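The reordering argument can be sketched outside SQL. Below is a toy Python model (the helper names, counters, and row data are all invented for illustration; this models short-circuit AND over rows, not PostgreSQL's executor) showing why putting the most selective check first cuts the number of expensive per-row function calls:

```python
# Toy model: with short-circuit AND, the clause that eliminates the most
# rows should come first, so the expensive date_part-style checks run on
# far fewer rows. Two days x 24 hours of fake rows stand in for the table.

calls = {"date_trunc": 0, "date_part": 0}

def day_is_yesterday(row):
    # stands in for date_trunc('day', stunde_test.date) = 'yesterday'
    calls["date_trunc"] += 1
    return row["day"] == "yesterday"

def hour_in_window(row):
    # stands in for the two date_part('hour', ...) comparisons
    calls["date_part"] += 2
    return 6 < row["hour"] < 23

rows = [{"day": d, "hour": h} for d in ("yesterday", "today") for h in range(24)]

# Selective day check first: hour checks run only on yesterday's 24 rows.
calls["date_trunc"] = calls["date_part"] = 0
day_first = [r for r in rows if day_is_yesterday(r) and hour_in_window(r)]
day_first_calls = dict(calls)

# Hour checks first: they run on all 48 rows before any day check.
calls["date_trunc"] = calls["date_part"] = 0
hour_first = [r for r in rows if hour_in_window(r) and day_is_yesterday(r)]
hour_first_calls = dict(calls)
```

On this fake data both orders return the same 16 rows, but the day-check-first order makes 48 `date_part`-style calls instead of 96, mirroring the rough halving Tom expects.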

Of course the real fix is to recycle temporary memory for all
expressions intra-query, but I do not know if that will get done for
7.0.  It will get done eventually.
        regards, tom lane


In the pgsql-sql list, by date sent:

Previous
From: Tom Lane
Date:
Message: Re: [SQL] pqReadData() error
Next
From: Marcio Macedo
Date:
Message: getting table info