Thread: ERROR: out of memory DETAIL: Failed on request of size ???

ERROR: out of memory DETAIL: Failed on request of size ???

From: Brian Wong
Date: Mon, Nov 18, 2013 12:40 PM
We'd like to ask for your expertise with PostgreSQL regarding an error we're getting in an analytical database.

Some specs:
CPU: 2x Intel Xeon X5650 @ 2.67GHz (6 cores each, hyperthreading on)
Memory: 48GB
OS: Oracle Enterprise Linux 6.3
PostgreSQL version: 9.1.9
shared_buffers: 18GB

We're doing a grouping query like this:

create table test as
select col1, max(col2), max(col3), max(col4), max(col5)
from view
where ...
and ...
and ...
group by col1;

The view queries another view that does a lot of UNION ALLs over tables; in this particular case, about 60 tables are unioned together.
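
Roughly, the structure is this shape (a sketch only; table, view, and column names are illustrative):

create view inner_view as
select col1, col2, col3, col4, col5 from tbl_01
union all
select col1, col2, col3, col4, col5 from tbl_02
union all
-- ... about 60 tables in total ...
select col1, col2, col3, col4, col5 from tbl_60;

create view outer_view as  -- the view the query above selects from
select col1, col2, col3, col4, col5 from inner_view;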

The resulting error is:
ERROR: out of memory
DETAIL: Failed on request of size ???

The issue looks like an exact match for this report I found online: http://stackoverflow.com/questions/11878035/postgresql-9-1-out-of-memory-during-create-table-as-select

After a lot of googling, I've tried setting FETCH_COUNT in psql and/or setting work_mem. I'm just not able to work around this issue unless I remove all but one of the MAX() functions.
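
For example, the sorts of settings tried (illustrative values):

-- in psql, before running the query:
\set FETCH_COUNT 10000
-- and/or in the session:
set work_mem = '1GB';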

Could anyone give me some hints on how to resolve this issue?

Brian

Re: ERROR: out of memory DETAIL: Failed on request of size ???

From: bricklen
Date:
On Mon, Nov 18, 2013 at 12:40 PM, Brian Wong <bwong64@hotmail.com> wrote:
> We'd like to ask for your expertise with PostgreSQL regarding an error we're getting in an analytical database.
>
> Some specs:
> CPU: 2x Intel Xeon X5650 @ 2.67GHz (6 cores each, hyperthreading on)
> Memory: 48GB
> OS: Oracle Enterprise Linux 6.3
> PostgreSQL version: 9.1.9
> shared_buffers: 18GB
>
> After a lot of googling, I've tried setting FETCH_COUNT in psql and/or setting work_mem. I'm just not able to work around this issue unless I remove all but one of the MAX() functions.

What is your work_mem set to?
Did testing show that shared_buffers set to 18GB was effective? That seems two to three times higher than what you probably want.
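
For reference, the current values can be checked from any session:

show work_mem;
show shared_buffers;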

Re: ERROR: out of memory DETAIL: Failed on request of size ???

From: bricklen
Date: 2013/11/19

On Mon, Nov 18, 2013 at 8:30 PM, Brian Wong <bwong64@hotmail.com> wrote:
> I've tried work_mem values from 1GB all the way up to 40GB, with no effect on the error. I'd like to think of this as a server-process memory issue (not the server's buffers) or a client-process memory issue, primarily because there was no other load whatsoever when we tested. Unfortunately, the error doesn't say what kind of memory ran out.

I wasn't asking because I thought you should make it higher; I think you should make it lower. Set it to 200MB, reload your conf files ("select pg_reload_conf()"), and try your queries again. work_mem is a per-step setting, used by aggregate and sort steps, potentially multiple times in a single query, and multiplied again by any other concurrent queries. It might not be the cause here, but certainly try a lower setting to rule it out.
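
For example (a sketch; 200MB is just the suggested starting point):

-- in postgresql.conf:
--   work_mem = '200MB'
-- then reload without a restart:
select pg_reload_conf();
-- or override for the current session only:
set work_mem = '200MB';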

Re: ERROR: out of memory DETAIL: Failed on request of size ???

From: Pavel Stehule
Date:
Hello

I reported a similar problem a week ago: Postgres releases the work_mem assigned to each SELECT in a union only after the query finishes, so large SELECT UNION ALL SELECT UNION ALL ... queries require a lot of memory. My customer reported significant problems at around 100 unions. He had to migrate to 64-bit Postgres with a large swap, or (his first, but probably not good, solution) significantly decrease work_mem. A better solution at this point is probably to use a temporary table.
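
A minimal sketch of the temporary-table workaround (table and column names are illustrative): load the union in pieces, then aggregate from the staged copy, so work_mem is not allocated for dozens of SELECTs in a single statement.

create temporary table staging as
select col1, col2, col3, col4, col5 from tbl_01 where ...;

insert into staging select col1, col2, col3, col4, col5 from tbl_02 where ...;
insert into staging select col1, col2, col3, col4, col5 from tbl_03 where ...;
-- ... one insert per remaining table, instead of one 60-way UNION ALL ...

create table test as
select col1, max(col2), max(col3), max(col4), max(col5)
from staging
group by col1;

The temporary table is dropped automatically at the end of the session.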

Regards

Pavel Stehule

