Re: Prevent out of memory errors by reducing work_mem?

From: Jan Strube
Subject: Re: Prevent out of memory errors by reducing work_mem?
Date:
Msg-id: 51062D29.2040207@deriva.de
In response to: Re: Prevent out of memory errors by reducing work_mem? (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
Hi,

You are right.
We were running 9.1.4, and after upgrading to 9.1.7 the error disappeared.
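
For anyone verifying their own installation, the exact running server
version can be confirmed before and after the upgrade with:

    SELECT version();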

Thanks a lot,
Jan Strube


>> I'm getting an out of memory error running the following query over 6
>> tables (the *BASE* tables have over 1 million rows each) on Postgresql
>> 9.1. The machine has 4GB RAM:
> It looks to me like you're suffering an executor memory leak that's
> probably unrelated to the hash joins as such.  The leak is in the
> ExecutorState context:
>
>> ExecutorState: 3442985408 total in 412394 blocks; 5173848 free (16
>> chunks); 3437811560 used
> while the subsidiary HashXYZ contexts don't look like they're going
> beyond what they've been told to.
>
> So the first question is 9.1.what?  We've fixed execution-time memory
> leaks as recently as 9.1.7.
>
> If you're on 9.1.7, or if after updating you can still reproduce the
> problem, please see if you can create a self-contained test case.
> My guess is it would have to do with the specific data types and
> operators being used in the query, but not so much with the specific
> data, so you probably could create a test case that just uses tables
> filled with generated random data.
>
>             regards, tom lane
>
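Should anyone want to build the self-contained test case suggested
above, a minimal sketch in SQL might look like the following. The
table names, column types, and row counts are hypothetical stand-ins,
not the schema of the original failing query:

    -- Two tables of generated random data, joined so the planner
    -- is likely to choose a hash join.
    CREATE TABLE t1 AS
      SELECT g AS id,
             (random() * 1000000)::int AS ref_id,
             md5(random()::text) AS payload
      FROM generate_series(1, 1000000) g;

    CREATE TABLE t2 AS
      SELECT g AS id,
             md5(random()::text) AS payload
      FROM generate_series(1, 1000000) g;

    ANALYZE t1;
    ANALYZE t2;

    -- Run the join and watch the backend's memory while it executes.
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM t1
    JOIN t2 ON t2.id = t1.ref_id;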

