Re: problem with lost connection while running long PL/R query

From: Tom Lane
Subject: Re: problem with lost connection while running long PL/R query
Date:
Msg-id: 15339.1368718822@sss.pgh.pa.us
In reply to: Re: problem with lost connection while running long PL/R query  ("David M. Kaplan" <david.kaplan@ird.fr>)
Responses: Re: problem with lost connection while running long PL/R query
List: pgsql-general
"David M. Kaplan" <david.kaplan@ird.fr> writes:
> Thanks for the help.  You have definitely identified the problem, but I
> am still looking for a solution that works for me.  I tried setting
> vm.overcommit_memory=2, but this just made the query crash quicker than
> before, though without killing the entire connection to the database.  I
> imagine that this means that I really am trying to use more memory than
> the system can handle?

> I am wondering if there is a way to tell postgresql to flush a set of
> table lines out to disk so that the memory they are using can be
> liberated.

Assuming you don't have work_mem set to something unreasonably large,
it seems likely that the excessive memory consumption is inside your
PL/R function, and not the fault of Postgres per se.  You might try
asking in some R-related forums about how to reduce the code's memory
usage.
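
One common way to cut the R-side footprint is to avoid pulling the whole
result set into a single data frame and instead walk it in chunks through
PL/R's SPI cursor interface.  The sketch below assumes the documented
pg.spi.prepare / pg.spi.cursor_open / pg.spi.cursor_fetch functions and a
hypothetical table "big_table" with a "value" column; check the PL/R docs
for the exact signatures, and treat it as an illustration of the pattern
rather than a drop-in fix.

CREATE OR REPLACE FUNCTION summarise_big_table() RETURNS float8 AS $$
  # Hypothetical example: stream rows in chunks instead of materializing
  # the whole table as one R data frame.
  plan   <- pg.spi.prepare("SELECT value FROM big_table")
  cursor <- pg.spi.cursor_open("big_cur", plan)
  total  <- 0
  repeat {
    chunk <- pg.spi.cursor_fetch(cursor, TRUE, as.integer(10000))
    if (is.null(chunk) || nrow(chunk) == 0) break
    total <- total + sum(chunk$value)   # fold each chunk, then drop it
    rm(chunk)
  }
  pg.spi.cursor_close(cursor)
  total
$$ LANGUAGE plr;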

Also, if by "crash" this time you meant you got an "out of memory" error
from Postgres, there should be a memory map in the postmaster log
showing all the memory consumption Postgres itself is aware of.  If that
doesn't add up to a lot, it would be pretty solid proof that the problem
is inside R.  If there are any memory contexts that seem to have bloated
unreasonably, knowing which one(s) would be helpful information.
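
As a rough cross-check from the R side, the function can also report its own
heap usage at suspected hot spots, so the numbers land in the same log as the
memory-context map.  A minimal sketch, assuming PL/R's pg.thrownotice()
helper is available in your build:

  # Inside the PL/R function body: emit R's current heap usage as a NOTICE
  # so it can be compared with what Postgres itself reports.
  mem_mb <- sum(gc()[, 2])    # Mb in use by R objects (Ncells + Vcells)
  pg.thrownotice(sprintf("R heap in use: %.1f MB", mem_mb))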

            regards, tom lane

