Re: Memory exhaustion due to temporary tables?

From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: Memory exhaustion due to temporary tables?
Date: 2019-01-03
Msg-id: 5062.1546547894@sss.pgh.pa.us
In reply to: Re: Memory exhaustion due to temporary tables?  (Thomas Carroll <tomfecarroll@yahoo.com>)
List: pgsql-general
[ please don't re-quote the entire thread when replying ]

Thomas Carroll <tomfecarroll@yahoo.com> writes:
>     On Monday, December 10, 2018, 7:45:07 PM EST, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> You said you'd been able to reproduce this problem outside production.
>> Any chance you could boil that down to a sharable test case?

> The test case is simple - so simple that I am surprised no one has encountered
> this before, and therefore I am really more afraid that I am doing something
> stupid in my effort to make it shareable and simple.  In creating this example
> I have not run the code till it crashed the server; instead I used the
> measuring methodology you will see below (monitoring the smaps file).

Hmm.  So what this test case is doing is repeatedly creating a temp table
with ON COMMIT DROP, querying that table, and then closing the transaction
(allowing the temp table to go away).  It does leak memory in
CacheMemoryContext, and as far as I can tell, the leakage consists
entirely of negative catcache entries for pg_statistic rows.  That happens
because while querying the temp table, the planner tries to look up
statistics for the table; but there are none, since you haven't ANALYZEd
it.  So negative catcache entries get made in hopes of saving the cost of
probing pg_statistic again later.  But, when we drop the table, those
catcache entries don't get cleared because they do not match any catalog
rows that get deleted during the drop.
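For anyone who wants to reproduce this, the pattern boils down to something
like the following (the table and column names here are made up, not taken
from the submitted script), run over and over in one long-lived session
while watching the backend's memory, e.g. through its smaps file:

begin;
create temp table tt_repro (id int, val text) on commit drop;
insert into tt_repro values (1, 'x');
-- planning this query probes pg_statistic for the never-ANALYZEd table,
-- and the misses are remembered as negative catcache entries
select count(*) from tt_repro where val = 'x';
commit;  -- drops the table, but the negative entries stay behind

Each pass leaves a few more such entries in CacheMemoryContext, so the
growth only becomes visible after many iterations.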

In a lot of use-cases, this wouldn't matter too much, either because the
session isn't long-lived enough to accumulate huge numbers of negative
entries, or because other catalog activity causes the entries to get
flushed anyway.  But if you don't have much DDL going on other than
this temp table activity, then yeah it could build up.

Not sure about good ways to fix this.  I can think of various more-or-less
klugy fixes that are specific to the pg_statistic case.  There's been some
ongoing discussion about trying to limit accumulation of negative catcache
entries more generally, but that's not very close to being committable
I think.

In the meantime, you might think about switching over to some process
that doesn't create and drop the same table constantly.  Perhaps
along the lines of

create temp table if not exists tt_preTally (...) on commit delete rows;

if (tt_preTally contains no rows) then
   insert into tt_preTally select ...
end if;

This'd have the advantage of reducing catalog churn in other catalogs
besides pg_statistic, too.
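To make that concrete, a fleshed-out sketch of the same idea could look
like this (the column names and source query are invented for illustration;
substitute the real tt_preTally definition):

create temp table if not exists tt_preTally (
    acct_id int,
    amount  numeric
) on commit delete rows;

-- on commit delete rows empties the table at each commit but keeps the
-- table itself (and its catalog entries) around, so only repopulate it
-- when it is empty:
insert into tt_preTally (acct_id, amount)
select s.acct_id, s.amount
from source_data s              -- stand-in for the real source query
where not exists (select 1 from tt_preTally);

Because the table now persists for the whole session, the negative
pg_statistic entries made for it are created once and reused, instead of a
fresh set piling up for every newly created table.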

            regards, tom lane

