Re: Large number of partitions of a table

From: Tom Lane
Subject: Re: Large number of partitions of a table
Date:
Msg-id: 268771.1642489687@sss.pgh.pa.us
In reply to: Re: Large number of partitions of a table  (Victor Sudakov <vas@sibptus.ru>)
Responses: Re: Large number of partitions of a table  (Victor Sudakov <vas@sibptus.ru>)
List: pgsql-admin
Victor Sudakov <vas@sibptus.ru> writes:
> Tom Lane wrote:
>> Well, yeah, actually.  An ill-advised query will blow out your backend's
>> memory consumption, potentially leading to a SIGKILL from the dreaded OOM
>> killer[1] (if you're on Linux), resulting in a backend crash and cluster
>> restart.  

> Why should "SELECT COUNT(*) FROM t" ever consume more than work_mem
> even if t has 10000 partitions?
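
For concreteness, the scenario can be reproduced with a sketch along
these lines; the table name "t" is from the question above, everything
else is illustrative:

    -- a table with 10000 hash partitions
    CREATE TABLE t (id bigint, payload text) PARTITION BY HASH (id);

    DO $$
    BEGIN
      FOR i IN 0..9999 LOOP
        EXECUTE format(
          'CREATE TABLE t_%s PARTITION OF t
             FOR VALUES WITH (MODULUS 10000, REMAINDER %s)', i, i);
      END LOOP;
    END
    $$;

    SELECT count(*) FROM t;  -- the query in question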

Sure, COUNT(*)'s runtime memory consumption is negligible.
But you're not thinking about overhead --- specifically,

1. 10000 relcache entries for the base tables.

2. If you have N indexes per table, N*10000 relcache entries for
the indexes.  (The planner will probably have looked at all those
indexes, even if it didn't find any use for them.)

3. 10000 SeqScan plan nodes and associated rangetable entries.

4. Likewise, 10000 instances of executor per-node state.

5. 10000 lock table entries (both shared and local lock tables);
see the sketch after this list.

6. Probably a few per-relation things I didn't think of.
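
As a concrete illustration of item 5: with the setup sketched above,
you can watch those lock entries pile up in your own session:

    BEGIN;
    SELECT count(*) FROM t;

    -- each partition (and each index the planner opened) now holds
    -- an AccessShareLock in this backend until commit:
    SELECT count(*) AS relation_locks
    FROM pg_locks
    WHERE pid = pg_backend_pid()
      AND locktype = 'relation';

    COMMIT;

Expect a figure on the order of the partition count, plus indexes.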

Admittedly, each of those individual data structures is going
to be at most in the few-kilobytes range, so this is going to total
somewhere in the hundreds of megabytes, which you can probably
tolerate on modern hardware --- unless some dozens of sessions
are all doing the same kind of thing.
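
If you want to check that estimate against a live backend, PostgreSQL
14 and later expose per-backend memory accounting; a quick sketch:

    -- largest memory contexts in the current backend; after a query
    -- like the above, the relcache (CacheMemoryContext and its
    -- children) dominates
    SELECT name, total_bytes
    FROM pg_backend_memory_contexts
    ORDER BY total_bytes DESC
    LIMIT 10;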

Basically, this is likely to work until it doesn't ... and
when it doesn't, you're going to be in the office at 3AM
trying to wrangle a recalcitrant system.  Just sayin'.

            regards, tom lane


