Re: Postgres is not able to handle more than 4k tables!?

From: Konstantin Knizhnik
Subject: Re: Postgres is not able to handle more than 4k tables!?
Date:
Msg-id: 8ea3b531-9b9f-896b-990b-902e179674b3@postgrespro.ru
In reply to: Re: Postgres is not able to handle more than 4k tables!?  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers

On 09.07.2020 00:35, Tom Lane wrote:
> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:
>> There are several thousand clients, most of which are executing complex
>> queries.
> So, that's really the core of your problem.  We don't promise that
> you can run several thousand backends at once.  Usually it's recommended
> that you stick a connection pooler in front of a server with (at most)
> a few hundred backends.
It is not my problem - it is the customer's problem.
Certainly, the advice to use a connection pooler was the first thing we
proposed to the customer when we saw such a large number of active backends.
Unfortunately it is not always possible (a connection pooler does not
preserve session semantics).
This is why I have proposed a builtin connection pooler for Postgres.
But that is a different story.


>> So they are not able to process all these invalidation messages and their
>> invalidation message buffer overflows.
>> The size of this buffer is hardcoded (MAXNUMMESSAGES = 4096) and cannot be
>> changed without recompiling Postgres.
>> This is problem number 1.
> No, this isn't a problem.  Or at least you haven't shown a reason to
> think it is.  Sinval overruns are somewhat routine, and we certainly
> test that code path (see CLOBBER_CACHE_ALWAYS buildfarm animals).
Certainly a cache overrun is not fatal.
But if most backends are blocked in heap_open of the pg_attribute
relation, then something is not right in Postgres, isn't it?

>> It causes huge lw-lock acquisition time for heap_open, and the planning stage
>> of some queries is increased from milliseconds to several minutes!
> Really?

Planning time: 75698.602 ms
Execution time: 0.861 ms

>> This is problem number 2. But there is one more flaw we have faced.
>> We have increased LOG2_NUM_LOCK_PARTITIONS to 8
>> and ... clients started to report a "too many LWLocks taken" error.
>> There is yet another hardcoded constant, MAX_SIMUL_LWLOCKS = 200,
>> whose relation to NUM_LOCK_PARTITIONS is not mentioned anywhere.
> Seems like self-inflicted damage.  I certainly don't recall anyplace
> in the docs where we suggest that you can alter that constant without
> worrying about consequences.

It looks like you are trying to convince me that hardcoding constants in
the code without taking the relations between them into account is a good
design pattern?
>> So looks like NUM_LOCK_PARTITIONS and MAXNUMMESSAGES  constants have to
>> be replaced with GUCs.
> I seriously doubt we'd do that.
It's a pity, because such an attitude is one of the reasons why Postgres
is a pgbench-oriented database that shows good results on notebooks
but not on real systems running on powerful servers (NUMA, SSD, huge
amounts of memory, large numbers of cores, ...).