Re: Built-in connection pooling

From: Konstantin Knizhnik
Subject: Re: Built-in connection pooling
Date:
Msg-id: 6432a2a7-624b-028e-c927-b5d1ce17bf44@postgrespro.ru
In response to: Re: Built-in connection pooling  (Merlin Moncure <mmoncure@gmail.com>)
Responses: Re: Built-in connection pooling  (Merlin Moncure <mmoncure@gmail.com>)
List: pgsql-hackers

On 27.04.2018 16:49, Merlin Moncure wrote:
> On Thu, Apr 26, 2018 at 6:04 AM, Konstantin Knizhnik
> <k.knizhnik@postgrespro.ru> wrote:
>> On 25.04.2018 20:02, Merlin Moncure wrote:
>>> Yep.  The main workaround today is to disable them.  Having said that,
>>> it's not that difficult to imagine hooking prepared statement creation
>>> to a backend starting up (feature: run X,Y,Z SQL before running user
>>> queries).
>> Sorry, I do not completely understand your idea.
>> Yes, it is somehow possible to simulate session semantics by prepending all
>> session-specific commands (mostly setting GUCs) to each SQL statement.
>> But it doesn't work for prepared statements: the idea of prepared statements
>> is that compilation of a statement should be done only once.
> The idea is that you have arbitrary SQL that runs after the
> backend (postgres binary) is forked from postmaster.  This would be an
> ideal place to introduce prepared statements in a way that is pooling
> compatible; you still couldn't PREPARE from the application but you'd
> be free to call already prepared statements (via SQL level EXECUTE or
> libpq PQexecPrepared()).  Of course, if somebody throws a DEALLOCATE
> or DISCARD ALL, or issues a problematic DROP x CASCADE, you'd be in
> trouble but that's not a big deal IMO because you can control for
> those things in the application.
As far as I know, prepared statements can already be handled this way by
pgbouncer in transaction/statement pooling mode.
But from my point of view, in most cases this approach is practically
unusable: it is very hard to predict from the very beginning all the
statements an application will want to execute, and to prepare them at
backend start.
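
To make the idea concrete, below is a minimal libpq sketch of such a scheme.
It assumes a hypothetical backend-startup hook has already run
PREPARE get_user(int) AS SELECT name FROM users WHERE id = $1 in every pooled
backend (no such hook exists today; the statement name, table and connection
parameters are invented for the example). The client then only executes the
statement and never issues PREPARE, DEALLOCATE or DISCARD itself:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Assumption: a hypothetical backend-startup hook has already run
     *   PREPARE get_user(int) AS SELECT name FROM users WHERE id = $1;
     * so every pooled backend has the statement before any client attaches. */
    PGconn *conn = PQconnectdb("host=localhost dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* The client never prepares anything; it only calls the statement
     * that is assumed to exist in the backend it happens to be routed to. */
    const char *params[1] = { "42" };
    PGresult *res = PQexecPrepared(conn, "get_user", 1, params, NULL, NULL, 0);
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
        printf("name = %s\n", PQgetvalue(res, 0, 0));
    else
        fprintf(stderr, "execute failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}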

>> Database performance is mostly limited by disk, so the optimal number of
>> backends may be different from the number of cores.
>> But certainly the possibility to launch an "optimal" number of backends is
>> one of the advantages of built-in session pooling.
> Sure, but some workloads are cpu limited (all- or mostly- read with
> data < memory, or very complex queries on smaller datasets).   So we
> would measure and configure based on expectations exactly as is done
> today with pgbouncer.   This is a major feature of pgbouncer: being
> able to _reduce_ the number of session states relative to the number
> of connections is an important feature; it isolates your database from
> various unpleasant failure modes such as runaway memory consumption.
>
> Anyways, I'm looking at your patch.  I see you've separated the client
> connection count ('sessions') from the server backend instances
> ('backends') in the GUC.  Questions:
> *) Should non pooled connections be supported simultaneously with
> pooled connections?
> *) Should there be multiple pools with independent configurations (yes, please)?
>

Right now my prototype supports two modes:
1. All connections are pooled.
2. There are several session pools, each bound to its own port.
Connections to the main Postgres port are normal (dedicated).
Connections to one of the session pool ports are redirected to one of the
workers of that session pool.

Please note that the latest version of the connection pooler is in the
https://github.com/postgrespro/postgresql.builtin_pool.git repository.
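
For illustration, here is a small libpq sketch of the second mode; the pool
port 5433, host and database name are just example values (the real port is
whatever the session pool is configured to listen on):

#include <stdio.h>
#include <libpq-fe.h>

static PGconn *try_connect(const char *conninfo)
{
    PGconn *conn = PQconnectdb(conninfo);
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return NULL;
    }
    return conn;
}

int main(void)
{
    /* Main Postgres port: a normal dedicated backend is forked for us. */
    PGconn *dedicated = try_connect("host=localhost port=5432 dbname=test");

    /* Session pool port (5433 is only an example value): the connection is
     * redirected to one of the workers of that session pool instead. */
    PGconn *pooled = try_connect("host=localhost port=5433 dbname=test");

    if (dedicated)
        PQfinish(dedicated);
    if (pooled)
        PQfinish(pooled);
    return 0;
}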
> *) How are you pinning client connections to an application managed
> transaction? (IMNSHO, this feature is useless without being able to do
> that)
Sorry, I do not completely understand the question.
Rescheduling is now done at the transaction level, which means that a backend
cannot be switched to another session until the current transaction completes.
The main argument for transaction-level pooling is that it lets us not worry
about heavyweight locks, which are associated with procarray entries.
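
As a small illustration of why the transaction is the natural switching
boundary, consider a client that takes a row lock inside a transaction (the
table and column are invented for the example): the lock is owned by the
serving backend, so that backend cannot be handed to another session until
COMMIT releases it.

#include <stdio.h>
#include <libpq-fe.h>

static int run(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    ExecStatusType st = PQresultStatus(res);
    int ok = (st == PGRES_COMMAND_OK || st == PGRES_TUPLES_OK);
    if (!ok)
        fprintf(stderr, "\"%s\" failed: %s", sql, PQerrorMessage(conn));
    PQclear(res);
    return ok;
}

int main(void)
{
    PGconn *conn = PQconnectdb("host=localhost dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        PQfinish(conn);
        return 1;
    }

    /* From BEGIN to COMMIT the session stays pinned to one backend: the
     * FOR UPDATE row lock is owned by that backend, so it cannot be
     * switched to serve another session in the meantime. */
    run(conn, "BEGIN");
    run(conn, "SELECT balance FROM accounts WHERE id = 1 FOR UPDATE");
    run(conn, "UPDATE accounts SET balance = balance - 100 WHERE id = 1");
    run(conn, "COMMIT");

    PQfinish(conn);
    return 0;
}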

>
> FYI, it's pretty clear you've got a long road building consensus and
> hammering out a reasonable patch through the community here.  Don't
> get discouraged -- there is value here, but it's going to take some
> work.
Thank you.
I am absolutely sure that a lot of additional work has to be done before
this prototype becomes usable.

> merlin

-- 
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


