Re: Built-in connection pooler

From: Dimitri Fontaine
Subject: Re: Built-in connection pooler
Date:
Msg-id: m2a7jk8o31.fsf@laptop.tapoueh.org
In reply to: Re: Built-in connection pooler (Bruce Momjian <bruce@momjian.us>)
Responses: Re: Built-in connection pooler (Michael Paquier <michael@paquier.xyz>)
List: pgsql-hackers
Hi,

Bruce Momjian <bruce@momjian.us> writes:
> It is nice it is a smaller patch.  Please remind me of the performance
> advantages of this patch.

The patch as it stands is mostly helpful in these situations:

  - application server(s) start e.g. 2000 connections at start-up and
    then use them depending on user traffic

    It's easy to see that if we fork only as many backends as we
    actually need, while still accepting all 2000 connections without
    attaching backends to them, we are in a much better position than
    if we fork 2000 mostly unused backends.
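
    As an illustration only (the GUC names below are hypothetical
    placeholders, not the patch's confirmed settings), such a setup
    might look like:

    ```
    # postgresql.conf sketch: accept many client connections but fork
    # far fewer backends. GUC names here are illustrative assumptions.
    max_connections  = 2000   # connections the proxy will accept
    session_pool_size = 100   # backends actually forked to serve them
    ```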

  - application is partially compatible with pgbouncer transaction
    pooling mode

    In that case you would need to run pgbouncer in session mode. This
    happens when the application code uses session-level SQL
    commands/objects, such as prepared statements, temporary tables,
    or session-level GUC settings.

    With the attached patch, if the application's session profiles are
    mixed, then you dynamically get the benefits of transaction
    pooling mode for the sessions that do not “taint” their backend,
    and session pooling mode for the others.

    This means it's possible to find the most-used session profile and
    fix that one for immediate benefit, leaving the rest of the code
    alone. If it turns out that 80% of your application sessions
    follow the same code path and you can make that one compatible
    with transaction pooling, then you are most probably fixing (up
    to) 80% of your connection-related problems in production.
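
    The mixed-profile behavior above can be sketched as a small
    routing rule (a toy model, not the patch's actual code; the list
    of tainting statements is illustrative): a session that has issued
    session-level commands stays pinned to its backend, while the
    others give their backend back to the pool at each transaction end.

    ```python
    # Toy model of the proxy's pooling decision (not the patch's real
    # code). Session-level statements "taint" a session, pinning it to
    # one backend (session pooling); untainted sessions release their
    # backend after each transaction (transaction pooling).

    TAINTING_PREFIXES = ("PREPARE", "CREATE TEMP", "SET ")  # illustrative

    class Session:
        def __init__(self):
            self.tainted = False

        def execute(self, sql: str) -> None:
            # A session-level command taints the session for good.
            if sql.upper().startswith(TAINTING_PREFIXES):
                self.tainted = True

        def backend_released_at_commit(self) -> bool:
            # Only untainted sessions return their backend to the pool.
            return not self.tainted

    s1, s2 = Session(), Session()
    s1.execute("SELECT * FROM users")            # stays untainted
    s2.execute("CREATE TEMP TABLE t(i int)")     # taints the session
    ```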

  - applications that use a very high number of concurrent sessions

    In that case, you can either set your connection pool size equal
    to max_connections and see no benefits (and hopefully no
    regressions either), or set a lower number of backends serving a
    very high number of connections, and have sessions waiting their
    turn at the “proxy” stage.

    This is a kind of naive admission-control implementation, where
    it's better to have clients wait in line, consuming as few
    resources as possible (here, in the proxy), than to have them all
    active in the system at once. This could already be done with
    pgbouncer; the patch provides a stop-gap in PostgreSQL itself for
    those use cases.

    This is mostly useful when you have queries that benefit from
    parallel workers. In that case, controlling the number of backends
    active at any time to serve user queries allows better use of the
    available parallel workers.
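
    A toy model of that admission-control behavior (illustrative only,
    not the patch's implementation): with fewer backend slots than
    client connections, the excess sessions queue at the proxy instead
    of consuming backend resources, and each freed backend is handed
    to the next session in line.

    ```python
    from collections import deque

    class Proxy:
        """Toy admission control: at most pool_size sessions are
        active at once; the rest wait in line at the proxy."""
        def __init__(self, pool_size: int):
            self.pool_size = pool_size
            self.active = set()
            self.waiting = deque()

        def connect(self, session_id: int) -> str:
            if len(self.active) < self.pool_size:
                self.active.add(session_id)
                return "active"
            self.waiting.append(session_id)   # wait at the proxy stage
            return "queued"

        def finish(self, session_id: int) -> None:
            self.active.discard(session_id)
            if self.waiting:                  # next in line gets the slot
                self.active.add(self.waiting.popleft())

    proxy = Proxy(pool_size=2)
    states = [proxy.connect(i) for i in range(4)]  # 2 active, 2 queued
    proxy.finish(0)  # frees a backend; one queued session goes active
    ```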

In other cases, it's important to measure and accept the possible
performance cost of running a proxy server between the client
connection and the PostgreSQL backend process. I believe the numbers
Konstantin showed in his previous email illustrate the kind of impact
you can expect when using the patch in a use case where it's not meant
to help much, if at all.

Regards,
--
dim

