Re: Fastest way to clone schema ~1000x

From: Emiel Mols
Subject: Re: Fastest way to clone schema ~1000x
Date:
Msg-id: CAF5w505X-OVN_EW9nsOYjBPLQ1auAdcDLKzreCeY5TO_YJEAtA@mail.gmail.com
In reply to: Re: Fastest way to clone schema ~1000x (Daniel Gustafsson <daniel@yesql.se>)
Responses: Re: Fastest way to clone schema ~1000x
List: pgsql-general
On Mon, Feb 26, 2024 at 3:50 PM Daniel Gustafsson <daniel@yesql.se> wrote:
> There is a measurable overhead in connections, regardless of if they are used
> or not.  If you are looking to squeeze out performance then doing more over
> already established connections, and reducing max_connections, is a good place
> to start.

Clear, but with database-per-test (and our backend setup), it would have been *great* if we could switch databases on the same connection (similar to "USE xxx" in MySQL). That would limit the number of connections to the number of workers, rather than workers multiplied by tests.

Even with a pooler, we're still going to be maintaining thousands of connections from the backend workers to the pooler. I would expect this to be rather efficient, but it is still unnecessary. Also, neither pgbouncer nor pgpool seems to support switching databases within a connection (they could have implemented the aforementioned "USE" statement, I think!). [Additionally, we're using PHP, which doesn't seem to have a good shared-memory pool implementation -- pg_pconnect is pretty buggy.]
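To illustrate the multiplication problem (a hypothetical sketch, not our actual backend code, with a stubbed-out connect function standing in for a real driver call): since PostgreSQL has no way to repoint an open connection at a different database, the best a worker can do is cache one connection per database it touches, so the total grows with workers × databases rather than staying at one per worker.

```python
# Sketch of a per-database connection cache inside one worker. In real code
# the connect callback would be something like psycopg2.connect(dbname=...);
# here it is stubbed so the example runs standalone. The key point: a cache
# miss on a new database *must* open a new connection -- there is no
# PostgreSQL equivalent of MySQL's "USE <db>" to reuse the old one.

class Worker:
    def __init__(self, connect):
        self._connect = connect   # driver connect function, stubbed below
        self._conns = {}          # dbname -> open connection

    def conn_for(self, dbname):
        # Reuse if we already hold a connection to this database;
        # otherwise we have no choice but to establish a fresh one.
        if dbname not in self._conns:
            self._conns[dbname] = self._connect(dbname)
        return self._conns[dbname]

# Stub connect so the sketch needs no running server.
opened = []
def fake_connect(dbname):
    opened.append(dbname)
    return f"conn:{dbname}"

w = Worker(fake_connect)
w.conn_for("test_001")
w.conn_for("test_001")   # cache hit: no new connection
w.conn_for("test_002")   # new database: forced new connection
print(len(opened))       # 2
```

With database-per-test, each worker's cache ends up holding one entry per test database it has visited, which is exactly the overhead an in-connection "USE" would avoid.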

I'll continue with some more testing. Thanks for now!
