Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Peter Eisentraut
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date:
Msg-id: 5d00346a-b565-7fde-c6da-35f39c72936a@2ndquadrant.com
In reply to: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Greg Stark <stark@mit.edu>)
Responses: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
List: pgsql-hackers
On 1/11/18 18:23, Greg Stark wrote:
> AIUI spilling to disk doesn't affect absorbing future updates, we
> would just keep accumulating them in memory right? We won't need to
> unspill until it comes time to commit.

Once a transaction has been serialized, future updates keep
accumulating in memory until perhaps it gets serialized again.  But
then at commit time, if any part of a transaction has been serialized,
all the remaining changes are also serialized before the whole thing
is read back in (see reorderbuffer.c line 855).
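
In rough, standalone C, that commit-time behavior looks like this (the
struct fields and helper functions here are made up for illustration;
the real ReorderBufferTXN and reorderbuffer code are of course more
involved):

#include <stdio.h>
#include <stdbool.h>

/* Illustrative fields only; the real ReorderBufferTXN has many more. */
typedef struct SketchTXN
{
    bool    serialized;     /* has any part of the txn been spilled? */
    int     nentries_mem;   /* changes still held in memory */
} SketchTXN;

static void
serialize_remaining(SketchTXN *txn)
{
    printf("spill %d remaining in-memory changes\n", txn->nentries_mem);
}

static void
replay_from_disk(SketchTXN *txn)
{
    printf("replay the whole transaction from disk, in order\n");
}

static void
replay_from_memory(SketchTXN *txn)
{
    printf("replay the transaction from memory\n");
}

/*
 * Commit-time behavior described above: a partially serialized
 * transaction has its remaining in-memory changes spilled too, so the
 * whole thing can be read back from disk in one ordered pass.
 */
static void
commit_txn(SketchTXN *txn)
{
    if (txn->serialized)
    {
        if (txn->nentries_mem > 0)
            serialize_remaining(txn);
        replay_from_disk(txn);
    }
    else
        replay_from_memory(txn);
}

int
main(void)
{
    SketchTXN   in_memory_only = {false, 10};       /* never spilled */
    SketchTXN   partially_spilled = {true, 500};    /* spilled once already */

    commit_txn(&in_memory_only);
    commit_txn(&partially_spilled);
    return 0;
}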

So one optimization would be to specifically keep track of all
transactions that have already been serialized and pick those first
for further serialization, because they will have to be fully
serialized eventually anyway.

But this is only a secondary optimization, because it doesn't help in
the extreme cases that either no (or few) transactions have been
serialized or all (or most) transactions have been serialized.
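
A sketch of what such a victim-selection function could look like
(illustrative C only, not the actual reorderbuffer API; the fallback
policy is assumed here to be "largest in-memory transaction"):

#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

/* Illustrative structure, not the real ReorderBufferTXN. */
typedef struct SketchTXN
{
    bool    serialized;     /* already spilled to disk at least once */
    size_t  size;           /* bytes of changes currently in memory */
} SketchTXN;

/*
 * Prefer a transaction that has already been partially serialized,
 * since it has to be fully spilled at commit anyway; otherwise fall
 * back to the largest in-memory transaction.
 */
static SketchTXN *
pick_spill_victim(SketchTXN *txns, int ntxns)
{
    SketchTXN  *largest = NULL;
    SketchTXN  *largest_serialized = NULL;

    for (int i = 0; i < ntxns; i++)
    {
        SketchTXN  *txn = &txns[i];

        if (txn->size == 0)
            continue;
        if (largest == NULL || txn->size > largest->size)
            largest = txn;
        if (txn->serialized &&
            (largest_serialized == NULL ||
             txn->size > largest_serialized->size))
            largest_serialized = txn;
    }
    return largest_serialized ? largest_serialized : largest;
}

int
main(void)
{
    SketchTXN   txns[] = {
        {false, 4096},      /* largest overall, never spilled */
        {true, 1024},       /* smaller, but already partially on disk */
        {false, 512},
    };
    SketchTXN  *victim = pick_spill_victim(txns, 3);

    printf("spill the %s txn of %zu bytes\n",
           victim->serialized ? "already-serialized" : "largest",
           victim->size);
    return 0;
}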

> The real aim should be to try to pick the transaction that will be
> committed furthest in the future. That gives you the most memory to
> use for live transactions for the longest time and could let you
> process the maximum amount of transactions without spilling them. So
> either the oldest transaction (in the expectation that it's been open
> a while and appears to be a long-lived batch job that will stay open
> for a long time) or the youngest transaction (in the expectation that
> all transactions are more or less equally long-lived) might make
> sense.

Yes, that makes sense.  We'd still need to keep a separate ordered list
of transactions somewhere, but that might be easier if we just order
them in the order we see them.
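
For illustration, a list kept in the order transactions are first seen
makes that cheap: evicting the oldest is just taking the head.  A
standalone sketch (not reorderbuffer code, and using the "oldest
first" variant of the heuristic):

#include <stdio.h>

typedef struct SketchTXN
{
    unsigned int        xid;
    struct SketchTXN   *next;   /* next transaction in first-seen order */
} SketchTXN;

typedef struct
{
    SketchTXN  *head;           /* oldest transaction (first seen) */
    SketchTXN  *tail;           /* newest transaction (last seen) */
} TxnList;

/* Append a transaction when its first change is seen. */
static void
txn_list_append(TxnList *list, SketchTXN *txn)
{
    txn->next = NULL;
    if (list->tail)
        list->tail->next = txn;
    else
        list->head = txn;
    list->tail = txn;
}

/*
 * "Oldest first" eviction: the head of the list is the transaction we
 * have been tracking the longest.
 */
static SketchTXN *
txn_list_pick_oldest(TxnList *list)
{
    SketchTXN  *txn = list->head;

    if (txn)
    {
        list->head = txn->next;
        if (list->head == NULL)
            list->tail = NULL;
    }
    return txn;
}

int
main(void)
{
    TxnList     list = {NULL, NULL};
    SketchTXN   a = {100}, b = {101}, c = {102};

    txn_list_append(&list, &a);
    txn_list_append(&list, &b);
    txn_list_append(&list, &c);

    /* prints xid 100, the transaction seen first */
    printf("evict xid %u first\n", txn_list_pick_oldest(&list)->xid);
    return 0;
}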

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

