Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Amit Kapila
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Msg-id: CAA4eK1JaKW1mj4L6DPnk-V4vXJ6hM=Kcf6+-X+93Jk56UN+kGw@mail.gmail.com
In reply to: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Tue, Dec 24, 2019 at 10:58 AM Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Thu, Dec 12, 2019 at 3:41 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> > I think the way invalidations work for logical replication is that
> > normally, we always start a new transaction before decoding each
> > commit which allows us to accept the invalidations (via
> > AtStart_Cache).  However, if there are catalog changes within the
> > transaction being decoded, we need to reflect those before trying to
> > decode the WAL of operation which happened after that catalog change.
> > As we are not logging the WAL for each invalidation, we need to
> > execute all the invalidation messages for this transaction at each
> > catalog change. We are able to do that now as we decode the entire WAL
> > for a transaction only once we get the commit's WAL which contains all
> > the invalidation messages.  So, we queue them up and execute them for
> > each catalog change which we identify by WAL record
> > XLOG_HEAP2_NEW_CID.
>
> Thanks for the explanation. That makes sense. But, it's still true,
> AFAICS, that instead of doing this stuff with logging invalidations
> you could just InvalidateSystemCaches() in the cases where you are
> currently applying all of the transaction's invalidations. That
> approach might be worse than changing the way invalidations are
> logged, but the two approaches deserve to be compared. One approach
> has more CPU overhead and the other has more WAL overhead, so it's a
> little hard to compare them, but it seems worth mulling over.
>

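[Editor's note: the queue-and-replay scheme described in the quoted text can be sketched as a toy model. All names below are illustrative stand-ins, not PostgreSQL's actual reorderbuffer/inval API: invalidation messages accumulated while decoding a transaction are replayed in full each time a catalog-change record (XLOG_HEAP2_NEW_CID) is encountered.]

```c
#include <assert.h>

/* Toy model: invalidations seen while decoding one transaction are
 * queued, and the whole queue is (re)executed at every catalog change.
 * In PostgreSQL the replay would call LocalExecuteInvalidationMessage()
 * on real SharedInvalidationMessage structs; here we only count replays. */

#define MAX_MSGS 16

typedef struct TxnInvalQueue
{
    int nmsgs;
    int msgs[MAX_MSGS];   /* stand-in for SharedInvalidationMessage */
    int replay_count;     /* how many times the queue was replayed */
} TxnInvalQueue;

static void
queue_invalidation(TxnInvalQueue *q, int msg)
{
    q->msgs[q->nmsgs++] = msg;
}

/* Called when decoding hits an XLOG_HEAP2_NEW_CID-style record:
 * replay every invalidation accumulated so far for this transaction. */
static void
on_catalog_change(TxnInvalQueue *q)
{
    for (int i = 0; i < q->nmsgs; i++)
        ;                 /* execute q->msgs[i] against local caches */
    q->replay_count++;
}
```

Note that the queue grows over the life of the transaction, so each catalog change replays everything seen so far; this is the CPU cost being weighed against the WAL cost of logging invalidations eagerly.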
I have given this some thought, and it seems to me that this approach would increase not only CPU usage but also network usage.  The increase in CPU usage would affect every WALSender that decodes a transaction that has performed DDL.  The increase in network usage comes from the fact that we would need to send the schemas again for relations that don't actually require invalidation, because a blanket invalidation blows away our local map that remembers which relation schemas have already been sent.
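[Editor's note: the network-usage argument can be made concrete with a toy model of the per-walsender "schema sent" map (analogous to pgoutput's RelationSyncEntry.schema_sent flag). The names and byte counts below are hypothetical; the point is that a blanket reset forces schema re-sends for relations whose schemas were never invalidated, where a targeted reset does not.]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define NREL 4

/* Per-walsender map: has this relation's schema been sent downstream? */
static bool schema_sent[NREL];

/* Send one row change for relid; prepend the schema if not yet sent.
 * Returns bytes written (made-up sizes: 100 for a schema, 10 per row). */
static int
send_change(int relid)
{
    int bytes = 0;

    if (!schema_sent[relid])
    {
        bytes += 100;               /* re-send relation schema */
        schema_sent[relid] = true;
    }
    bytes += 10;                    /* the row change itself */
    return bytes;
}

/* Targeted invalidation: only the changed relation forgets its state. */
static void
invalidate_one(int relid)
{
    schema_sent[relid] = false;
}

/* Blanket InvalidateSystemCaches()-style reset: everything forgets. */
static void
invalidate_all(void)
{
    memset(schema_sent, 0, sizeof schema_sent);
}
```

With targeted invalidation, a relation untouched by the DDL keeps its schema_sent flag and subsequent changes cost 10 bytes each; after a blanket reset, the same change costs 110 bytes because the schema must be re-sent.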

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


