Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Amit Kapila
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date:
Msg-id: CAA4eK1+deN8D7if-t4MZMAo=R+fCi6ud3_UB7j4J8BhawZ=Z1w@mail.gmail.com
In reply to: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions (Dilip Kumar <dilipbalaut@gmail.com>)
Responses: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions (Dilip Kumar <dilipbalaut@gmail.com>)
List: pgsql-hackers
On Wed, May 27, 2020 at 8:22 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
>
> On Tue, May 26, 2020 at 7:45 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> >
> > Okay, sending again.
>
> While reviewing/testing I have found a couple of problems in 0005 and
> 0006 which I have fixed in the attached version.
>

I haven't reviewed the new fixes yet, but I have some comments on
0008-Add-support-for-streaming-to-built-in-replicatio.patch.
1.
I think the temporary files (and/or their handles) used for storing
the information about changes and subxacts are getting leaked in the
patch.  In some places care is taken to close the file, but in cases
like apply_handle_stream_commit(), if an error occurs in
apply_dispatch(), the file might not get closed.  Another such place
is apply_handle_stream_abort(), where if there is an error in
ftruncate() the file won't be closed.  Now, the bigger problem is with
the changes file, which is opened in apply_handle_stream_start() and
closed in apply_handle_stream_stop(); if any error occurs in between,
we won't close it.
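
For example, one way to make sure the file gets closed would be to
wrap the apply of the buffered changes in PG_TRY/PG_CATCH, say (only a
sketch; changes_fd and read_next_change() are hypothetical names, not
the patch's actual code):

    PG_TRY();
    {
        /* read and apply each of the buffered changes */
        while (read_next_change(changes_fd, &s2))  /* hypothetical reader */
            apply_dispatch(&s2);
    }
    PG_CATCH();
    {
        /* don't leak the changes file on error */
        if (changes_fd >= 0)
            CloseTransientFile(changes_fd);
        PG_RE_THROW();
    }
    PG_END_TRY();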

OTOH, the worker will exit on an error, so it might not matter; but
then why are we closing the file before the error in a few other
places?  I think on error these temporary files should be removed,
instead of relying on them getting removed the next time we receive
changes for the same transaction.  That, I believe, is what we do in
other cases where we use temporary files, like for sorts or hash joins.

Also, what if the size of the changes file overflows the OS file size
limit?  If we agree that the above are problems, then do you think we
should explore using the BufFile interface (see
storage/file/buffile.c) to avoid all such problems?
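
To illustrate why I think BufFile would help: its temp files are
segmented at 1GB, so the OS file-size limit doesn't apply to the
logical file, and they are tracked by resource owners, so they get
cleaned up automatically on error.  A rough sketch, assuming the
existing buffile.c API (whether interXact needs to be true depends on
whether the file must outlive a transaction):

    #include "storage/buffile.h"

    BufFile    *changes_file;

    /* temp file; automatically cleaned up, even on ERROR */
    changes_file = BufFileCreateTemp(false);

    /* writes are split into 1GB segments transparently */
    if (BufFileWrite(changes_file, data, len) != len)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not write to temporary file")));

    BufFileClose(changes_file);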

2.
apply_handle_stream_abort()
{
..
+ /* discard the subxacts added later */
+ nsubxacts = subidx;
+
+ /* write the updated subxact list */
+ subxact_info_write(MyLogicalRepWorker->subid, xid);
..
}

Here, if nsubxacts becomes zero, subxact_info_write() will still
create a new file and write the checksum.  I think
subxact_info_write() should check for nsubxacts > 0 before writing to
the file.
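
Something along these lines, perhaps (a sketch only; the surrounding
code is elided and the error handling is illustrative):

    static void
    subxact_info_write(Oid subid, TransactionId xid)
    {
        ...

        /*
         * If there are no subxacts left, remove the file instead of
         * writing an empty one containing just the checksum.
         */
        if (nsubxacts == 0)
        {
            if (unlink(path) != 0 && errno != ENOENT)
                ereport(ERROR,
                        (errcode_for_file_access(),
                         errmsg("could not remove file \"%s\": %m", path)));
            return;
        }

        ...
    }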

3.
apply_handle_stream_commit(StringInfo s)
{
..
+ /*
+ * send feedback to upstream
+ *
+ * XXX Probably should send a valid LSN. But which one?
+ */
+ send_feedback(InvalidXLogRecPtr, false, false);
..
}

Why do we need to send feedback at this stage, after applying each
message?  In the non-streamed case, we never call send_feedback()
after each message, so, following that, I don't see the need to send
it here; but if you see a specific reason, then do let me know.  And
if we do have to send feedback, then we need to decide on the
appropriate values as well.
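
For example, if we do need it, presumably it should pass the end LSN
of the remote commit record, something like (commit_data being the
LogicalRepCommitData parsed from the stream-commit message; just a
sketch):

    /* use the end LSN of the remote commit as the reported position */
    send_feedback(commit_data.end_lsn, false, false);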

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


