Re: [HACKERS] Error while copying a large file in pg_rewind

From: Kuntal Ghosh
Subject: Re: [HACKERS] Error while copying a large file in pg_rewind
Date:
Msg-id CAGz5QCK0GRuChjfpi3bn2tGPZ4L_-8o2meRgQF9z86Zaq7xHbA@mail.gmail.com
In reply to: Re: [HACKERS] Error while copying a large file in pg_rewind  (Michael Paquier <michael.paquier@gmail.com>)
List: pgsql-hackers
On Tue, Jul 4, 2017 at 4:12 AM, Michael Paquier
<michael.paquier@gmail.com> wrote:
> On Tue, Jul 4, 2017 at 4:27 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
>>> On 7/3/17 09:53, Tom Lane wrote:
>>>> Hm.  Before we add a bunch of code to deal with that, are we sure we
>>>> *want* it to copy such files?  Seems like that's expending a lot of
>>>> data-transfer work for zero added value --- consider e.g. a server
>>>> with a bunch of old core files laying about in $PGDATA.  Given that
>>>> it's already excluded all database-data-containing files, maybe we
>>>> should just set a cap on the plausible size of auxiliary files.
>>
>>> It seems kind of lame to fail on large files these days, even if they
>>> are not often useful in the particular case.
>>
>> True.  But copying useless data is also lame.
>
> We don't want to complicate pg_rewind code with filtering
> capabilities, so if the fix is simple I think that we should include
> it and be done. That will avoid future complications as well.
>
Yeah, I agree. In the above case, it's a core dump file, so copying
it to the master seems to be of no use. But even if we added some
filtering capabilities, it would be difficult to decide which files
to skip and which to copy.
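
For what it's worth, here is a rough standalone sketch of the size-cap
idea Tom floated upthread. The helper name and the cap value are
invented purely for illustration and are not part of pg_rewind:

    #include <stdbool.h>
    #include <stdint.h>

    /* Arbitrary cap for illustration only; not an actual pg_rewind setting. */
    #define AUX_FILE_SIZE_CAP   ((int64_t) 1024 * 1024 * 1024)     /* 1GB */

    /*
     * Hypothetical helper: decide whether a non-relation file in $PGDATA
     * is worth transferring.  Anything above the cap (e.g. a stray core
     * dump) would be skipped.
     */
    static bool
    should_copy_aux_file(int64_t filesize)
    {
        return filesize <= AUX_FILE_SIZE_CAP;
    }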

>>> Also, most of the segment and file sizes are configurable, and we have
>>> had reports of people venturing into much larger file sizes.
>>
>> But if I understand the context correctly, we're not transferring relation
>> data files this way anyway.  If we do transfer WAL files this way, we
>> could make sure to set the cutoff larger than the WAL segment size.
>
> WAL segments are not transferred. Only the WAL data of the target
> data folder is gone through to find all the blocks that have been
> touched from the last checkpoint before WAL forked.
>
> Now, I think that this is broken for relation files higher than 2GB,
> see fetch_file_range where the begin location is an int32.
> --
Okay. So, if the relation file size differs by 2GB or more between
the source and target directories, we have a problem.
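
To make that concrete, here is a tiny standalone illustration (not the
actual fetch_file_range code) of why a 32-bit begin location breaks at
the 2GB boundary; the fix would be to carry the offset in a 64-bit type:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* First byte at the 2GB mark: one past what a signed 32-bit int can hold. */
        int64_t begin = INT64_C(2) * 1024 * 1024 * 1024;

        if (begin > INT32_MAX)
            printf("offset %" PRId64 " does not fit in int32 (max %" PRId32 ")\n",
                   begin, INT32_MAX);
        return 0;
    }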



-- 
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com


