Re: [HACKERS] Error while copying a large file in pg_rewind

From: Michael Paquier
Subject: Re: [HACKERS] Error while copying a large file in pg_rewind
Date:
Msg-id: CAB7nPqS+v-XZAY-9sTKztp6jMBEPCqry_HR5ffG6H8gt5aBz_A@mail.gmail.com
In reply to: Re: [HACKERS] Error while copying a large file in pg_rewind  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: [HACKERS] Error while copying a large file in pg_rewind  (Kuntal Ghosh <kuntalghosh.2007@gmail.com>)
List: pgsql-hackers
On Tue, Jul 4, 2017 at 4:27 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
>> On 7/3/17 09:53, Tom Lane wrote:
>>> Hm.  Before we add a bunch of code to deal with that, are we sure we
>>> *want* it to copy such files?  Seems like that's expending a lot of
>>> data-transfer work for zero added value --- consider e.g. a server
>>> with a bunch of old core files laying about in $PGDATA.  Given that
>>> it's already excluded all database-data-containing files, maybe we
>>> should just set a cap on the plausible size of auxiliary files.
>
>> It seems kind of lame to fail on large files these days, even if they
>> are not often useful in the particular case.
>
> True.  But copying useless data is also lame.

We don't want to complicate the pg_rewind code with filtering
capabilities, so if the fix is simple I think that we should include
it and be done. That will avoid future complications as well.

>> Also, most of the segment and file sizes are configurable, and we have
>> had reports of people venturing into much larger file sizes.
>
> But if I understand the context correctly, we're not transferring relation
> data files this way anyway.  If we do transfer WAL files this way, we
> could make sure to set the cutoff larger than the WAL segment size.

WAL segments are not transferred. Only the WAL data of the target
data folder is scanned to find all the blocks that have been
touched since the last checkpoint before WAL forked.

Now, I think that this is broken for relation files larger than 2GB;
see fetch_file_range, where the begin location is an int32.
-- 
Michael


