Re: BUG #16159: recovery requests WALs for the next timelines before timeline switch LSN has been reached

From: Kyotaro Horiguchi
Subject: Re: BUG #16159: recovery requests WALs for the next timelines before timeline switch LSN has been reached
Date:
Msg-id: 20191212.124856.202839248708529678.horikyota.ntt@gmail.com
In reply to: BUG #16159: recovery requests WALs for the next timelines before timeline switch LSN has been reached (PG Bug reporting form <noreply@postgresql.org>)
Responses: Re: BUG #16159: recovery requests WALs for the next timelines before timeline switch LSN has been reached (Pavel Suderevsky <psuderevsky@gmail.com>)
List: pgsql-bugs

Hello.

In short, it is not a bug.

At Wed, 11 Dec 2019 12:39:20 +0000, PG Bug reporting form <noreply@postgresql.org> wrote in 
> The following bug has been logged on the website:
> 
> Bug reference:      16159
> Logged by:          Pavel Suderevsky
> Email address:      psuderevsky@gmail.com
> PostgreSQL version: 11.6
> Operating system:   CentOS 7.6.1810 (3.10.0-957.el7.x86_64)
> Description:        
> 
> Reproduced 11.2,11.6.
> 
> If PostgreSQL starts recovery and finds a history file for a timeline that
> is higher than current one, it will request file with the segment for the
> future timeline (that most likely doesn't exist yet) and only then it will
> request file with the segment for current timeline.

The reason for the "future" timeline requests is that the standby has
received the history file for TLI=22 but has not yet finished replaying
up to the first checkpoint after the promotion. In that case, WAL
segments from before the timeline switch are not expected to exist for
TLI=22, and PostgreSQL verifies that by probing the archive for the
file.

Since a standby always starts archive recovery from the REDO location
of the last checkpoint performed on the standby (that is, the last
restart point), the amount of WAL to read does not depend on the
promotion.

> If archive is located on remote storage it can take huge time to find that
> segments for the future timelines are not exist yet and therefore recovery
> can take too long.

I don't think that probing for non-existent remote files takes an
amount of time comparable to transferring a 16MB segment. If the
problem is the amount of WAL that has to be transferred during
recovery, I came up with three ways to make standby startup faster.

1. For planned shutdowns/restarts, make sure that the latest restart
 point is close enough to the replay location on the standby before
 shutting it down. If it is not, a manual checkpoint on the master
 followed by one on the standby would help. The functions
 pg_control_checkpoint() and pg_last_wal_replay_lsn() can be used to
 check that condition.
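
 As an illustration, something like the following on the standby shows
 how far replay has advanced past the REDO location of the last restart
 point (what gap is acceptable is up to the operator):

    -- On the standby: WAL bytes between the last restart point's REDO
    -- location and the current replay position.
    SELECT pg_wal_lsn_diff(pg_last_wal_replay_lsn(),
                           (pg_control_checkpoint()).redo_lsn)
           AS bytes_since_restartpoint;

    -- If the gap is large: run CHECKPOINT on the master, wait for it to
    -- be replayed, then run CHECKPOINT on the standby (which performs a
    -- restartpoint) before shutting the standby down.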

2. PostgreSQL 11 accepts "always" for the archive_mode GUC setting. It
 enables standby-side archiving.

https://www.postgresql.org/docs/11/runtime-config-wal.html#GUC-ARCHIVE-MODE
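
 For example, the standby's postgresql.conf might then contain something
 like the following (the archive directory is only a placeholder):

    # Archive WAL on the standby as well, so a later restart can restore
    # segments from a local archive instead of the remote one.
    archive_mode = always
    archive_command = 'cp %p /path/to/standby_archive/%f'  # placeholder path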

3. Decrease max_wal_size or checkpoint_timeout on the master, and/or
 decrease checkpoint_timeout on the standby. This reduces the amount of
 WAL that must be replayed, and thus the time needed, during recovery.
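
 For example (the values below are only illustrative; choose them for
 your workload):

    # Master: checkpoint more often so less WAL has to be replayed at
    # the next recovery.
    max_wal_size = 512MB        # default is 1GB
    checkpoint_timeout = 3min   # default is 5min

    # Standby: restartpoints are also paced by checkpoint_timeout, so a
    # smaller value keeps the last restart point closer to the replay
    # position.
    checkpoint_timeout = 3min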

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center


