Re: BUG #16159: recovery requests WALs for the next timelines before timeline switch LSN has been reached

From: Pavel Suderevsky
Subject: Re: BUG #16159: recovery requests WALs for the next timelines before timeline switch LSN has been reached
Date:
Msg-id: CAEBTBzsi8HrTTSC1xwL0otzrwdefRSLuwdQ7LnAs+jQeN+EJ8A@mail.gmail.com
In response to: Re: BUG #16159: recovery requests WALs for the next timelines before timeline switch LSN has been reached  (Kyotaro Horiguchi <horikyota.ntt@gmail.com>)
List: pgsql-bugs
Hi,

Thank you very much for your explanation, and sorry for the delay in
answering.
But it still seems to me that PostgreSQL has enough information to
determine that no WAL segments exist yet for the new timeline, and so
it could skip searching for all of the possibly-existing WAL segments.
0000005300004AE1000000A3
rmgr: Standby     len (rec/tot):     62/    62, tx:          0, lsn: 4AE1/A3000028, prev 4AE1/A28EC5F8, desc: RUNNING_XACTS nextXid 82249762 latestCompletedXid 82249758 oldestRunningXid 82249759; 1 xacts: 82249759
rmgr: XLOG        len (rec/tot):    106/   106, tx:          0, lsn: 4AE1/A3000068, prev 4AE1/A3000028, desc: CHECKPOINT_SHUTDOWN redo 4AE1/A3000068; tli 83; prev tli 83; fpw true; xid 0:82249762; oid 1074976; multi 144; offset 4568; oldest xid 562 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown
rmgr: XLOG        len (rec/tot):     24/    24, tx:          0, lsn: 4AE1/A30000D8, prev 4AE1/A3000068, desc: SWITCH
0000005400004AE1000000A4
rmgr: XLOG        len (rec/tot):    106/   106, tx:          0, lsn: 4AE1/A4000028, prev 4AE1/A30000D8, desc: CHECKPOINT_SHUTDOWN redo 4AE1/A4000028; tli 83; prev tli 83; fpw true; xid 0:82249762; oid 1074976; multi 144; offset 4568; oldest xid 562 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown
rmgr: XLOG        len (rec/tot):     42/    42, tx:          0, lsn: 4AE1/A4000098, prev 4AE1/A4000028, desc: END_OF_RECOVERY tli 84; prev tli 83; time 2020-01-28 06:29:03.432938 CST
00000054.history
83      4AE1/A4000098   no recovery target specified
It could simply look through the first received WAL segment of the new
timeline and confirm that the timeline switch occurred within it.
Finally, it could check the archive for the only previous WAL segment
that can possibly exist.
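
For illustration, the same check can be done by hand (a rough sketch; it
assumes pg_waldump is on PATH and the files shown above are in the
current directory):

# The history file already names the LSN where timeline 84 begins:
$ cat 00000054.history
83      4AE1/A4000098   no recovery target specified

# The first received segment of the new timeline contains the switch
# record right at that LSN, so no earlier TLI-84 segment can exist:
$ pg_waldump -s 4AE1/A4000098 -n 1 0000005400004AE1000000A4
rmgr: XLOG        len (rec/tot):     42/    42, tx:          0, lsn: 4AE1/A4000098, prev 4AE1/A4000028, desc: END_OF_RECOVERY tli 84; prev tli 83; time 2020-01-28 06:29:03.432938 CST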

Regarding the impact: the issue is not about the large amount of WAL to
apply, but about searching for non-existent WAL segments on the remote
storage; each such search can take 5-10 seconds, while fetching an
existing segment takes milliseconds.

Regards,
Pavel Suderevsky

On Thu, 12 Dec 2019 at 06:49, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:
Hello.

In short, it is not a bug.

At Wed, 11 Dec 2019 12:39:20 +0000, PG Bug reporting form <noreply@postgresql.org> wrote in
> The following bug has been logged on the website:
>
> Bug reference:      16159
> Logged by:          Pavel Suderevsky
> Email address:      psuderevsky@gmail.com
> PostgreSQL version: 11.6
> Operating system:   CentOS 7.6.1810 (3.10.0-957.el7.x86_64)
> Description:       
>
> Reproduced 11.2,11.6.
>
> If PostgreSQL starts recovery and finds a history file for a timeline that
> is higher than the current one, it will request the file with the segment
> for the future timeline (which most likely doesn't exist yet) and only then
> request the file with the segment for the current timeline.

The cause of the "future" timeline is that the standby has received
the history file for TLI=22 but has not completed replaying the first
checkpoint after promotion. In that case, WAL files before the
timeline switch should not exist for TLI=22, and PostgreSQL makes sure
of that by peeking into the archive for the file.

Since a standby always starts archive recovery from the REDO location
of the last checkpoint performed on the standby (i.e. the restartpoint),
the amount of WAL to read is unrelated to promotion.
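
For reference, the REDO location that recovery would restart from can be
checked on the standby; a minimal sketch, assuming the data directory is
in $PGDATA:

# On a standby, "Latest checkpoint's REDO location" in the control file
# is the REDO location of the latest restartpoint.
$ pg_controldata "$PGDATA" | grep "REDO location"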

> If the archive is located on remote storage, it can take a huge amount of
> time to find that segments for the future timelines do not exist yet, and
> therefore recovery can take too long.

I don't think that peeking at non-existent remote files takes an amount
of time comparable to a 16MB transfer. If the problem is the amount of
WAL to transfer during recovery, I came up with three ways to make
standby startup faster.

1. For operational shutdowns/restarts, make sure that the latest
 restart point is close enough to the replay location on the standby
 before shutting down. If not, a manual checkpoint on the master followed
 by one on the standby would help. The functions pg_control_checkpoint()
 and pg_last_wal_replay_lsn() would work for checking that condition
 (see the sketch after this list).

2. PostgreSQL 11 accepts "always" for the archive_mode GUC setting. It
 enables standby-side archiving.

https://www.postgresql.org/docs/11/runtime-config-wal.html#GUC-ARCHIVE-MODE

3. Decrease max_wal_size or checkpoint_timeout on the master, and/or
 decrease checkpoint_timeout on the standby.  This decreases the
 amount of time needed during recovery.
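
Put together, a rough sketch of the above (the values are placeholders
and psql is assumed to connect to the right server; note that changing
archive_mode only takes effect after a restart, while the point-3
settings are picked up on reload):

# Point 1: on the standby, compare the last restartpoint's REDO location
# with the current replay location; the difference is roughly the WAL
# that would have to be re-read after a restart.
$ psql -x -c "SELECT redo_lsn,
                     pg_last_wal_replay_lsn(),
                     pg_wal_lsn_diff(pg_last_wal_replay_lsn(), redo_lsn)
                       AS bytes_to_reread_on_restart
              FROM pg_control_checkpoint();"

# Point 2: standby-side archiving.
$ psql -c "ALTER SYSTEM SET archive_mode = 'always';"

# Point 3: more frequent checkpoints/restartpoints.
$ psql -c "ALTER SYSTEM SET checkpoint_timeout = '10min';"
$ psql -c "ALTER SYSTEM SET max_wal_size = '2GB';"
$ psql -c "SELECT pg_reload_conf();"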

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center
