Re: [HACKERS] Speedup twophase transactions

From: Stas Kelvich
Subject: Re: [HACKERS] Speedup twophase transactions
Date:
Msg-id: 6A5A4A3E-B1AF-4C1F-9DD3-6F553851A8CD@postgrespro.ru
In reply to: Re: [HACKERS] Speedup twophase transactions (Michael Paquier <michael.paquier@gmail.com>)
Responses: Re: [HACKERS] Speedup twophase transactions (Michael Paquier <michael.paquier@gmail.com>)
List: pgsql-hackers

On 22 Dec 2016, at 05:35, Michael Paquier <michael.paquier@gmail.com> wrote:

True. The more spread out the checkpoints and 2PC files are, the greater the
risk of requiring disk access. Memory's cheap anyway. What was the system
memory? How many checkpoints did you trigger, for how many 2PC files
created?

Standard config with increased shared_buffers. I think the most significant
impact on recovery speed here comes from the client side, namely the time
between prepare and commit. Right now I'm using a pgbench script that issues
a commit right after the prepare. It's also possible to put a sleep between
prepare and commit and increase the number of connections to thousands. That
would probably be the worst case: the majority of prepared transactions would
be moved to files.
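Such a driver might look like the following sketch (the actual script Stas
used is not shown; file name, transaction counts, and the pgbench flags here
are illustrative):

```shell
# Write a hypothetical pgbench custom script that commits right after
# PREPARE TRANSACTION. Requires max_prepared_transactions > 0 on the server.
cat > twophase.sql <<'EOF'
\set aid random(1, 100000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
PREPARE TRANSACTION 'p:client_id';
COMMIT PREPARED 'p:client_id';
EOF
# Run it, e.g.:  pgbench -n -f twophase.sql -c 100 -j 4 -T 60
# For the worst case described above, add a \sleep between PREPARE and
# COMMIT PREPARED and raise -c into the thousands, so most prepared
# transactions survive a checkpoint and spill to 2PC state files.
```

Using `:client_id` as the GID keeps identifiers unique across concurrent
clients, since a GID only has to be unique among currently prepared
transactions.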

Perhaps it would be a good idea to look for the 2PC files
from WAL records in a specific order. Did you try to use
dlist_push_head instead of dlist_push_tail? This may make a difference
on systems where WAL segments don't fit in system cache as the latest
files generated would be looked at first for 2PC data.

Ouch! Good catch. I didn't actually notice that the list is populated in the
opposite order with respect to traversal. I'll fix that.

On 27 Dec 2016, at 08:33, Michael Paquier <michael.paquier@gmail.com> wrote:

Stas, have you also tested the impact on recovery time when WAL segments
are very likely evicted from the OS cache? This is a plausible scenario if
a standby instance is heavily used for read-only transactions (say
pgbench -S) and the data volume is larger than the amount of RAM
available. It would not be complicated to test: just drop_caches before
beginning recovery. The maximum number of 2PC transactions that need
access to past WAL segments is linearly related to the volume of WAL
between two checkpoints, so max_wal_size does not really matter; what
matters is the time it takes to recover the same amount of WAL. Increasing
max_wal_size would give more room to reduce the overall noise between two
measurements, though.

Okay, I'll perform that testing.
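The cold-cache measurement Michael describes could be sketched as follows
(assumes root for drop_caches and a data directory with WAL to replay; the
paths and flags are illustrative, not a prescribed procedure):

```shell
# Simulate a crash so the next start must replay WAL from the last checkpoint.
pg_ctl -D "$PGDATA" stop -m immediate

# Flush dirty pages, then evict the page cache so WAL segment reads hit disk.
sync
echo 3 > /proc/sys/vm/drop_caches

# With -w, pg_ctl blocks until the server accepts connections, so this
# roughly times the recovery phase under a cold cache.
time pg_ctl -D "$PGDATA" -w start
```

Repeating the run without the drop_caches step would give the warm-cache
baseline to compare against.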

-- 
Stas Kelvich
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
