Re: Add progressive backoff to XactLockTableWait functions
From: Xuneng Zhou
Subject: Re: Add progressive backoff to XactLockTableWait functions
Date:
Msg-id: CABPTF7Wbp7MRPGsqd9NA4GbcSzUcNz1ymgWfir=Yf+N0oDRbjA@mail.gmail.com
In reply to: Re: Add progressive backoff to XactLockTableWait functions (Xuneng Zhou <xunengzhou@gmail.com>)
Responses: Re: Add progressive backoff to XactLockTableWait functions
List: pgsql-hackers
Hi all,

I spent some extra time walking the code to see where XactLockTableWait() actually fires. A condensed recap:

1) Current call paths

A. Logical walsender (XLogSendLogical → ... → SnapBuildWaitSnapshot) on a cascading standby

B. SQL slot functions:
   - pg_logical_slot_get_changes / pg_logical_slot_peek_changes
   - create_logical_replication_slot
   - pg_sync_replication_slots
   - pg_replication_slot_advance
   - binary_upgrade_logical_slot_has_caught_up

2) How many backends and XIDs in practice

A. Logical walsenders on a cascading standby: one per replication connection, capped by max_wal_senders (default 10); hubs might run 10-40.

B. Logical slot creation is infrequent and bounded by max_replication_slots (default 10); the other functions are not called that often either.

C. Wait pattern: SnapBuildWaitSnapshot() waits for one XID at a time during a snapshot build.

So, under today's workloads, both the number of XIDs waited on and the number of concurrent waiters stay modest.

3) Future growth

Some features could multiply the number of concurrent waiters, but I don't have enough knowledge to predict those shapes.

Feedback welcome.

Best,
Xuneng