Re: [PING] [PATCH v2] parallel pg_restore: avoid disk seeks when jumping short distance forward
From: Dimitrios Apostolou
Subject: Re: [PING] [PATCH v2] parallel pg_restore: avoid disk seeks when jumping short distance forward
Date:
Msg-id: n212516n-7962-9060-7oo5-rpn814q82p4r@tzk.arg
In reply to: Re: [PING] [PATCH v2] parallel pg_restore: avoid disk seeks when jumping short distance forward (Nathan Bossart <nathandbossart@gmail.com>)
List: pgsql-hackers
Hi Nathan,

I've noticed you've set yourself as a reviewer of this patch in the commitfest. I appreciate it, but you might want to combine it with another simple patch [1] that speeds up the same part of pg_restore: the initial full scan on TOC-less archives.

[1] https://commitfest.postgresql.org/patch/5817/

On Saturday 2025-06-14 00:04, Nathan Bossart wrote:
>
> On Fri, Jun 13, 2025 at 01:00:26AM +0200, Dimitrios Apostolou wrote:
>> By the way, I might have set the threshold to 1MB in my program, but
>> lowering it won't show a difference in my test case, since the lseek()s I
>> was noticing before the patch were mostly 8-16KB forward. Not sure what is
>> the defining factor for that. Maybe the compression algorithm, or how wide
>> the table is?
>
> I may have missed it, but could you share what the strace looks like with
> the patch applied?

I hope you've seen my response here, with special focus on the small block size that both compressed and uncompressed custom-format archives have.

I have been needing to pg_restore 10TB TOC-less dumps recently, and the full scan is a pain even with both of my patches applied. Maybe the block size could be a command-line option of pg_dump, so that one could set it to sizes like 100MB, which sounds like a normal block size from the perspective of a gigantic 10TB dump.

Regards,
Dimitris
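For readers following the thread, below is a minimal sketch of the "read and discard instead of lseek() for short forward jumps" idea being discussed. It is not the actual pg_restore patch; skip_forward() and SEEK_THRESHOLD are hypothetical names, and the 1MB cutoff and 8-16KB jump sizes are taken from the quoted discussion above.

/*
 * Hypothetical sketch only, NOT the pg_restore implementation.
 * For short forward jumps (e.g. the 8-16KB gaps mentioned above) it can be
 * cheaper to read and throw away the bytes, preserving sequential access
 * and kernel read-ahead, than to issue an lseek(); long jumps still seek.
 */
#include <sys/types.h>
#include <unistd.h>

#define SEEK_THRESHOLD (1024 * 1024)    /* assumed 1MB cutoff, as in the thread */

int
skip_forward(int fd, off_t distance)
{
    char    buf[8192];

    if (distance >= SEEK_THRESHOLD)
        return lseek(fd, distance, SEEK_CUR) < 0 ? -1 : 0;

    /* Short jump: consume the bytes sequentially and discard them. */
    while (distance > 0)
    {
        ssize_t     n = read(fd, buf,
                             distance < (off_t) sizeof(buf) ?
                             (size_t) distance : sizeof(buf));

        if (n <= 0)
            return -1;          /* error or unexpected EOF */
        distance -= n;
    }
    return 0;
}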