Re: parallel pg_restore blocks on heavy random read I/O on all children processes
From:        Tom Lane
Subject:     Re: parallel pg_restore blocks on heavy random read I/O on all children processes
Date:
Msg-id:      1095774.1742498237@sss.pgh.pa.us
In reply to: parallel pg_restore blocks on heavy random read I/O on all children processes (Dimitrios Apostolou <jimis@gmx.net>)
Responses:   Re: parallel pg_restore blocks on heavy random read I/O on all children processes
List:        pgsql-performance
Dimitrios Apostolou <jimis@gmx.net> writes:
> I noticed the weird behaviour that doing a pg_restore of a huge database
> dump, leads to constant read I/O (at about 15K IOPS from the NVMe drive
> that has the dump file) for about one hour. I believe it happens with
> any -j value >= 2.

> In particular, I get output like the following in the pg_restore log, only
> a few seconds after running it:

> pg_restore: launching item 12110 TABLE DATA yyy
> pg_restore: processing data for table "public.yyy"
> [ long pause ...]
> pg_restore: finished item 12110 TABLE DATA yyy

I am betting that the problem is that the dump's TOC (table of contents)
lacks offsets to the actual data of the database objects, and thus the
readers have to reconstruct that information by scanning the dump file.
Normally, pg_dump will back-fill offset data in the TOC at completion of
the dump, but if it's told to write to an un-seekable output file then
it cannot do that.

> And here is the pg_dump command which has created the dump file, executed
> on PostgreSQL 16.

> pg_dump -v --format=custom --compress=zstd --no-toast-compression $DBNAME | $send_to_remote_storage

Yup, writing the output to a pipe would cause that ...

> What do you think causes this? Is it something that can be improved?

I don't see an easy way, and certainly no way that wouldn't involve
redefining the archive format.  Can you write the dump to a local file
rather than piping it immediately?

			regards, tom lane
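A minimal sketch of the suggested workaround, assuming there is enough
local disk for the dump; the local path, the -j value, and the transfer
step are placeholders, not taken from the thread:

    # Dump to a seekable local file so pg_dump can back-fill the TOC
    # data offsets when the dump completes.
    pg_dump -v --format=custom --compress=zstd --no-toast-compression \
        --file=/local/dump.pgdump "$DBNAME"

    # Ship the finished file to remote storage afterwards
    # (placeholder transfer step).
    $send_to_remote_storage < /local/dump.pgdump

    # With offsets present in the TOC, each parallel worker can seek
    # straight to its table's data instead of scanning the archive.
    pg_restore -j 4 -d "$DBNAME" /local/dump.pgdump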