Tom,
Thanks for the response, but I figured out the error is mine, not pg_dump's. In short (to minimize my embarrassment!):
don't write to the same file from three different pg_dumps.
The good news is that running multiple pg_dumps simultaneously on a single database, each covering an exclusive set of
tables, works great, and my overall dump time has been reduced to one-fifth of what a single pg_dump takes.
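For the archives, a minimal sketch of the partitioning scheme described above. The database name (mydb) and table name (events) are hypothetical; the script only builds and prints the command lines rather than connecting to a server, but the pattern is the one that worked: each pg_dump covers a disjoint set of tables and writes to its own file, and each archive is verified independently afterwards.

```shell
#!/bin/sh
# Sketch: split one database's dump across concurrent pg_dump runs,
# each writing its OWN archive file. "mydb" and "events" are
# hypothetical names -- substitute your own.
DB=mydb

# One dump takes just the big table; the other excludes it, so the
# two cover the database with no overlap:
dump_events="pg_dump -Fc --table=events --file=events.dump $DB"
dump_rest="pg_dump -Fc --exclude-table=events --file=rest.dump $DB"

# To run them in parallel, launch both in the background and wait
# for all writers to finish before verifying:
#   $dump_events &
#   $dump_rest &
#   wait
#   pg_restore events.dump >/dev/null   # sanity-check each archive

echo "$dump_events"
echo "$dump_rest"
```

The key point from the thread: the parallelism itself is safe, and the corruption only appears when two dumps are pointed at the same output file.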
BTW, I'm using PG 8.4.1, going to 8.4.8 soon, and it's working great. Thanks to all for the excellent database
software.
Regards,
Bob Lunney
________________________________
From: Tom Lane <tgl@sss.pgh.pa.us>
To: Bob Lunney <bob_lunney@yahoo.com>
Cc: "pgsql-admin@postgresql.org" <pgsql-admin@postgresql.org>
Sent: Friday, July 1, 2011 2:09 PM
Subject: Re: [ADMIN] Parallel pg_dump on a single database
Bob Lunney <bob_lunney@yahoo.com> writes:
> Is it possible (or smart!) to run multiple pg_dumps simultaneously on a single database, dumping different parts of
> the database to different files by using table and schema exclusion? I'm attempting this and sometimes it works and
> sometimes when I check the dump files with
> pg_restore -Fc <dumpfile> > /dev/null
> I get
> pg_restore: [custom archiver] found unexpected block ID (4) when reading data -- expected 4238
That sure sounds like a bug. What PG version are you using exactly?
Can you provide a more specific description of what you're doing,
so somebody else could reproduce this?
regards, tom lane