Sergio Gabriel Rodriguez <sgrodriguez@gmail.com> writes:
> On Thu, Oct 11, 2012 at 7:16 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> It's pretty hard to say without knowing a lot more info about your system
>> than you provided. One thing that would shed some light is if you spent
>> some time finding out where the time is going --- is the system
>> constantly I/O busy, or is it CPU-bound, and if so in which process,
>> pg_dump or the connected backend?
> the greatest amount of time is spent in I/O wait.
In that case there's not going to be a whole lot you can do about it,
probably. Or at least nothing that's very practical --- I assume "buy
faster disks" isn't a helpful answer.
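For reference, a rough way to keep watching where the time goes (just a
sketch, assuming the Python psutil package, and that you look up the
pg_dump and backend PIDs yourself, e.g. from ps or pg_stat_activity):

import sys
import time
import psutil

# hypothetical: pass the pg_dump PID and the connected backend PID as arguments
pg_dump_pid, backend_pid = int(sys.argv[1]), int(sys.argv[2])
procs = {name: psutil.Process(pid)
         for name, pid in (("pg_dump", pg_dump_pid), ("backend", backend_pid))}

for p in procs.values():
    p.cpu_percent(None)          # prime the counters; the first call returns 0.0

prev_io = psutil.disk_io_counters()
while True:
    time.sleep(5)
    io = psutil.disk_io_counters()
    read_mb = (io.read_bytes - prev_io.read_bytes) / 1e6
    write_mb = (io.write_bytes - prev_io.write_bytes) / 1e6
    prev_io = io
    cpu = {name: p.cpu_percent(None) for name, p in procs.items()}
    # low CPU on both processes plus steady disk traffic = I/O-bound
    print(f"disk read {read_mb:7.1f} MB / write {write_mb:7.1f} MB in last 5s; "
          f"pg_dump CPU {cpu['pg_dump']:5.1f}%, backend CPU {cpu['backend']:5.1f}%")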
If the blobs are relatively static, it's conceivable that clustering
pg_largeobject would help, but you're probably not going to want to take
down your database for as long as that would take --- and the potential
gains are unclear anyway.
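If you did want to experiment with that anyway, it would amount to
something like the sketch below (untested here; assumes superuser, a
made-up connection string, and the stock pg_largeobject_loid_pn_index
index; it holds an exclusive lock on pg_largeobject for the whole
rewrite, which is the downtime I'm talking about):

import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")   # hypothetical DSN
conn.autocommit = True          # issue it as a single standalone statement
with conn.cursor() as cur:
    # physically reorder the blob pages by (loid, pageno) so a dump can
    # read each large object more or less sequentially
    cur.execute("CLUSTER pg_largeobject USING pg_largeobject_loid_pn_index")
conn.close()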
> I had never used oprofile before, but a few hours into the process I was
> able to capture this report:
> 1202449 56.5535 sortDumpableObjects
Hm. I suspect a lot of that has to do with the large objects; and it's
really overkill to treat them as full-fledged objects since they never
have unique dependencies. This wasn't a problem when commit
c0d5be5d6a736d2ee8141e920bc3de8e001bf6d9 went in, but I think now it
might be because of the additional constraints added in commit
a1ef01fe163b304760088e3e30eb22036910a495. I wonder if it's time to try
to optimize pg_dump's handling of blobs a bit better. But still, any
such fix probably wouldn't make a huge difference for you. Most of the
time is going into pushing the blob data around, I think.
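As a quick way to see how much of the sort input and of the raw data
volume is blobs, something along these lines would do (a sketch only;
pg_largeobject_metadata exists on 9.0 and later, and the connection
string is made up):

import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")   # hypothetical DSN
with conn.cursor() as cur:
    # one metadata row per large object; each one is a separate dumpable
    # object that sortDumpableObjects has to deal with
    cur.execute("SELECT count(*) FROM pg_largeobject_metadata")
    n_blobs = cur.fetchone()[0]
    # total on-disk size of the blob data pg_dump has to push around
    cur.execute("SELECT pg_total_relation_size('pg_largeobject')")
    blob_bytes = cur.fetchone()[0]
print(f"{n_blobs} large objects, {blob_bytes / 1024**3:.1f} GB in pg_largeobject")
conn.close()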
regards, tom lane