30.01.2013, 18:47, "Jeff Janes" <jeff.janes@gmail.com>:
> On Tue, Jan 29, 2013 at 9:23 PM, Groshev Andrey <greenx@yandex.ru> wrote:
>
>> Hello!
>> I am upgrading our databases to version 9.1.
>> Today I ran into a new challenge.
>> The database itself is not very big (60 GB), but it has ~164,000 tables across 1260 schemas.
>
> Are the schemas all about the same size (in number of tables)?
Yes, about 130 tables, 120 triggers, and 700 functions each.
Only a few schemas differ from that pattern.
> There are several places in the code which are quadratic in the number
> of tables being dumped and restored. There have been many fixes
> recently in 9.2 and especially (future) 9.3, but unfortunately you
> have to upgrade in order to get those benefits.
>
>> I tried pg_upgrade, pg_dumpall, and pg_dump.
>> But they all run for a very, very long time.
>> So long that I ran out of patience.
>> And pg_dump ran for almost a day, then failed with "out of memory".
>
> Was it exactly that, or was it about shared memory for holding locks?
> ...
Unfortunately, I can't say anything more about the memory problem; that post was from my colleague.
In my own attempt I did not wait for it to fail... though I did wait 17 hours.
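For what it's worth, the shared-memory failure Jeff alludes to is usually the lock table filling up: pg_dump takes an AccessShareLock on every table it dumps, so with ~164,000 tables the lock table must be sized accordingly. A hedged postgresql.conf sketch (the value is illustrative, not a recommendation):

```
# postgresql.conf -- illustrative value only
# Lock table capacity is roughly:
#   max_locks_per_transaction * (max_connections + max_prepared_transactions)
# The default of 64 is far too small for a single transaction
# locking ~164,000 tables. Changing it requires a server restart.
max_locks_per_transaction = 1024
```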
>> The only idea that came up was to dump each schema separately.
>> But is that the right approach?
>
> Unfortunately, that might be your best option to get around the
> quadratic behavior.
Yes, a few lines of shell script.
That is how this problem was solved here before me, too.
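The per-schema shell-script approach can be sketched roughly as below. This is a minimal sketch, not the script actually in use here: the database name, output directory, and the `dump_each_schema` helper are all hypothetical, and `DRY_RUN` only exists so the loop can be exercised without a live server.

```shell
#!/bin/sh
# Sketch: dump each schema in its own pg_dump run to work around the
# per-table quadratic behavior of a single whole-database dump.
# DB and OUTDIR are illustrative defaults; override via the environment.
DB="${DB:-mydb}"
OUTDIR="${OUTDIR:-dumps}"

# Reads schema names from stdin, one per line, and runs (or, with
# DRY_RUN=1, merely prints) one custom-format pg_dump per schema.
dump_each_schema() {
    while IFS= read -r s; do
        cmd="pg_dump -Fc -n $s -f $OUTDIR/$s.dump $DB"
        if [ "${DRY_RUN:-0}" = 1 ]; then
            printf '%s\n' "$cmd"
        else
            mkdir -p "$OUTDIR"
            $cmd || return 1
        fi
    done
}
```

It could be fed the schema list with something like `psql -At -d "$DB" -c "SELECT nspname FROM pg_namespace WHERE nspname NOT LIKE 'pg\\_%' AND nspname <> 'information_schema'" | dump_each_schema`; the resulting per-schema archives are then restored with pg_restore.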
> You would probably want to use the pg_dump from 9.2, as there are
> improvements in that version of pg_dump to speed up partial dumps.
> You can use pg_dump from 9.2 against server 9.0 and still get the
> improvements. But that means you should be upgrading to 9.2 rather
> than 9.1. (Which you should probably do anyway unless you have a
> specific reason not to.)
>
> Cheers,
>
> Jeff
I will think about that now. Should I try 9.0 -> 9.2 -> 9.1?
Thank you!