pg_restore >10million large objects
| From | Mike Williams |
|---|---|
| Subject | pg_restore >10million large objects |
| Date | |
| Msg-id | 8431385.HKVqUUWTfq@mahdell |
| Responses | Re: pg_restore >10million large objects |
| List | pgsql-admin |
Hi all,
There have been some questions about pg_dump and huge numbers of large objects
recently. I have a query about the opposite.
How can a restore of a database with a lot of large objects be made to run faster?
My database has a relatively piddling 13 million large objects, so dumping it
isn't a problem.
Restoring it is a problem, though.
This is for a migration from 8.4 to 9.3. The dump is taken using pg_dump from
9.3.
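For context, the commands look roughly like this (host, database, and file names are placeholders; I'm using custom format since pg_restore --jobs needs a custom- or directory-format archive):

```sh
# Dump the old 8.4 database using the 9.3 client tools.
# -Fc = custom format, required (or -Fd) for pg_restore --jobs.
pg_dump -Fc -h old-84-host mydb -f mydb.dump

# Restore into the new 9.3 cluster with 8 parallel jobs.
pg_restore --jobs=8 -d mydb mydb.dump
```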
I've run a test on a significantly smaller system: ~4 GB overall, with 1.1
million large objects. It took 2 hours, give or take, though the server it's
on isn't especially fast.
It seems that each "SELECT pg_catalog.lo_create('xxxxx');" is run
independently and sequentially, despite having --jobs=8 specified.
Is there any magic incantation, or animal sacrifice, I can make to get those
lo_create() calls to run in parallel?
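One workaround I'm considering is splitting the archive's TOC and running several pg_restore processes by hand. This is an untested sketch: file and database names are placeholders, and it assumes the TOC lists one "BLOB" entry per large object (the lo_create) plus a single "BLOBS" entry for the data, as 9.3-era archives do:

```sh
# Extract the TOC and split the BLOB entries into 8 lists.
pg_restore -l mydb.dump > toc.list
grep ' BLOB ' toc.list > blob.list
grep -v ' BLOB ' toc.list > rest.list
split -n l/8 blob.list blobchunk.    # -n l/8 is GNU coreutils

# Run the lo_create() batches in parallel, one pg_restore per chunk.
for f in blobchunk.*; do
    pg_restore -L "$f" -d mydb mydb.dump &
done
wait

# Restore everything else; table data can still use --jobs.
pg_restore -L rest.list --jobs=8 -d mydb mydb.dump
```

But that feels like it shouldn't be necessary, so I'd be glad to hear of something built in.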
Our 9.3 production servers have 12 cores (plus HT) and SSDs, so can do many
queries at the same time.
Thanks
--
Mike Williams