Re: Large Database Restore

From: Jim C. Nasby
Subject: Re: Large Database Restore
Date:
Msg-id: 20070517160417.GH14548@nasby.net
In response to: Large Database Restore  (Lee Keel <lee.keel@uai.com>)
Responses: Re: Large Database Restore  (Ron Johnson <ron.l.johnson@cox.net>)
List: pgsql-general
On Thu, May 17, 2007 at 08:19:08AM -0500, Lee Keel wrote:
> I am restoring a 51GB backup file that has been running for almost 26 hours.
> There have been no errors and things are still working.  I have turned fsync
> off, but that still did not speed things up.  Can anyone provide me with the
> optimal settings for restoring a large database?

Well, fsync=off is the biggest one. The others to look at:

maintenance_work_mem = 1GB (I generally have problems setting it over
1GB, and I'm not sure it would add much benefit)
checkpoint_timeout = 1 hour
checkpoint_segments = huge (you want to reduce the frequency of
checkpoints... ideally to no more than one every 20 minutes or so; see
the sample snippet below)
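Putting those together, a restore-time postgresql.conf would look
something like this (128 segments is just a guess; size it to however
much WAL you generate in 20+ minutes):

    fsync = off                   # restore only -- never leave this off in production
    maintenance_work_mem = 1GB    # speeds up CREATE INDEX and FK validation
    checkpoint_timeout = 1h
    checkpoint_segments = 128     # 128 x 16MB segments = a checkpoint every ~2GB of WAL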

Finally, if you've got a multi-CPU machine, you might want to build all
the indexes and table constraints as a separate step, and run them
through perl or something so that you'll utilize more than just one CPU.
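A rough, untested sketch of that in perl (the file name, database name,
and the naive semicolon splitting are all placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read the index/constraint SQL one semicolon-terminated statement at a time.
    my @stmts;
    {
        local $/ = ';';
        open my $fh, '<', 'indexes.sql' or die "indexes.sql: $!";
        @stmts = grep { /\S/ } <$fh>;
    }

    my $max_kids = 4;    # roughly one per CPU
    my $kids = 0;
    for my $sql (@stmts) {
        if ($kids >= $max_kids) { wait; $kids--; }
        defined(my $pid = fork) or die "fork: $!";
        if ($pid == 0) {
            # Each child runs one CREATE INDEX / ALTER TABLE in its own backend.
            exec 'psql', '-d', 'mydb', '-c', $sql or die "exec: $!";
        }
        $kids++;
    }
    1 while wait() != -1;    # reap the stragglers

Keep in mind that ALTER TABLE takes an exclusive lock, so constraints
on the same table will serialize behind each other no matter how many
workers you run.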

Of course the downside to all of this is that it would mean starting
over from scratch.

Ultimately though, once your database gets past a certain size, you
really want to be using PITR and not pg_dump as your main recovery
strategy.
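For reference, the archiving half of PITR is just one setting plus a
base backup; roughly this (the archive path is made up):

    # postgresql.conf
    archive_command = 'cp %p /mnt/server/archive/%f'

    -- then take the base backup:
    SELECT pg_start_backup('base');
    -- tar up the data directory while the database keeps running
    SELECT pg_stop_backup();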
--
Jim Nasby                                      decibel@decibel.org
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
