Re: Having trouble with backups (was: Re: Crash Recovery)
From | Rod Taylor |
---|---|
Subject | Re: Having trouble with backups (was: Re: Crash Recovery) |
Date | |
Msg-id | 1043424535.58142.65.camel@jester |
In reply to | Having trouble with backups (was: Re: Crash Recovery)  (Carlos Moreno <moreno@mochima.com>) |
List | pgsql-performance |
On Fri, 2003-01-24 at 10:16, Carlos Moreno wrote:
> Speaking about daily backups... We are running into some serious
> trouble with our backup policy.
>
> First (and less important), the size of our backups is increasing
> a lot; yet information is not changing, only being added; so, the
> obvious question: is there a way to make incremental backup?

Incremental backups are coming. Some folks at RedHat are working on
finishing a PIT implementation, and with any luck 7.4 will do what you
want.

For the time being you might be able to cheat. If you're not touching
the old data, it should come out in roughly the same order every time.
You might be able to get away with doing a diff between the new backup
and an older one, and simply storing that. When restoring, you'll need
to patch together the proper restore file (a rough sketch of that
approach follows at the end of this message).

> And the second (and intriguing) problem: whenever I run pg_dump,
> my system *freezes* until pg_dump finishes. When I say "system",

No, this isn't normal -- nor do I believe it. The only explanation
would be a hardware or operating system limitation. I.e. with heavy
disk usage it used to be possible to peg the CPU -- making everything
else CPU starved -- but the advent of DMA drives put an end to that.

A pg_dump is not resource friendly, simply due to the quantity of
information it's dealing with. Are you dumping across a network?
Perhaps the NIC is maxed out.

--
Rod Taylor <rbt@rbt.ca>

PGP Key: http://www.rbt.ca/rbtpub.asc
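For what it's worth, here is a minimal sketch of that diff-and-patch
idea. It assumes plain-text pg_dump output, GNU diff and patch on the
PATH, and the database and file names (mydb, baseline.sql, and so on)
are made up for illustration, not anything from the original post.

```python
#!/usr/bin/env python3
# Rough sketch of the "diff against a baseline dump" trick described above.
# Assumptions: plain-text pg_dump output, GNU diff/patch available,
# and hypothetical names (mydb, baseline.sql) chosen for the example.
import shutil
import subprocess
from datetime import date

DB = "mydb"                       # hypothetical database name
BASELINE = "baseline.sql"         # full dump taken once, kept around
TODAY_DUMP = "today.sql"          # scratch file for the fresh dump
DELTA = f"delta-{date.today()}.diff"

def take_incremental() -> None:
    # pg_dump still reads everything; only the stored file shrinks.
    with open(TODAY_DUMP, "w") as out:
        subprocess.run(["pg_dump", DB], stdout=out, check=True)
    # Store only the difference from the baseline dump.
    with open(DELTA, "w") as out:
        # diff exits 1 when the files differ, which is the expected case.
        rc = subprocess.run(["diff", "-u", BASELINE, TODAY_DUMP],
                            stdout=out).returncode
        if rc > 1:
            raise RuntimeError("diff failed")

def rebuild_restore_file(delta_path: str, target: str = "restore.sql") -> None:
    # Patch a copy of the baseline to get a dump you can feed to psql.
    shutil.copyfile(BASELINE, target)
    subprocess.run(["patch", target, delta_path], check=True)

if __name__ == "__main__":
    take_incremental()
```

Whether this actually saves space depends on how stable the dump
ordering really is; pg_dump makes no ordering guarantee, so check that
the deltas stay small before relying on the approach.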