Re: Linux ready for high-volume databases?

From Greg Stark
Subject Re: Linux ready for high-volume databases?
Date
Msg-id 87smnnn2em.fsf@stark.dyndns.tv
In reply to Re: Linux ready for high-volume databases?  (Dennis Gearon <gearond@fireserve.net>)
Responses Re: Linux ready for high-volume databases?  (Ron Johnson <ron.l.johnson@cox.net>)
Re: Linux ready for high-volume databases?  (Andrew Sullivan <andrew@libertyrms.info>)
List pgsql-general
Dennis Gearon <gearond@fireserve.net> writes:

> With the low cost of disks, it might be a good idea to just copy to disks, that
> one can put back in.

Uh, sure, using hardware RAID 1 and breaking one set of drives out of the
mirror to perform the backup is an old trick. And for small databases, backups
are easy that way. Just store a few dozen copies of the pg_dump output on your
live disks for local backups and burn CD-Rs for offsite backups.

But when you have hundreds of gigabytes of data and you want to be able to
keep multiple snapshots of your database both on-site and off-site... No, you
can't just buy another hard drive and call it a business continuity plan.

As it turns out, my current project will be quite small. I may well be adopting
the first approach. I'm thinking of taking a pg_dump regularly (nightly, if I
can get away with doing it that infrequently), keeping the past n dumps, and
burning a CD with those dumps.
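A rotation like that could be sketched as a nightly cron script along these
lines (the database name, backup directory, and retention count are all
illustrative assumptions, not something from this thread):

```shell
#!/bin/sh
# Sketch of a nightly pg_dump rotation: dump, then keep only the
# newest $KEEP dumps. DB, BACKUP_DIR, and KEEP are hypothetical.
DB=mydb
BACKUP_DIR=${BACKUP_DIR:-/tmp/pg_backups}
KEEP=7   # keep the past 7 nightly dumps

mkdir -p "$BACKUP_DIR"
STAMP=$(date +%Y%m%d%H%M%S)

# Take tonight's dump (guarded so the sketch degrades gracefully
# on a machine without PostgreSQL installed).
if command -v pg_dump >/dev/null 2>&1; then
    pg_dump "$DB" | gzip > "$BACKUP_DIR/$DB-$STAMP.sql.gz"
fi

# Prune everything but the newest $KEEP dumps.
ls -1t "$BACKUP_DIR/$DB"-*.sql.gz 2>/dev/null \
    | tail -n +$((KEEP + 1)) \
    | xargs -r rm -f
```

The dumps left in the directory can then be burned to CD-R with whatever
burning tool is at hand for the offsite copy.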

This doesn't provide what online backups do: recovery to the minute of the
crash. And I get nervous having only logical pg_dump output and no backups of
the actual blocks on disk. But is that what everybody does?

--
greg

In pgsql-general by date:

Previous
From: Greg Stark
Date:
Message: Re: move to usenet?
Next
From: "Shridhar Daithankar"
Date:
Message: Re: Linux ready for high-volume databases?