Replication for a large database

From: Michael A Nachbaur
Subject: Replication for a large database
Date:
Msg-id: 200305050952.33688.mike@nachbaur.com
List: pgsql-sql
Hello all,

I apologize if this has already been covered in the past, but I couldn't seem 
to find an adequate solution to my problem in the archives.

I have a database that is used for a bandwidth tracking system at a broadband
ISP.  To make a long story short, I'm inserting over 800,000 records per day
into this database.  Suffice it to say, the uptime of this database is of
paramount importance, so I would like a more up-to-date backup copy of my
database in the event of a failure (more recent than my twice-per-day
db_dump backup).

I have two servers, both dual 2GHz Xeons with 4GB of RAM, and would like to
replicate between the two.  Ideally I'd like "live" replication, but I
couldn't find a solution for that for PostgreSQL.  I tried RServ, but after
attempting it I came across a mailing list posting saying that it is
more-or-less useless for databases with a large number of inserts (like
mine).

When I perform a replication pass after a batch of data is inserted, the
query runs literally for hours before it returns.  I have never actually
been present for an entire replication run, since it takes longer than my
8-12 hour days here at work.
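
My (possibly wrong) understanding of why it behaves this way: trigger-based
tools in the RServ family append one row to a change-log table for every
insert, and each sync pass then has to pull all of the pending rows back out
and apply them to the slave, so the pass grows with the insert volume.
Roughly this pattern, sketched in Python against a made-up schema (the table
and column names below are illustrative, not RServ's real ones):

import psycopg2

# Connect to the master; the DSN is a placeholder.
src = psycopg2.connect("dbname=bandwidth host=master")
cur = src.cursor()

# A trigger on the tracked table writes one row here per insert, so the
# log grows exactly as fast as the data itself (~800,000 rows per batch).
cur.execute("""
    SELECT t.*
      FROM traffic_log t
      JOIN _change_log c ON c.row_id = t.id
     WHERE c.synced = false
""")
pending = cur.fetchall()

# ... push the pending rows to the slave, then mark them as synced ...
cur.execute("UPDATE _change_log SET synced = true WHERE synced = false")
src.commit()
cur.close()
src.close()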

Is there any replication solution that would fit my needs?  I'm taking
advantage of some PostgreSQL 7.2 features, so "downgrading" to the 6.x
version of postgres that has replication support isn't an option.
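
In case it helps frame the question: because the tables are essentially
insert-only, the crude fallback I can imagine is shipping new rows in
batches from the master to the second server, keyed on the SERIAL primary
key, so each pass only moves what the standby hasn't seen yet.  A rough
sketch (the table, columns, and DSNs are placeholders, not my real schema):

import psycopg2

PRIMARY_DSN = "dbname=bandwidth host=master"    # placeholder DSNs
STANDBY_DSN = "dbname=bandwidth host=standby"

def ship_new_rows(batch_size=10000):
    """Copy rows the standby has not seen yet, keyed on a SERIAL id."""
    src = psycopg2.connect(PRIMARY_DSN)
    dst = psycopg2.connect(STANDBY_DSN)
    try:
        dcur = dst.cursor()
        # Highest id already present on the standby.
        dcur.execute("SELECT COALESCE(MAX(id), 0) FROM traffic_log")
        last_id = dcur.fetchone()[0]

        scur = src.cursor()
        scur.execute(
            "SELECT id, account_id, bytes_in, bytes_out, logged_at"
            "  FROM traffic_log WHERE id > %s ORDER BY id LIMIT %s",
            (last_id, batch_size))
        rows = scur.fetchall()
        if rows:
            dcur.executemany(
                "INSERT INTO traffic_log"
                " (id, account_id, bytes_in, bytes_out, logged_at)"
                " VALUES (%s, %s, %s, %s, %s)", rows)
            dst.commit()
        return len(rows)
    finally:
        src.close()
        dst.close()

# Run from cron every few minutes; loop until a pass ships nothing.
if __name__ == "__main__":
    while ship_new_rows() > 0:
        pass

That only covers inserts on one table, though, which is why I'd rather find
a proper replication tool.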

Thanks.

--man


