I'm in the process of writing this functionality in a Perl script. E-mail
me if you are interested in helping me develop/debug these tools. They are
rather beta at this time.
Features/Limits:
One Master - Many Slaves
Optional Bi-directional replication (synchronization)
Clients can be written for other platforms (e.g. I have one for MS-Access)
No support for Referential Integrity at this time.
Knowledge of Perl required.
--rob
----------
From: Mirko Zeibig [SMTP:mirko@picard.inka.de]
Sent: Wednesday, December 20, 2000 6:18 PM
To: Postgres Mailing List
Subject: Best way to replicate a DB between two servers (master/slave)
Hello everybody,
I know there was an announcement on www.postgresql.com that a replication
mechanism for PostgreSQL is planned for sometime in the future.
Now the problem:
I have two servers: one provides content for a website (using PHP), and on
another one users edit the contents. I now have to update the content-server
on a regular basis with the changes made on the editing-server. I thought of
dumping the whole database through ssh to a new database on the
content-server, then dropping the old one and renaming the new one.
I estimate the content at around 5 MB, so with a 5 Mbit leased line,
network traffic should be no problem.
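The dump-and-swap idea can be sketched as a small command pipeline. A minimal
sketch, with hypothetical host and database names ("editing-server",
"content-server", a database called "cms"); the function only assembles the
shell commands so you can review them before running them (e.g. via
subprocess or a cron job):

```python
def dump_and_swap_commands(db="cms", edit_host="editing-server",
                           live_host="content-server"):
    """Assemble shell commands for a full dump-and-swap refresh.

    All names are hypothetical placeholders; adjust for your setup.
    Actually executing the commands is left to the caller.
    """
    new_db = f"{db}_new"
    old_db = f"{db}_old"
    return [
        # Load a fresh copy next to the live database.
        f"ssh {live_host} createdb {new_db}",
        f"pg_dump {db} | ssh {live_host} psql {new_db}",
        # Swap: the rename only succeeds once no client is still
        # connected to the live database.
        f'ssh {live_host} psql -c "ALTER DATABASE {db} RENAME TO {old_db}"',
        f'ssh {live_host} psql -c "ALTER DATABASE {new_db} RENAME TO {db}"',
        f"ssh {live_host} dropdb {old_db}",
    ]
```

The rename step is exactly where lingering postgres processes bite: a
database cannot be renamed or dropped while sessions are attached to it.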
I can see I will run into problems when an old postgres process is still
connected to the database. Alternatively, I thought of creating a
modification timestamp for every recordset involved and pushing only the
modified sets to the content-server. I already have triggers running that
provide information about updated/inserted recordsets. But what about
deleted ones? I guess it would be best to collect information about these in
a separate table and delete the corresponding rows on the content-server
based on that table.
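The incremental scheme just described (a modification timestamp per
recordset, plus a separate table of deleted ids) can be sketched in a few
lines. This is a pure-Python illustration with in-memory dicts standing in
for the tables; in the real setup the changed rows would come from a query
like "SELECT ... WHERE mtime > last_sync" and the deleted ids from the
tracking table filled by a DELETE trigger:

```python
def incremental_sync(master, slave, deleted_ids, last_sync):
    """Push changes newer than last_sync from master to slave.

    master/slave: dict mapping record id -> (mtime, data), standing in
    for the editing-server and content-server tables.
    deleted_ids: ids collected in the separate deletions table
    (populated by a DELETE trigger on the editing-server).
    Returns the timestamp to remember for the next run.
    """
    newest = last_sync
    for rid, (mtime, data) in master.items():
        if mtime > last_sync:      # updated or inserted since last run
            slave[rid] = (mtime, data)
        if mtime > newest:
            newest = mtime
    for rid in deleted_ids:        # replay deletions; afterwards the
        slave.pop(rid, None)       # tracking table can be cleared
    return newest
```

Each run stores the returned timestamp and clears the deletions table, so
only rows touched since the last run cross the wire instead of the full
5 MB dump.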
Does anyone know of a more sensible way to achieve replication?
Best Regards
Mirko