Discussion: Best way to replicate a DB between two servers (master/slave)

Best way to replicate a DB between two servers (master/slave)

From: Mirko Zeibig
Date:
Hello everybody,
I know there was an announcement on www.postgresql.com that sometime in the
future there will be some sort of replication mechanism for PostgreSQL.

Now the problem:

I have two servers, one providing content for a website (using PHP), and
another one where users are editing the contents. I now have to update the
content server on a regular basis with the changes made on the editing
server. I thought of dumping the whole database through ssh to a new
database on the content server, then dropping the old one and renaming the
new one. I estimate the content at around 5 MB, so with a 5 Mbit leased
line, network traffic should be no problem.
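
Something like the following Python sketch is what I have in mind; host and
database names are only placeholders, and the final rename assumes a
PostgreSQL where ALTER DATABASE ... RENAME TO is available:

#!/usr/bin/env python3
"""Rough sketch of the dump-and-swap idea: dump the editing DB, load it
into a fresh database on the content server over ssh, then swap names.
Host/database names are placeholders; assumes passwordless ssh and a
PostgreSQL version that supports ALTER DATABASE ... RENAME TO."""
import subprocess

EDIT_DB = "cms_edit"                  # database on the editing server (placeholder)
CONTENT_HOST = "content.example.com"  # content server, reachable via ssh (placeholder)
LIVE_DB = "cms_live"                  # database the website reads from (placeholder)
STAGING_DB = "cms_staging"            # freshly loaded copy, swapped in at the end

def run(cmd, **kw):
    # Run a command and fail loudly if it exits non-zero.
    subprocess.run(cmd, check=True, **kw)

# 1. Create an empty staging database on the content server.
run(["ssh", CONTENT_HOST, "createdb", STAGING_DB])

# 2. Stream a plain-SQL dump of the editing DB straight into the staging DB.
dump = subprocess.Popen(["pg_dump", EDIT_DB], stdout=subprocess.PIPE)
run(["ssh", CONTENT_HOST, "psql", "-q", STAGING_DB], stdin=dump.stdout)
dump.stdout.close()
if dump.wait() != 0:
    raise RuntimeError("pg_dump failed")

# 3. Swap: drop the old live DB and rename the staging DB into its place.
#    DROP DATABASE fails while clients are still connected -- which is
#    exactly the problem described below.
run(["ssh", CONTENT_HOST, "psql", "-d", "postgres", "-c",
     f"DROP DATABASE {LIVE_DB};"])
run(["ssh", CONTENT_HOST, "psql", "-d", "postgres", "-c",
     f"ALTER DATABASE {STAGING_DB} RENAME TO {LIVE_DB};"])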

I can see that I will run into problems when an old postgres process is still
connected to the database. Alternatively, I thought of creating a
modification timestamp for every recordset involved and pushing only the
modified sets to the content server. I already have triggers running that
provide information about updated/inserted recordsets. But what about
deleted ones? I guess it would be best to collect information about these in
a separate table and delete the corresponding rows on the content server
based on this table.
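
The deletion-tracking part might look roughly like this (run once against the
editing-server database; table and column names are just placeholders, and
the PL/pgSQL syntax is that of current PostgreSQL):

# Sketch of the deletion-log idea: a trigger records the primary key of every
# deleted row, so the content server can later replay the deletions.
# Table/column names are placeholders; requires psycopg2 and a PostgreSQL
# version that understands the PL/pgSQL syntax below.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS deleted_rows (
    table_name  text        NOT NULL,
    row_id      integer     NOT NULL,
    deleted_at  timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_deletion() RETURNS trigger AS $$
BEGIN
    -- OLD is the row being deleted; "id" is the assumed primary key column.
    INSERT INTO deleted_rows (table_name, row_id)
    VALUES (TG_TABLE_NAME, OLD.id);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS articles_log_deletion ON articles;
CREATE TRIGGER articles_log_deletion
    AFTER DELETE ON articles
    FOR EACH ROW EXECUTE FUNCTION log_deletion();
"""

with psycopg2.connect(dbname="cms_edit") as conn:  # editing-server DB (placeholder)
    with conn.cursor() as cur:
        cur.execute(DDL)

The job on the content server would then replay DELETEs driven by
deleted_rows and clear the log afterwards.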

Does anyone know of a more sensible way to get replication?

Best Regards
Mirko

Re: Best way to replicate a DB between two servers (master/slave)

From: Sandeep Joshi
Date:
Hi,
   Is there any way of turning off triggers dynamically?
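
Something like the following is what I mean; on current PostgreSQL releases
there is ALTER TABLE ... DISABLE TRIGGER, but I do not know about older
versions (the table name is just a placeholder):

# Minimal sketch: disable a table's user triggers for a bulk load, then
# re-enable them. ALTER TABLE ... DISABLE TRIGGER exists on current
# PostgreSQL releases; the table name is a placeholder and the statements
# need table-owner privileges.
import psycopg2

with psycopg2.connect(dbname="cms_live") as conn:
    with conn.cursor() as cur:
        cur.execute("ALTER TABLE articles DISABLE TRIGGER USER;")
        # ... bulk load / replication step would go here ...
        cur.execute("ALTER TABLE articles ENABLE TRIGGER USER;")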

Sandeep


> Hello everybody,
> I know there was an announcement on www.postgresql.com that sometime in the
> future there will be some sort of replication mechanism for PostgreSQL.
>
> Now the problem:
>
> I have two servers, one providing content for a website (using PHP), and
> another one where users are editing the contents. I now have to update the
> content server on a regular basis with the changes made on the editing
> server. I thought of dumping the whole database through ssh to a new
> database on the content server, then dropping the old one and renaming the
> new one. I estimate the content at around 5 MB, so with a 5 Mbit leased
> line, network traffic should be no problem.
>
> I can see that I will run into problems when an old postgres process is still
> connected to the database. Alternatively, I thought of creating a
> modification timestamp for every recordset involved and pushing only the
> modified sets to the content server. I already have triggers running that
> provide information about updated/inserted recordsets. But what about
> deleted ones? I guess it would be best to collect information about these in
> a separate table and delete the corresponding rows on the content server
> based on this table.
>
> Does anyone know of a more sensible way to get replication?
>
> Best Regards
> Mirko

RE: Best way to replicate a DB between two servers (master/slave)

From: "Rob Arnold"
Date:
I'm in the process of writing this functionality in a perl script.  E-mail
me if you are interested in helping me develop/debug these tools.  They are
rather beta at this time.

Features/Limits:

One Master - Many Slaves

Optional Bi-directional replication (synchronization)

A client can be written for another platform (e.g. I have one for MS Access)

No support for Referential Integrity at this time.

Knowledge of Perl required.

--rob


----------
From:  Mirko Zeibig [SMTP:mirko@picard.inka.de]
Sent:  Wednesday, December 20, 2000 6:18 PM
To:  Postgres Mailing List
Subject:  Best way to replicate a DB between two servers (master/slave)

Hello everybody,
I know there was an announcement on www.postgresql.com that sometime in the
future there will be some sort of replication mechanism for PostgreSQL.

Now the problem:

I have two servers, one providing content for a website (using PHP), and
another one where users are editing the contents. I now have to update the
content server on a regular basis with the changes made on the editing
server. I thought of dumping the whole database through ssh to a new
database on the content server, then dropping the old one and renaming the
new one. I estimate the content at around 5 MB, so with a 5 Mbit leased
line, network traffic should be no problem.

I can see that I will run into problems when an old postgres process is still
connected to the database. Alternatively, I thought of creating a
modification timestamp for every recordset involved and pushing only the
modified sets to the content server. I already have triggers running that
provide information about updated/inserted recordsets. But what about
deleted ones? I guess it would be best to collect information about these in
a separate table and delete the corresponding rows on the content server
based on this table.

Does anyone know of a more sensible way to get replication?

Best Regards
Mirko


Re: Best way to replicate a DB between two servers (master/slave)

From: Martijn van Oosterhout
Date:
Mirko Zeibig wrote:
>
> Hello everybody,
> I know there was an announcement on www.postgresql.com that sometime in the
> future there will be some sort of replication mechanism for PostgreSQL.
>
> Now the problem:
>
> I have two servers, one providing content for a website (using PHP), and
> another one where users are editing the contents. I now have to update the
> content server on a regular basis with the changes made on the editing
> server. I thought of dumping the whole database through ssh to a new
> database on the content server, then dropping the old one and renaming the
> new one. I estimate the content at around 5 MB, so with a 5 Mbit leased
> line, network traffic should be no problem.

We do that here too. We have an internal server and an external server
and we update the external server regularly.

So we have a script called copytoexternal which does:

1. Dumps the required tables to a tar.gz (not all tables are copied out)
2. Copies the tar.gz to the external server
3. Unpacks the tar.gz into a new database (on the same server) called "temp"
4. Vacuums the new temp DB
5. Then quickly, in one step:
  a. Stops the external DB server
  b. Swaps the database directories under $PGDATA
  c. Restarts the postmaster
6. Deletes the old, now-replaced external DB
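
For illustration, a rough Python sketch of such a script follows. It keeps
steps 1-4 and 6, but replaces the directory swap in step 5 with database
renames (ALTER DATABASE ... RENAME TO, which needs a reasonably recent
PostgreSQL); all hosts, tables and database names are placeholders.

#!/usr/bin/env python3
# Sketch of a copytoexternal-style script: dump selected tables, copy the
# archive to the external server, restore into a scratch DB, vacuum, then
# cut over. Step 5's directory swap is replaced by database renames here;
# hosts, tables and database names are placeholders.
import subprocess

TABLES = ["articles", "images", "categories"]  # tables to publish (placeholder)
INTERNAL_DB = "cms_internal"
EXTERNAL_HOST = "www.example.com"
EXTERNAL_DB = "cms_external"
ARCHIVE = "/tmp/cms_export.dump"

def run(cmd, **kw):
    subprocess.run(cmd, check=True, **kw)

# 1. Dump only the required tables (custom format is already compressed).
dump_cmd = ["pg_dump", "-Fc", "-f", ARCHIVE]
for t in TABLES:
    dump_cmd += ["-t", t]
run(dump_cmd + [INTERNAL_DB])

# 2. Copy the archive to the external server.
run(["scp", ARCHIVE, f"{EXTERNAL_HOST}:{ARCHIVE}"])

# 3. Restore it into a scratch database called "temp".
run(["ssh", EXTERNAL_HOST, "createdb", "temp"])
run(["ssh", EXTERNAL_HOST, "pg_restore", "-d", "temp", ARCHIVE])

# 4. Vacuum/analyze the freshly loaded DB so the planner has statistics.
run(["ssh", EXTERNAL_HOST, "vacuumdb", "--analyze", "temp"])

# 5. Cut over: rename the live DB out of the way and "temp" into its place.
#    (The renames fail while sessions are still connected to the live DB.)
run(["ssh", EXTERNAL_HOST, "psql", "-d", "postgres", "-c",
     f"ALTER DATABASE {EXTERNAL_DB} RENAME TO {EXTERNAL_DB}_old;"])
run(["ssh", EXTERNAL_HOST, "psql", "-d", "postgres", "-c",
     f"ALTER DATABASE temp RENAME TO {EXTERNAL_DB};"])

# 6. Drop the old, now-replaced external DB.
run(["ssh", EXTERNAL_HOST, "dropdb", f"{EXTERNAL_DB}_old"])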

The whole process takes about 15 minutes, but the external DB is only down
for 10 seconds or so. You can run it at any time, really.

We've found this quite reliable.

Hope this helps...
--
Martijn van Oosterhout <kleptog@cupid.suninternet.com>
http://cupid.suninternet.com/~kleptog/