RE: [GENERAL] Replication of databases (esp. postgres)

From: Thomas Antepoth
Subject: RE: [GENERAL] Replication of databases (esp. postgres)
Date:
Msg-id: Pine.LNX.3.96.990218060951.1950A-100000@ns.c-c.de
In reply to: RE: [GENERAL] Replication of databases (esp. postgres) (Michael Davis <michael.davis@prevuenet.com>)
List: pgsql-general
Michael,

On Wed, 17 Feb 1999, Michael Davis wrote:

> I was thinking about creating a trigger on every table that would write an
> insert, update, or delete statement into a log file.  Each replicated
> database would pro-actively read these and post them.

Right. This is one of the prerequisites for avoiding a timestamp-oriented
replication mechanism, which is known to be vulnerable to the lost-update
phenomenon (e.g. "newnews" on news servers).

But even if you do that, you have to keep the relations between the
tables intact by creating a truly unique record id for each record
in each database.

As Dustin pointed out, he solved it by merging a unique database
id with a sequence into a unique primary key.
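Dustin's composite-key idea can be sketched roughly as follows. This is a minimal illustration, not his actual implementation; the names `db_id` and `next_id` are assumptions made for the example:

```python
import itertools

class IdGenerator:
    """Generates record ids unique across databases by pairing a
    per-database id with a local, monotonically increasing sequence.
    A hypothetical sketch of the composite-primary-key scheme."""

    def __init__(self, db_id: int):
        self.db_id = db_id
        self._seq = itertools.count(1)

    def next_id(self) -> tuple:
        # The pair (db_id, seq) is unique as long as db_id is unique
        # per database and seq never repeats locally.
        return (self.db_id, next(self._seq))

gen1 = IdGenerator(db_id=1)
gen2 = IdGenerator(db_id=2)
# Both generators start their local sequence at 1, yet the composite
# keys never collide because the db_id component differs.
assert gen1.next_id() == (1, 1)
assert gen2.next_id() == (2, 1)
```

In Postgres the sequence half would naturally come from a `SEQUENCE` object per database; the sketch only shows why the pair is collision-free between databases.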

This solution is only a partial one, because of the case where
database 2 inserts a record into a relation initially created
by database 1.

If the newly inserted record gets the database id of database 2,
it will later not be recognized as belonging to the relation by a
"... where db_id=this_relations_host and recid=this_relations_sequence"
on every host.

If the newly inserted record gets the database id of database 1,
it will surely be recognized, but the sequence number of the
record may collide with a pre-existing record in database 1
when replicated.

Multiple updates to a record of the same relation on different,
disconnected databases are another problem not solved by
this mechanism.

Every solution I can imagine relies on a master, which
receives temporary data on inserts, creates unique record ids,
and re-exports the data to the slaves.

A slave creates, in its local database, a record id which is
known to be "not part of any database's permanent range",
e.g. all numbers below 100000 are temporary.

Then it exports this data to the master and deletes the temporary
data once the master has acknowledged the import.

The master creates unique record ids for all temporary data
and re-exports the data to all of its slaves.
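The master/slave exchange described above can be sketched like this. All class and field names here are illustrative assumptions; the protocol is only the one outlined in the text (temporary ids below a threshold, master assigns permanent ids, slave deletes temporaries on acknowledgement):

```python
TEMP_LIMIT = 100_000  # ids below this are temporary, per the convention above

class Master:
    """Assigns permanent, globally unique record ids (hypothetical sketch)."""

    def __init__(self, start: int = TEMP_LIMIT):
        self._next = start

    def assign_permanent_ids(self, temp_rows):
        """Map each temporary id to a fresh permanent id; return the
        mapping plus the rows as they will be re-exported to all slaves."""
        mapping, exported = {}, []
        for tmp_id, payload in temp_rows:
            assert tmp_id < TEMP_LIMIT, "slave sent a non-temporary id"
            perm_id = self._next
            self._next += 1
            mapping[tmp_id] = perm_id
            exported.append((perm_id, payload))
        return mapping, exported

class Slave:
    """Inserts rows under temporary ids, swaps them for permanent ones."""

    def __init__(self):
        self._next_tmp = 1
        self.rows = {}

    def insert(self, payload):
        tmp_id = self._next_tmp
        self._next_tmp += 1
        self.rows[tmp_id] = payload
        return tmp_id

    def apply_ack(self, mapping, exported):
        # Delete the temporary rows only after the master has
        # acknowledged the import, then adopt the permanent ids
        # from the master's re-export.
        for tmp_id in mapping:
            del self.rows[tmp_id]
        for perm_id, payload in exported:
            self.rows[perm_id] = payload

slave = Slave()
tmp = slave.insert("hello")            # stored locally under a temporary id
master = Master()
mapping, exported = master.assign_permanent_ids([(tmp, "hello")])
slave.apply_ack(mapping, exported)     # temporary id replaced by permanent id
```

The sketch makes the single-master assumption explicit: permanent ids come from one counter, which is exactly what breaks down in the "several local masters" question below.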

But what to do, if there are several local masters?

t++


