Re: Replication

From: Craig James
Subject: Re: Replication
Date:
Msg-id: 4671EF14.7000002@emolecules.com
In reply to: Re: Replication  (Andreas Kostyrka <andreas@kostyrka.org>)
Responses: Re: Replication
Re: Replication
List: pgsql-performance
Andreas Kostyrka wrote:
> Slony provides near instantaneous failovers (in the single digit seconds
>  range). You can script an automatic failover if the master server
> becomes unreachable.

But Slony slaves are read-only, correct?  So the system isn't fully functional once the master goes down.

> That leaves you the problem of restarting your app
> (or making it reconnect) to the new master.

Don't you have to run a Slony app to convert one of the slaves into the master?

> 5-10MB data implies such a fast initial replication, that making the
> server rejoin the cluster by setting it up from scratch is not an issue.

The problem is to PREVENT it from rejoining the cluster.  If you have some semi-automatic process that detects the dead
server and converts a slave to the master, and in the meantime the dead server manages to reboot itself (or its network
gets fixed, or whatever the problem was), then you have two masters sending out updates, and you're screwed.
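One way people guard against exactly this split-brain scenario is a cluster "generation" (or epoch) number that gets bumped on every failover, so a rebooted ex-master can tell it is stale.  A minimal sketch of the idea in Python (the names and numbers here are illustrative only; Slony itself provides no such mechanism):

```python
# Each failover increments a cluster-wide "generation" number stored
# somewhere the recovering node can read (shared config, coordinator,
# etc. -- hypothetical for this sketch).  A node coming back up
# compares its own last-known generation against the cluster's and
# refuses to act as master if it is behind.

cluster_generation = 2   # bumped when a slave was promoted
local_generation = 1     # the crashed master's view of the world

def may_accept_writes(local_gen: int, cluster_gen: int) -> bool:
    """A recovered node may only act as master if its generation
    matches the cluster's current one."""
    return local_gen == cluster_gen

# The rebooted ex-master sees it is one generation behind and
# demotes itself instead of sending out conflicting updates.
print(may_accept_writes(local_generation, cluster_generation))  # -> False
```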

>> The problem is, there don't seem to be any "vote a new master" type of
>> tools for Slony-I, and also, if the original master comes back online,
>> it has no way to know that a new master has been elected.  So I'd have
>> to write a bunch of SOAP services or something to do all of this.
>
> You don't need SOAP services, and you do not need to elect a new master.
> if dbX goes down, dbY takes over, you should be able to decide on a
> static takeover pattern easily enough.

I can't see how that is true.  Any self-healing distributed system needs something like the following:

  - A distributed system of nodes that check each other's health
  - A way to detect that a node is down and to transmit that
    information across the nodes
  - An election mechanism that nominates a new master if the
    master fails
  - A way for a node coming online to determine if it is a master
    or a slave
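
The four requirements above can be sketched as a heartbeat table plus a deterministic takeover rule.  This is a toy Python illustration under assumed names (NODES, HEARTBEAT_TIMEOUT), not anything Slony ships:

```python
import time

# Hypothetical cluster: every node pings the others and records the
# time of the last successful heartbeat.
NODES = ["db1", "db2", "db3"]   # static priority order for takeover
HEARTBEAT_TIMEOUT = 5           # seconds before a node counts as down

def elect_master(last_seen, now):
    """Return the highest-priority node that is still alive.

    `last_seen` maps node name -> timestamp of its last heartbeat.
    Because every surviving node runs the same deterministic rule
    over the same ordered list, they all agree on the new master
    without a separate voting round.
    """
    for node in NODES:  # NODES is ordered by takeover priority
        if now - last_seen.get(node, 0) < HEARTBEAT_TIMEOUT:
            return node
    return None  # no live node found

# Example: db1's heartbeat is 60s stale, so db2 takes over.
now = time.time()
last_seen = {"db1": now - 60, "db2": now - 1, "db3": now - 2}
print(elect_master(last_seen, now))  # -> db2
```

The hard parts a real system must add are distributing `last_seen` reliably (gossip or a coordinator) and fencing the old master, which the toy above does not attempt.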

Any solution less than this can cause corruption because you can have two nodes that both think they're master, or end
up with no master and no process for electing a master.  As far as I can tell, Slony doesn't do any of this.  Is there a
simpler solution?  I've never heard of one.

> The point here is, that the servers need to react to a problem, but you
> probably want to get the admin on duty to look at the situation as
> quickly as possible anyway.

No, our requirement is no administrator interaction.  We need instant, automatic recovery from failure so that the
system stays online.

> Furthermore, you need to checkout pgpool, I seem to remember that it has
> some bad habits in routing queries. (E.g. it wants to apply write
> queries to all nodes, but slony makes the other nodes readonly.
> Furthermore, anything inside a BEGIN is sent to the master node, which
> is bad with some ORMs, that by default wrap any access into a transaction)

I should have been more clear about this.  I was planning to use PGPool in the PGPool-1 mode (not the new PGPool-2
features that allow replication).  So it would only be acting as a failover mechanism.  Slony would be used as the
replication mechanism.

I don't think I can use PGPool as the replicator, because then it becomes a new single point of failure that could
bring the whole system down.  If you're using it for INSERT/UPDATE, then there can only be one PGPool server.

I was thinking I'd put a PGPool server on every machine in failover mode only.  It would have the Slony master as the
primary connection, and a Slony slave as the failover connection.  The applications would route all INSERT/UPDATE
statements directly to the Slony master, and all SELECT statements to the PGPool on localhost.  When the master failed,
all of the PGPool servers would automatically switch to one of the Slony slaves.

This way, the system would keep running on the Slony slaves (so it would be read-only), until a sysadmin could get the
master Slony back online.  And when the master came online, the PGPool servers would automatically reconnect and
write-access would be restored.
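
From the application's side, that routing boils down to picking a connection string by statement type.  A sketch in Python with placeholder hostnames and port (these DSNs are assumptions, not a real deployment):

```python
# Proposed routing: SELECTs go to the local PGPool (which fails over
# between Slony nodes); INSERT/UPDATE go straight to the Slony master.
WRITE_DSN = "host=slony-master dbname=app"          # hypothetical
READ_DSN = "host=localhost port=9999 dbname=app"    # local PGPool

def dsn_for(sql: str) -> str:
    """Pick a connection string based on the leading SQL verb."""
    verb = sql.lstrip().split(None, 1)[0].upper()
    return READ_DSN if verb == "SELECT" else WRITE_DSN

print(dsn_for("SELECT * FROM users"))       # read  -> local PGPool
print(dsn_for("UPDATE users SET x = 1"))    # write -> Slony master
```

A real router would also need to treat SELECTs inside a write transaction as writes, which is exactly the BEGIN-routing wrinkle mentioned above.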

Does this make sense?

Craig

In the pgsql-performance list, by date sent:

Previous
From: Andreas Kostyrka
Date:
Message: Re: Replication
Next
From: "Joshua D. Drake"
Date:
Message: Re: Replication