Best way to replicate to large number of nodes

From: Brian Peschel
Subject: Best way to replicate to large number of nodes
Date:
Msg-id: 4BCF62F6.7080107@occinc.com
Responses: [SPAM] Re: Best way to replicate to large number of nodes  (Ben Chobot <bench@silentmedia.com>)
Re: Best way to replicate to large number of nodes  ("Greg Sabino Mullane" <greg@turnstep.com>)
List: pgsql-general
I have a replication problem I am hoping someone has come across before
and can provide a few ideas.

I am looking at a configuration of one 'writable' node and anywhere from
10 to 300 'read-only' nodes.  Almost all of these nodes will be across a
WAN from the writable node (some over slow VPN links too).  I am looking
for a way to replicate as quickly as possible from the writable node to
all the read-only nodes.  I can pretty much guarantee the read-only
nodes will never become master nodes.  Also, the updates to the writable
node are bunched and occur at known times (ie the database is only
updated when I want it updated, not constantly), but when changes do
occur, there are a lot of them at once.

We have used Slony-I for other nodes.  But those are all 1-master,
2-slave configurations (where either slave could become the master).
Some of our admins are worried about trying to maintain a very large
cluster (ie schema changes).

I took a look at the wiki
(http://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling)
and nothing really jumped at me.  It sounded like pgpool or Mammoth
might be interesting, but I was hoping someone would have some opinions
before I randomly start trying stuff.
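For reference, a minimal sketch of what a one-master, many-read-only setup could look like with PostgreSQL's built-in streaming replication (available from 9.0); the host, user, and network values here are hypothetical placeholders, not anything from our setup:

```conf
# postgresql.conf on the writable (primary) node
wal_level = hot_standby        # ship enough WAL for read-only queries on standbys
max_wal_senders = 10           # one sender per directly attached standby
wal_keep_segments = 256        # retain WAL so slow WAN standbys can catch up

# pg_hba.conf on the primary: allow a (hypothetical) replication user
# host  replication  repuser  10.0.0.0/8  md5

# recovery.conf on each read-only node
# standby_mode = 'on'
# primary_conninfo = 'host=primary.example.com user=repuser password=secret'
```

With hundreds of nodes you would probably not want every standby pulling WAL straight from the primary over the WAN; fanning out through intermediate relay standbys (cascading replication, added in 9.2) or shipping WAL archives to a shared location would reduce the load on the writable node.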

Thanks in advance,
Brian

