Discussion: Replicating only a particular database - Londiste, or Bucardo
I have been using Slony-I for PG replication of a particular database in a cluster, while a second database is local and not replicated. This works fine for a small number of nodes. But as we move to scale - say around 30 nodes - we are seeing that Slony-I replication will not be able to take the load, I guess because it would now be maintaining a mess of 30 nodes. PG9 log-shipping replication is out of the question because it works on the entire DB set. So I looked at Londiste and Bucardo. Here are my requirements; I would like to know if either of the two would suit them well:

- nodes are added on the fly: it starts with one node, then some network admin comes and adds another node to form a Publisher-Subscriber pair, and then adds more nodes (or drops nodes) to have multiple Subscribers. It is master-slave replication
- promote a subscriber node to become the Publisher (Slony-I is slow here, as it has to figure out which node in the mess has the latest copy and then finish some replication)
- if a node has not replicated for some time (6 hours for Slony-I), it is dropped from the cluster

We have scripts for Add, Drop, Promote and Reset (resetting a single node when it fails / we want to join it back). So my scripts will have to be modified for the new replication model, but will I be able to achieve all of the above with Londiste or Bucardo? Or is there something better that somebody is already using with a model like this?

Thanks,
Lalit
On Wed, Mar 16, 2011 at 02:32:05PM +0530, lalit@avendasys.com wrote:

> I have been using Slony-I for PG replication of a particular
> database in a cluster, while a second database is local and
> not replicated. This works fine for small number of nodes.
> But as we are moving to scale - say around 30 nodes, we are
> seeing that Slony-I replication will not be able take the load -
> guess since now it would be maintaining a mess of 30 nodes.

Yes. Part of the problem with Slony and that number of nodes is all the cross-node communication needed. You can remove all of the paths except the direct master->slave ones, but then you can't use Slony for things like failover, IIRC.

> PG9 logshipping replication is out of question because it works
> on entire DB set. So I looked at Londiste, Bucardo. Here are my
> requirements I would like to know if either of the two would suit well
>
> - nodes are added on the fly, so first it starts with one node and
> then some network admin comes and adds another node to form a
> Publisher-Subscriber node and then adds more nodes (or drops nodes)
> to have multiple Subscribers. it is a master-slave replication

Everything (Slony, Bucardo, Londiste) should be able to handle this.

> - promote a subscriber node to become the Publisher (Slony-I does
> it slow here as it has to figure out which one in the mess has the
> latest copy and then finish some replication)

Londiste is pretty much the same as Slony as far as most of these questions. All can do this as well, although the Bucardo way is quite different.

> - if a node is not replicated for sometime (6 hours for Slony-I)
> it is dropped from the cluster

You mean if it is not reachable at all? That will not work well with Bucardo. By "cluster" do you mean the group of slaves?
> We have scripts for Add, Drop, Promote and Reset (a single
> node when it fails/we want to join it back)
>
> So my scripts have to be modified for the new replication model,
> but will I be able to achieve all the above with Londiste, or
> Bucardo. Or else, is there any better thing which somebody is
> already using with a model like this?

It's still not entirely clear what your model is. If you have a database that needs to be replicated, why not put it in its own cluster and use PG9? Is all of this only for read-only load balancing? Under what conditions would a slave become a master?

If you don't get much response on this list (which is not quite the right one for this question), try pgsql-general@postgresql.org

--
Greg Sabino Mullane greg@endpoint.com
End Point Corporation
PGP Key: 0x14964AC8
Hi,

Thanks for the reply; please see my responses inline.

>> We have scripts for Add, Drop, Promote and Reset (a single
>> node when it fails/we want to join it back)
>>
>> So my scripts have to be modified for the new replication model,
>> but will I be able to achieve all the above with Londiste, or
>> Bucardo. Or else, is there any better thing which somebody is
>> already using with a model like this?

> It's still not entirely clear what your model is. If you have a
> database that needs to be replicated, why not put it in its own
> cluster and use PG9? Is all of this only for read-only load
> balancing? Under what conditions would a slave become a master?

Our model is like this: my server application ships as a network appliance, where I use Postgres for the DB, and multiple such boxes can be joined to form a cluster. In each node there are two databases - a config db (which should get replicated in a cluster setup) and a sessions db (which is local and not replicated). When we set up a cluster, the Publisher has read+write on the config db, and the Subscriber nodes are read-only slaves for the config db. A Subscriber node can be promoted to a Publisher (say, when the original Publisher goes down). This does not need to happen by itself (it is not failover); it is a separate cluster operation that sys admins have to run manually - and it does not matter whether the Publisher is down or not.

>> - if a node is not replicated for sometime (6 hours for Slony-I)
>> it is dropped from the cluster

> You mean if it is not reachable at all? That will not work well
> with Bucardo. By "cluster" do you mean the group of slaves?

Yeah, I did mean if the node is not reachable at all and it has not replicated in the last N hours.
There is a cluster_servers table with a last_replication column updated by a Slony-I hook, and a cron job (on the Publisher) checks this column; if the replication delay is more than 6 hours, it drops the node from the cluster (by "cluster" I mean the master + slave nodes).

> Londiste is pretty much the same as Slony as far as most of these
> questions. All can do this as well, although the Bucardo way is
> quite different

I am more familiar with Python, so I am a bit inclined towards taking a look at the Londiste approach. The add/drop operations are fine, but my main concern is the promote case - I expect there can be some data loss, but I wanted to know whether Londiste/Bucardo will make sure it is minimal. In Londiste, I see they use a ticker on the Provider; does that mean that after every tick the data should have been pushed to all the slave nodes?

Thanks,
Lalit
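For what it's worth, the stale-node check described above could be sketched roughly like this in Python. The cluster_servers table, the last_replication column, and the 6-hour timeout are from the setup described; everything else (the node_id column, the function name, the drop step) is hypothetical, and a real cron job would of course fetch the rows from the database rather than take them as a parameter:

```python
from datetime import datetime, timedelta

# Timeout from the setup described above: a node that has not
# replicated for 6 hours is dropped from the cluster.
REPLICATION_TIMEOUT = timedelta(hours=6)

def stale_nodes(rows, now=None):
    """Given (node_id, last_replication) pairs - in the real cron
    job these would come from something like
        SELECT node_id, last_replication FROM cluster_servers;
    on the Publisher - return the ids of nodes whose replication
    delay exceeds the timeout."""
    now = now or datetime.utcnow()
    return [node_id for node_id, last_rep in rows
            if now - last_rep > REPLICATION_TIMEOUT]

# Each returned node would then be handed to the existing Drop
# script (e.g. a slonik DROP NODE for Slony-I, or whatever the
# equivalent Londiste/Bucardo operation turns out to be).
```

The replication-system-specific part stays confined to the hook that updates last_replication and to the Drop script, so the cron check itself should survive a switch from Slony-I to Londiste or Bucardo mostly unchanged.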