Discussion: Who is Slony Master/Slave + general questions.
Hello,

I'm starting to use Slony as a redundancy solution for the project I'm currently working on. Both machines run SuSE Linux 9; one holds the prime database and the second holds the backup database. The Slony version I'm using is 1.1.2 — if some of these issues have been addressed in a newer version of Slony, please let me know.

I have looked at the Nagios scripts and others and am still left with questions about how to dynamically determine who is slave and who is master during normal and failover operations. Take a scenario where you want to check the state of the system without prior knowledge of the node setup: how would you determine which machine is the prime and which one is the slave?

I'm also having issues with the slonik script (below) that is supposed to handle the failover to the slave in case of master failure. For some reason it hangs, and I was wondering if there are known issues with it. The test condition I'm working with is: reboot the master, and the slave is supposed to take over.

slonik <<_EOF_
# ----
# This defines which namespace the replication system uses
# ----
cluster name = $CLUSTER;

# ----
# Admin conninfo's are used by the slonik program to connect
# to the node databases. So these are the PQconnectdb arguments
# that connect from the administrator's workstation (where
# slonik is executed).
# ----
node 1 admin conninfo = 'dbname=$DBNAME1 host=$HOST1 port=5432 user=$SLONY_USER1';
node 2 admin conninfo = 'dbname=$DBNAME2 host=$HOST2 user=$SLONY_USER2';

# ----
# Fail set 1 over to backup node 2
# ----
failover ( id = 1, backup node = 2 );
_EOF_

Thanks a lot for your help,
Slawek
Hello,

You should ask directly on the slony1 mailing list.

sjarosz@gmail.com wrote:
> (...) The Slony version I'm using is 1.1.2.

The current version of Slony1 is slony1-1.2.6.

> Take a scenario that you want to check the state of the system without
> prior knowledge of the node setup, how would you determine which
> machine is the prime and which one is the slave?

Without any knowledge of replication? That will be difficult. You should connect to one of the databases and have a look at the Slony schema tables (sl_status and sl_listen, for instance).

You may also have a look at this page:
http://linuxfinances.info/info/monitoring.html

> Also I'm having issues with the slonik script (below) that is supposed
> to handle the failover to the slave in case of master failure. For
> some reason it hangs and I was wondering if there are known issues
> with it.

As the documentation says, "Slony-I does not provide any automatic detection for failed systems."

First of all, you may want to upgrade to the latest stable slony1 version.

Cheers,
SAS
>>As written in documentation "Slony-I does not provide any automatic
>>detection for failed systems. "
>>First of all, you may want to upgrade to the latest stable slony1 version.
But if you combine Slony with Linux-HA, you can use Slony's failover to perform an automatic failover when the master node goes down.
--------------
Shoaib Mir
EnterpriseDB ( www.enterprisedb.com)
On 1/19/07, Stéphane Schildknecht <stephane.schildknecht@postgresqlfr.org > wrote:
> Hello,
>
> You should ask directly on the slony1 mailing list.
>
> sjarosz@gmail.com wrote:
> > (...) The Slony version I'm using is 1.1.2.
>
> The current version of Slony1 is slony1-1.2.6.
>
> > Take a scenario that
> > you want to check the state of the system without prior knowledge of
> > the node setup, how would you determine which machine is the prime and
> > which one is the slave?
>
> Without any knowledge of replication? That will be difficult. You
> should connect to one of the databases and have a look at the Slony
> schema tables (sl_status and sl_listen, for instance).
>
> You may also have a look at this page:
> http://linuxfinances.info/info/monitoring.html
>
> > Also I'm having issues with the slonik script (below) that is supposed
> > to handle the failover to the slave in case of master failure. For
> > some reason it hangs and I was wondering if there are known issues with
> > it.
>
> As written in the documentation, "Slony-I does not provide any automatic
> detection for failed systems."
>
> First of all, you may want to upgrade to the latest stable slony1 version.
>
> Cheers,
> SAS
---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend
I am using Linux-HA to manage the failover, and Slony, as part of that failover, to move to the healthy node. But my question was more along these lines: if a user has access to both databases (master and slave) but does not know which one is which, how can you tell?

Take a scenario: you configure two servers as master and slave. You walk away for a period of time, during which a number of failovers occur. You come back. Can I query an sl_???? table to determine which server is the current master and which one is the current slave?

Thank you,
Slawek
sjarosz@gmail.com writes:
> Take a scenario: you configure 2 servers as master and slave. You walk
> for a period of time during which a number failovers occur. You come
> back. Can I query a sl_???? table to determine which server is the
> current master and which one is the current slave?

In a sense, the question is a bad one. There is nothing about a server which inherently gives it a role as either master or slave as far as Slony-I is concerned. Nodes are just nodes.

You may determine that a particular node is the origin of some particular replication set; that would indicate that, with respect to that set of tables, that particular node is "master."

Look at sl_set; it contains a list of sets, and indicates, for each, which node is the origin.

--
"cbbrowne","@","acm.org" http://cbbrowne.com/info/languages.html
Rules of the Evil Overlord #133. "If I find my beautiful consort with
access to my fortress has been associating with the hero, I'll have her
executed. It's regrettable, but new consorts are easier to get than new
fortresses and maybe the next one will pay attention at the orientation
meeting." <http://www.eviloverlord.com/>
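[The sl_set check described above can be sketched as a toy simulation: the snippet below runs the origin test against an in-memory SQLite table standing in for Slony's sl_set catalog. The table layout follows the column names mentioned in the thread (set_id, set_origin, set_comment), but the sample rows and the hard-coded local node id are invented for illustration; on a real node you would run the equivalent SELECT against the cluster's _clustername schema in PostgreSQL.]

```python
import sqlite3

# Toy stand-in for Slony-I's sl_set catalog. Column names follow the
# thread (set_id, set_origin, set_comment); the rows are invented
# sample data, not real Slony output.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sl_set (set_id INTEGER, set_origin INTEGER, set_comment TEXT)"
)
conn.execute("INSERT INTO sl_set VALUES (1, 2, 'main tables')")

# On a real node this value would come from the Slony schema
# (SELECT last_value FROM sl_local_node_id); 2 is an assumed example.
local_node_id = 2

# A node is "master" for a set exactly when it is that set's origin.
rows = conn.execute(
    "SELECT set_id, set_origin = ? AS is_origin FROM sl_set",
    (local_node_id,),
).fetchall()
for set_id, is_origin in rows:
    print(f"set {set_id}: this node is origin = {bool(is_origin)}")
```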
I don't have the replication setup on my machine right now, but as far as I remember you can certainly check for the master and slave nodes from a Slony schema table.
------------
Shoaib Mir
EnterpriseDB ( www.enterprisedb.com)
On 19 Jan 2007 08:25:23 -0800, sjarosz@gmail.com < sjarosz@gmail.com> wrote:
> I am using LinuxHA to manage the failover and Slony as part of to
> failover to move to the healthy node. But my question was more along
> the lines, if a user has access to both databases (master and slave)
> but does not know which one is which, how can you tell?
>
> Take a scenario: you configure 2 servers as master and slave. You walk
> for a period of time during which a number failovers occur. You come
> back. Can I query a sl_???? table to determine which server is the
> current master and which one is the current slave?
>
> Thank you,
> Slawek
On Sat, 20 Jan 2007 11:07:57 +0500, "Shoaib Mir" <shoaibmir@gmail.com> wrote:
> I dont have the replication setup on my machine right now but I guess
> as far as I remember you can surely check for the master and slave
> nodes from a Slony schema table.

I think the notion of a "master and slave server" is a little misleading here: we have sets, and a node can be an origin or a subscriber of them. Thinking that way, one idea to get that information is to issue:

SELECT a.set_id,
       a.set_comment,
       (SELECT last_value FROM _replication.sl_local_node_id) AS local_id,
       CASE WHEN a.set_origin = (SELECT last_value
                                 FROM _replication.sl_local_node_id)
            THEN TRUE
            ELSE FALSE
       END AS master_node
FROM _replication.sl_set a;

This gives you a result set which holds TRUE for every set the current node is the origin node for.

> ------------
> Shoaib Mir
> EnterpriseDB (www.enterprisedb.com)
>
> On 19 Jan 2007 08:25:23 -0800, sjarosz@gmail.com <sjarosz@gmail.com> wrote:
>> I am using LinuxHA to manage the failover and Slony as part of to
>> failover to move to the healthy node. But my question was more along
>> the lines, if a user has access to both databases (master and slave)
>> but does not know which one is which, how can you tell?
>>
>> Take a scenario: you configure 2 servers as master and slave. You walk
>> for a period of time during which a number failovers occur. You come
>> back. Can I query a sl_???? table to determine which server is the
>> current master and which one is the current slave?

If you are using Linux-HA, you have a virtual IP address for your cluster which points to the current active "master" on your cluster. Connecting to the master node should always happen through this IP address, so you always "know" you are on the master when using this IP. You could then spread read operations across the IPs assigned directly to each node, "declaring" those connections read-only.

Bernd
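[The virtual-IP routing rule described above can be sketched as a tiny connection-routing helper: writes always go through the cluster's virtual IP (held by the current master under Linux-HA), while reads are spread round-robin across the nodes' fixed addresses. All IP addresses and the function name below are invented placeholders, not values from the thread.]

```python
import itertools

# Assumed example addresses: the VIP is managed by Linux-HA and always
# points at the current master; the fixed addresses belong to each node.
WRITE_HOST = "10.0.0.100"                # virtual IP (current master)
READ_HOSTS = ["10.0.0.1", "10.0.0.2"]    # fixed per-node addresses
_read_cycle = itertools.cycle(READ_HOSTS)

def host_for(query: str) -> str:
    """Route SELECTs round-robin to node addresses, everything else to the VIP."""
    if query.lstrip().lower().startswith("select"):
        return next(_read_cycle)
    return WRITE_HOST

print(host_for("SELECT 1"))            # routed to a node address
print(host_for("UPDATE t SET x = 1"))  # routed to the virtual IP
```

A real setup would also need to "declare" the per-node connections read-only on the database side, since nothing in this sketch prevents a stray write from reaching a slave.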