Discussion: Database synchronisation over the internet...
Hi All,

Availability is going to play an important role in some PostgreSQL applications we've built (mission critical, for those who want to know). Up until now everything has been straightforward SQL, and for backups we rely on dumps and tapes. In the near future we would like to have a system that enables synchronisation of data over the internet.

The requirements are:
  security of data in transit... (SSL? we're in Europe)
  a master/slave concept where the data on the slave is changed only by the
  master and is allowed to be up to 10 minutes behind.

What might be a good strategy for implementing such a system?

--
Wim Aarts
Hi Wim,

I am also working on the self-same thing. I am using ssh to authenticate and encrypt the connection. Since the data is not too large, the master database uses pg_dump to create a file, which is securely copied via scp to the slave host. An ssh connection then logs into the slave, drops the table, and loads the data from the dump file that was copied over.

It seems to work so far, but there must be more refinements I can add.

Paul

On Thu, 13 Jan 2000, Wim Aarts wrote:
> What might be a good strategy for implementing such a system?
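The dump/scp/restore cycle Paul describes could be sketched roughly as below. This is only an illustration of the idea, not his actual script; the host name, database, table, and file path are all made-up placeholders.

```shell
#!/bin/sh
# Hypothetical sketch of a master-to-slave table sync via pg_dump + scp + ssh.
# All names below are placeholders, not taken from the thread.
DB="appdb"                       # database on both master and slave
SLAVE_HOST="slave.example.com"   # slave reachable over ssh
TABLE="orders"                   # table to mirror
DUMP="/tmp/${TABLE}.sql"         # dump file shipped to the slave

sync_table() {
    # 1. On the master, dump just the one table to a file.
    pg_dump -t "$TABLE" "$DB" > "$DUMP"

    # 2. Copy the dump to the slave; scp gives authentication + encryption.
    scp "$DUMP" "$SLAVE_HOST:$DUMP"

    # 3. On the slave, drop the stale table and reload it from the dump.
    ssh "$SLAVE_HOST" "psql -c 'DROP TABLE $TABLE;' $DB && psql $DB < $DUMP"
}

# Run only when explicitly requested, e.g. from cron every 10 minutes,
# which matches the "allowed to be 10 minutes behind" requirement.
if [ "${1:-}" = "--run" ]; then
    sync_table
fi
```

A cron entry such as `*/10 * * * * /usr/local/bin/sync_table.sh --run` would keep the slave within the 10-minute window. Note the slave is briefly without the table between the DROP and the reload; wrapping the reload in a transaction would tighten that gap.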
On Thu, 13 Jan 2000, Paul Branston wrote:
> I am also working on the self-same thing. I am using ssh to authenticate
> and encrypt the connection. [...]
> Seems to work so far, but there must be more refinements I can add.

Use ssh to create a tunnel between server A and server B, then, from server A, issue something like:

  psql -h ServerB -p 5432 -c "DROP TABLE <tablename>;"
  pg_dump <tablename> | psql -h ServerB -p 5432 ...

Sorry, I haven't done this myself, so my syntax is purely a matter of giving an idea; that is why the ... at the end :)

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org
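Marc's tunnel idea avoids the intermediate dump file: the dump is piped straight into psql on the slave through an encrypted forwarded port. A minimal sketch, again with placeholder host, port, database, and table names (and with the pg_dump invocation written out in full, since `pg_dump`'s positional argument is the database, with the table given via `-t`):

```shell
#!/bin/sh
# Hypothetical sketch of syncing a table through an ssh port forward.
# Host, port, database, and table names are placeholders.
SLAVE_HOST="serverB.example.com"
LOCAL_PORT=15432                 # local end of the tunnel
DB="appdb"
TABLE="orders"

sync_over_tunnel() {
    # Forward local port 15432 to PostgreSQL (5432) on the slave.
    # -f: go to background after authentication; -N: run no remote command.
    ssh -f -N -L "$LOCAL_PORT:localhost:5432" "$SLAVE_HOST"

    # Drop the stale copy on the slave, then stream the dump straight
    # into it through the encrypted tunnel -- no temp file needed.
    psql -h localhost -p "$LOCAL_PORT" -c "DROP TABLE $TABLE;" "$DB"
    pg_dump -t "$TABLE" "$DB" | psql -h localhost -p "$LOCAL_PORT" "$DB"
}

# Run only when explicitly requested, so this file can be sourced safely.
if [ "${1:-}" = "--run" ]; then
    sync_over_tunnel
fi
```

One design note: the backgrounded `ssh -f -N` process stays alive after the sync, so a production script would want to record its PID and kill it when done, or keep one long-lived tunnel open instead of starting a new one per run.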