Thread: PostgreSQL in Shared Disk Failover mode on FreeBSD+CARP+RAIDZ


PostgreSQL in Shared Disk Failover mode on FreeBSD+CARP+RAIDZ

From: snoop@email.it
Date:
Hi everybody,
I'm trying to figure out a way to set up a PostgreSQL HA cluster solution.

For the record:
- I've already tried pgpool-II 2.x in Synchronous Multi-Master Replication
mode and I was satisfied with its functionality, but concerned about having
the same data growing on several nodes
- I've upgraded to pgpool-II 3.0.x on FreeBSD (from ports) but it's very
buggy at the moment
- I don't like the idea of fixed-size WAL segments (16 MB regardless of the
number of committed transactions!) being frequently "shipped" from one node
to another, hurting my network performance (asynchronous replication)
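For reference, the log shipping being criticized here is driven by archive_command, which copies each completed 16 MB segment wholesale to the standby; a minimal sketch of that pre-9.0 style setup, with hostname and paths purely as placeholders:

```
# postgresql.conf on the primary -- classic warm-standby log shipping.
# Each completed 16 MB WAL segment is pushed to the standby as a whole file.
archive_mode = on
archive_command = 'rsync -a %p standby:/usr/local/pgsql/walarchive/%f'

# archive_timeout forces a segment switch (still a full 16 MB file) even on
# a quiet server, which is exactly the overhead being complained about:
archive_timeout = 60
```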

I've done some research and I have an idea of several possible solutions,
but I'd honestly like to implement it using CARP in a "Shared Disk Failover"
fashion.
Unfortunately this doesn't seem to be a common approach, judging by the very
limited information available on the net, and that's why I'm asking here.

My idea: two nodes (i386) with FreeBSD 8.1 and PostgreSQL 9.0.2, CARP
providing network failover and a shared data dir on a RAIDZ volume. I'm
pretty sure CARP would do the job properly, indirectly preventing even the
dangerous case of both nodes writing to the data dir at the same time (which
would apparently badly screw up the DB) by redirecting every network
connection to the active DB node, and to it only.
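For context, a two-node CARP setup on FreeBSD of that era looks roughly like the following rc.conf fragment; the VHID, password and addresses are of course placeholders, and this is a sketch from memory rather than a tested configuration:

```
# /etc/rc.conf on node A (preferred master: lowest advskew)
cloned_interfaces="carp0"
ifconfig_carp0="vhid 1 pass sharedsecret advskew 0 192.168.0.10/24"

# /etc/rc.conf on node B (backup: higher advskew)
cloned_interfaces="carp0"
ifconfig_carp0="vhid 1 pass sharedsecret advskew 100 192.168.0.10/24"
```

Clients connect only to the shared 192.168.0.10 address; CARP decides which node answers it at any given moment.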

BUT ... I'm seriously concerned about the client <-> server connections that
are already active during the failover.
Example:

client A connects to server A
server A fails, and so does client A's connection
CARP now redirects any new connection to the DB to server B
client A reconnects and is now operating on server B
THEN
server A comes back up
CARP now obviously redirects any new connection to the DB to server A again
client B connects to server A
what about client A's existing connection to server B? there is still an
established connection between client A and server B
so there's a chance that a transaction gets committed on server B while
someone else is operating on server A too!

I understand that on a server that doesn't commit many transactions but
mainly answers queries this may be a remote scenario, but it can probably
happen.
Please correct me if I'm wrong (as I really hope to be).

Did anyone here try such a configuration by any chance?

Many thanks in advance for your time.



Re: PostgreSQL in Shared Disk Failover mode on FreeBSD+CARP+RAIDZ

From: Scott Marlowe
Date:
On Mon, Dec 20, 2010 at 6:23 PM,  <snoop@email.it> wrote:
> Hi everybody,
> I'm trying to figure out a way to setup a PostgreSQL HA cluster solution.
>
> For the record:
> - I've already tried pgpool-II 2.x in Synchronous Multi Master Replication
> mode and I was satisfied by it's functionality but concerned about having
> the same data growing on several nodes

What actual concerns did you have?  Just a question of too many
spinning disks in your machines or something?  There are some issues
with race conditions and such with statement level replication to be
aware of.

> - I've upgraded to pgpool-II 3.0.x on FreeBSD (from ports) but it's very
> buggy at the moment
> - I don't like the idea of having fixed size (16 megs regardless of the
> committed transaction number!) WAL logs often "shipped" from one node to
> another endangering my network performance (asynchronous replication)

Streaming replication in 9.0 doesn't really work that way, so you
could use that now with a hot standby ready to be failed over to as
needed.
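For the archives, a minimal 9.0 streaming-replication setup along the lines Scott suggests looks roughly like this; the addresses and the replication role name are placeholders:

```
# Primary, postgresql.conf (9.0):
wal_level = hot_standby
max_wal_senders = 3

# Primary, pg_hba.conf -- allow the standby to connect for replication:
host  replication  replicator  192.168.0.11/32  md5

# Standby, recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=192.168.0.10 port=5432 user=replicator'
```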

> I've done some research and I've an idea of different possible solutions,
> but I'd honestly like to implement it using CARP in a "Shared Disk Failover"
> fashion.
> Unfortunately this doesn't really seem to be a common way according to the
> very limited information available on the net and that's why I'm going to
> ask here.

Yeah, there's a lot of things you have to be careful of to prevent
data corruption.  If two postgresql instances mount and start on the
same data share, you will immediately have a corrupted data store and
have to restore from backup.

> My idea: two nodes (i386) with FreeBSD 8.1 and PostgreSQL 9.0.2, CARP
> providing network failover and a shared data dir on a RAIDZ solution.

Only one at a time.  Ever.  So you'll need fencing software / hardware
for your shared data drives.

> I'm
> pretty sure that CARP would do the job properly indirectly avoiding even the
> dangerous writing on the data dir from both nodes at the same time (that
> would apparently badly screw up the DB) by redirecting any network
> connection to the active DB and to him only.

You'll need more than CARP.

> BUT ... I'm seriously concerned about the already active connections client
> <-> server during the failover.
> Example:
>
> client A connects to server A
> server A fails so does the client A connection
> CARP redirects any upcoming connection to the DB to server B now
> client A reconnects and is now operating on server B
> THEN
> server A comes back up

Stop.  FULL STOP.  If A goes down, you need to STONITH (Shoot The Other
Node In The Head) it so it cannot under any circumstances come back up by
accident.  It's a good idea to have fencing for your network switch, a power
switch you can signal to power down server A, or both.
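To make the point concrete, here is a deliberately naive startup guard in shell. A ping check is NOT real fencing (it cannot tell a dead peer from a network partition, which is exactly why dedicated STONITH hardware is recommended); the peer address and rc.d path are placeholders:

```shell
#!/bin/sh
# Naive illustration only: refuse to start PostgreSQL while the peer node
# still answers.  Real fencing must cut the peer's power or switch port,
# because a silent peer may merely be partitioned, not dead.
PEER=192.168.0.11   # hypothetical address of the other node

peer_alive() {
    ping -c 1 "$PEER" > /dev/null 2>&1
}

start_postgres_if_safe() {
    if peer_alive; then
        echo "refusing to start: peer still answers"
        return 1
    fi
    echo "peer unreachable: would start PostgreSQL here"
    # /usr/local/etc/rc.d/postgresql start
}
```

The inverse check (peer unreachable, therefore safe) is precisely the false assumption STONITH exists to eliminate.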

> Did anyone here try such a configuration by any chance?

There are people on the list using multiple machines pointing at
single storage arrays with fencing and STONITH technology.  I'm not;
we just use Slony and manual failover for our stuff.

Re: PostgreSQL in Shared Disk Failover mode on FreeBSD+CARP+RAIDZ

From: snoop@email.it
Date:
> On Mon, Dec 20, 2010 at 6:23 PM,  <snoop@email.it> wrote:
> > Hi everybody,
> > I'm trying to figure out a way to setup a PostgreSQL HA cluster solution.
> >
> > For the record:
> > - I've already tried pgpool-II 2.x in Synchronous Multi Master Replication
> > mode and I was satisfied by it's functionality but concerned about having
> > the same data growing on several nodes
>
> What actual concerns did you have?  Just a question of too many
> spinning disks in your machines or something?  There are some issues
> with race conditions and such with statement level replication to be
> aware of.

Well, I'd prefer one big single storage unit instead of "too many spinning
disks" around, mainly for maintenance reasons, and to avoid replication and
too much network traffic.
Thanks for the "race conditions" tip.

>
> > - I've upgraded to pgpool-II 3.0.x on FreeBSD (from ports) but it's very
> > buggy at the moment
> > - I don't like the idea of having fixed size (16 megs regardless of the
> > committed transaction number!) WAL logs often "shipped" from one node to
> > another endangering my network performance (asynchronous replication)
>
> Streaming replication in 9.0 doesn't really work that way, so you
> could use that now with a hot standby ready to be failed over to as
> needed.

Mmm, so I can use a hot standby setup without any need for replication
(same data dir) and no need for STONITH?
Sorry if my questions sound trivial, but my experience with PostgreSQL is
quite limited; this would be my first "more complex" configuration and I'm
trying to figure out the best way to go. Unfortunately it's not that easy to
work out from the documentation alone.




Re: PostgreSQL in Shared Disk Failover mode on FreeBSD+CARP+RAIDZ

From: Scott Marlowe
Date:
On Mon, Dec 20, 2010 at 8:03 PM,  <snoop@email.it> wrote:
>> On Mon, Dec 20, 2010 at 6:23 PM,  <snoop@email.it> wrote:
> Well, I'd prefer a big single storage instead of "too many spinning disks"
> around mainly for maintenance reasons, to avoid replication and too much
> network traffic.

Sure, but that then makes your storage system your single point of
failure.  With no replication, the best you can hope for if the storage
array fails completely is to revert to a recent backup, since without
any streaming replication you'll be missing tons of data if you've got
many updates.  If your database is mostly static, or the changes can be
recreated from other sources, that's not so bad.  However, if you have
one storage array and it has a catastrophic hardware failure and goes
down and stays down, then you'll need something else to hold the db
while you wait for parts, etc.
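The "recent backup" fallback Scott mentions is typically just a scheduled logical dump; a sketch of such a crontab entry, where the database name, user and path are all placeholders:

```
# /etc/crontab -- nightly logical backup, the last-resort fallback if the
# single storage array dies (pg_dump -Fc writes PostgreSQL's compressed
# custom format, restorable selectively with pg_restore):
30 2 * * *  pgsql  pg_dump -Fc mydb > /backups/mydb-$(date +\%Y\%m\%d).dump
```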

>> > - I don't like the idea of having fixed size (16 megs regardless of the
>> > committed transaction number!) WAL logs often "shipped" from one node to
>> > another endangering my network performance (asynchronous replication)

(P.S.: that was pre-9.0 PITR...)

>> Streaming replication in 9.0 doesn't really work that way, so you
>> could use that now with a hot standby ready to be failed over to as
>> needed.
>
> Mmm, so I can use an hot standby setup without any need for replication
> (same data dir) and no need for STONITH?
> Sorry if my questions sound trivial to you but my experience with PostgreSQL
> is quite limited and this would be my first "more complex" configuration and
> I'm trying to figure out the best way to go. Unfortunately it's not that
> easy to figure it out going through the documentation only.

No no.  With streaming replication the master streams changes to the
slave(s) in real time, not by copying whole WAL files as in the previous
PITR-style replication.  There's no need for STONITH and/or fencing, since
the master database writes to the slave database.  Failover would be
provided by whatever monitoring script you want to throw at the system,
maybe with pgpool, pgbouncer, or even CARP if you wanted (I'm no fan of
CARP; I had a lot of problems with it and Cisco switches a while back).
Cut off the main server, take the slave out of recovery, and when it's up
and running change the IP in the app config, bounce the app, and keep
going.  With Slony you'd do something similar, but use slonik commands to
promote the first slave to master where, in the streaming-replication
method, you'd bring the standby out of recovery.
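In 9.0 terms, the promotion step Scott describes is driven by a trigger file named in the standby's recovery.conf; a sketch, with paths and the connection string purely illustrative:

```
# recovery.conf on the standby (9.0):
standby_mode = 'on'
primary_conninfo = 'host=192.168.0.10 port=5432 user=replicator'
trigger_file = '/usr/local/pgsql/failover.trigger'

# To fail over, once the old master is confirmed down and fenced off:
#   touch /usr/local/pgsql/failover.trigger
# The standby exits recovery and starts accepting writes; then repoint
# the application at its address.
```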

All of that assumes two machines with their own storage. (technically
you could put it all on the same big array in different directories).

If you want to share the same data dir then you HAVE to make sure that
only one machine at a time can ever open that directory and start
postgresql there.  Two postmasters on the same data dir are instant
death.

Re: PostgreSQL in Shared Disk Failover mode on FreeBSD+CARP+RAIDZ

From: snoop@email.it
Date:
> On Mon, Dec 20, 2010 at 8:03 PM,  <snoop@email.it> wrote:
> >> On Mon, Dec 20, 2010 at 6:23 PM,  <snoop@email.it> wrote:
> > Well, I'd prefer a big single storage instead of "too many spinning disks"
> > around mainly for maintenance reasons, to avoid replication and too much
> > network traffic.
>
> Sure, but that then makes your single point of failure your storage
> system.  With no replication the best you can hope for if the storage
> array fails completely is to revert to a recent backup, since without
> any streaming replication you'll be missing tons of data if you've got
> many updates.  If your database is mostly static or the changes can be
> recreated from other sources that's not so bad.  However, if you have
> one storage array and it has a carastrophic hardware failure and goes
> down and stay down, then you'll need something else to hold the db
> while you wait for parts etc.

I'd use a block-device replication solution like DRBD to avoid one disk
array being a single point of failure.


OK, I still have to study this technology a lot before going on, but now I
know where to look and how.
Thank you very much for your time! I really appreciate it.



Re: PostgreSQL in Shared Disk Failover mode on FreeBSD+CARP+RAIDZ

From: Achilleas Mantzios
Date:
On Tuesday 21 December 2010 03:23:25, snoop@email.it wrote:
> Hi everybody,
> I'm trying to figure out a way to setup a PostgreSQL HA cluster solution.
>
> I've done some research and I've an idea of different possible solutions,
> but I'd honestly like to implement it using CARP in a "Shared Disk Failover"
> fashion.

This reminds me of the serialization stack on shared disks (DASD) in IBM's MVS operating system.
It takes a lot of work to do that at the OS level; it's something beyond multiprocessing and high availability.
Think about it: in FreeBSD (or any off-the-shelf Unix) there is no inherent way to implicitly lock files.
Even on the same machine, if two users/processes modify the same file, the one who saves (closes)
the file last generally wins. To prevent that, you have to use explicit locking done by the application;
the OS by itself does not do it by default.
Things get tougher in a networked environment, since file serialization would have to be applied at the network level.
This combination of data sharing (the idea of decoupling the concept of redundant hardware
from the concept of redundant disks) with the characteristics of parallel computing was realized in
IBM's Parallel Sysplex technology: http://en.wikipedia.org/wiki/IBM_Parallel_Sysplex
On top of it one could run CICS, DB2, etc., almost any available application.
It would be interesting to know if there is some concept close to it in the open-source Unix world.
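Achilleas' point that Unix file locking is advisory and opt-in is easy to demonstrate with the stock lock utility (flock(1) from util-linux is shown here; FreeBSD ships lockf(1) with similar semantics). A sketch:

```shell
#!/bin/sh
# Advisory locks only exclude processes that also ask for the lock;
# the OS never blocks a plain write to the same file.
f=$(mktemp)

# Hold the lock in the background for about a second.
flock "$f" sleep 1 &
sleep 0.2

# A non-blocking attempt fails while the lock is held...
r1=$(flock -n "$f" true && echo got || echo busy)

# ...yet a plain, non-cooperating write still succeeds immediately.
echo "scribble" >> "$f"
r2=$(cat "$f")

wait
# ...and the lock is free again once the holder exits.
r3=$(flock -n "$f" true && echo got || echo busy)
echo "$r1 $r2 $r3"
```

This is exactly why two postmasters on one shared data dir are unsafe: nothing at the filesystem level stops the second one from writing.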

--
Achilleas Mantzios

Re: PostgreSQL in Shared Disk Failover mode on FreeBSD+CARP+RAIDZ

From: Snoop
Date:
After some enlightening considerations from Scott Marlowe I'm now
considering streaming replication over a dedicated network connection,
with two different data dirs on the same storage. This would bypass the
locking issue, make replication very fast, and make my life "easier" with
one reliable (replicated) data storage.
I can't say whether this is the best solution, but I'm studying ... trying
to prove myself wrong. :)

Thanks for your reply Achilleas.






Re: PostgreSQL in Shared Disk Failover mode on FreeBSD+CARP+RAIDZ

From: Snoop
Date:
It's hard to say, Robin; I'm still testing.
At the beginning I'd say very low ... probably somewhere between 2000 and
5000 transactions a day (and it's even harder to guess how that load will
be distributed over 24 hours)?!

Thanks for your reply.

On Tue, 2010-12-21 at 08:02 +0000, robin wrote:
> How much data and what sort of write rates do you actually expect?
>
> Cheers,
> Robin
>
> On Tue, 21 Dec 2010 02:23:25 +0100, snoop@email.it wrote:
> > Hi everybody,
> > I'm trying to figure out a way to setup a PostgreSQL HA cluster
> solution.
> >
> > For the record:
> > - I've already tried pgpool-II 2.x in Synchronous Multi Master
> Replication
> > mode and I was satisfied by it's functionality but concerned about
> having
> > the same data growing on several nodes
> > - I've upgraded to pgpool-II 3.0.x on FreeBSD (from ports) but it's very
> > buggy at the moment
> > - I don't like the idea of having fixed size (16 megs regardless of the
> > committed transaction number!) WAL logs often "shipped" from one node to
> > another endangering my network performance (asynchronous replication)
> >
> > I've done some research and I've an idea of different possible
> solutions,
> > but I'd honestly like to implement it using CARP in a "Shared Disk
> > Failover"
> > fashion.
> > Unfortunately this doesn't really seem to be a common way according to
> the
> > very limited information available on the net and that's why I'm going
> to
> > ask here.
> >
> > My idea: two nodes (i386) with FreeBSD 8.1 and PostgreSQL 9.0.2, CARP
> > providing network failover and a shared data dir on a RAIDZ solution.
> I'm
> > pretty sure that CARP would do the job properly indirectly avoiding even
> > the
> > dangerous writing on the data dir from both nodes at the same time (that
> > would apparently badly screw up the DB) by redirecting any network
> > connection to the active DB and to him only.
> >
> > BUT ... I'm seriously concerned about the already active connections
> client
> > <-> server during the failover.
> > Example:
> >
> > client A connects to server A
> > server A fails so does the client A connection
> > CARP redirects any upcoming connection to the DB to server B now
> > client A reconnects and is now operating on server B
> > THEN
> > server A comes back up
> > CARP now obviously redirects any new connection to the DB to server A
> again
> > client B connects to server A
> > what about the existing connection of the client A to the server B?
> there's
> > an existing connection state between client A and server B
> > now there's the chance that a transaction can be committed on the server
> B
> > while there's someone else operating on server A too!
> >
> > I understand that in a server that doesn't commit many transaction but
> is
> > mainly answering queries this could be a remote situation but it can
> > probably happen.
> > Please correct me if I'm wrong (as I really hope to be).
> >
> > Did anyone here try such a configuration by any chance?
> >
> > Many thanks in advance for your time.
> >  --
> >  Caselle da 1GB, trasmetti allegati fino a 3GB e in piu' IMAP, POP3 e
> SMTP
> > autenticato? GRATIS solo con Email.it: http://www.email.it/f
> >
> >  Sponsor:
> >  Paghe e stipendi, consulenza e collocamento, tutto con Emailpaghe!
> Provalo
> > gratuitamente fino al 31/12/2010
> >  Clicca qui:
> http://adv.email.it/cgi-bin/foclick.cgi?mid=10682&d=20101221





Re: PostgreSQL in Shared Disk Failover mode on FreeBSD+CARP+RAIDZ

From: Snoop
Date:
Well, from this point of view I feel a bit lucky now. Having built-in
streaming replication is a big advantage and gives me the chance to
avoid third-party applications. Although there are good solutions out
there, I believe that less complexity is always better.
Plus, I would really like to avoid a STONITH device ... it sounds like an
expensive and pretty critical component.
Of course everything has its advantages/disadvantages, but I'd like to
build something reliable, not too expensive and, most importantly,
scalable. In the future you never know; I wouldn't like to regret being
successful because of a very bad initial design. :)

Cheers.

On Wed, 2010-12-22 at 21:58 +0000, robin wrote:
> With such low volumes you can probably take your pick of technologies
> based on your other requirements/interests/desires.
>
> That said, I too would start with the built in streaming replication - we
> would have done that with our project, but it wasn't available when we
> started (a looooong time ago ;-)).
>
> Instead we used DRBD and Heartbeat, plus a STONITH device, to provide
> resilience against disk or other hardware failure.
>
> When the current hardware gets replaced, we'll almost certainly migrate to
> streaming replication as part of the migration to new hardware.
>
> Cheers,
> Robin
>