Discussion: streaming replication


streaming replication

From
Karuna Karpe
Date:
Hello,

         I am replicating a master server to a slave server using streaming replication. I want to know: when my master server fails and my slave server becomes the master, with some additional data added to its database, and after some time the failed master comes back up, how will I replicate those additional changes to my failed (old) master server?

Can anyone tell me how to do this?


Regards,
karuna karpe.

Re: streaming replication

From
Vinay
Date:
Dear Karuna,

Streaming replication is one-way, i.e. master to slave. Once the failed master DB is up, you need to reconfigure streaming replication. Before that, you need to take an updated dump of the slave DB and load it into the master DB.
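
For example, a minimal sketch of that dump-and-reload step, assuming the promoted slave is reachable as "newmaster" and the database is named "mydb" (both names are placeholders):

    # Run on the old master once it is back up; stop client writes on the
    # new master first so the dump stays consistent with what you reload.
    pg_dump -h newmaster -U postgres -Fc mydb > mydb.dump   # updated dump of the promoted slave
    dropdb   -U postgres mydb                               # recreate the database locally
    createdb -U postgres mydb
    pg_restore -U postgres -d mydb mydb.dump                # load the dump into the old master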

Hope this information is useful.

Vinay

On Tue, Nov 1, 2011 at 5:10 PM, Karuna Karpe <karuna.karpe@os3infotech.com> wrote:
Hello,

         I am replicating a master server to a slave server using streaming replication. I want to know: when my master server fails and my slave server becomes the master, with some additional data added to its database, and after some time the failed master comes back up, how will I replicate those additional changes to my failed (old) master server?

Can anyone tell me how to do this?


Regards,
karuna karpe.

Re: streaming replication

From
Karuna Karpe
Date:
But I have a huge amount of data in the database, so it takes a long time to take a dump from the slave DB and load it into the master DB.
For example:

        I have one master DB server with 20GB of data and two slave DB servers replicated from the master.  When my master DB server fails, one of the slave servers becomes the new master.  On the new master, 50MB of data is added to the database.  After some time my failed (old) master comes back up, and I want my old master to become the master again and the new master to become a slave (as in the previous setup).  So please let me know how to replicate only that 50MB of data into the old master database server.

Please give me a solution for that.

Regards,
Karuna karpe.




On Tue, Nov 1, 2011 at 5:14 PM, Vinay <vinay.dhoom@gmail.com> wrote:
Dear Karuna,

Streaming replication is one-way, i.e. master to slave. Once the failed master DB is up, you need to reconfigure streaming replication. Before that, you need to take an updated dump of the slave DB and load it into the master DB.

Hope this information is useful.

Vinay


On Tue, Nov 1, 2011 at 5:10 PM, Karuna Karpe <karuna.karpe@os3infotech.com> wrote:
Hello,

         I am replicating a master server to a slave server using streaming replication. I want to know: when my master server fails and my slave server becomes the master, with some additional data added to its database, and after some time the failed master comes back up, how will I replicate those additional changes to my failed (old) master server?

Can anyone tell me how to do this?


Regards,
karuna karpe.


Re: streaming replication

From
Fujii Masao
Date:
On Wed, Nov 2, 2011 at 4:48 PM, Karuna Karpe
<karuna.karpe@os3infotech.com> wrote:
> But I have a huge amount of data in the database, so it takes a long
> time to take a dump from the slave DB and load it into the master DB.
> For example:
>         I have one master DB server with 20GB of data and two slave DB
> servers replicated from the master.  When my master DB server fails,
> one of the slave servers becomes the new master.  On the new master,
> 50MB of data is added to the database.  After some time my failed (old)
> master comes back up, and I want my old master to become the master
> again and the new master to become a slave (as in the previous setup).
> So please let me know how to replicate only that 50MB of data into the
> old master database server.

What about using rsync to take a base backup from new master and load it
onto old master? rsync can reduce the backup time by sending only differences
between those two servers.
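
A rough sketch of that approach, assuming $PGDATA is /var/lib/pgsql/data on both machines and the new master is reachable as "newmaster" (both are placeholders), along the lines of the usual low-level base backup procedure:

    # on the new master: mark the start of a base backup
    psql -U postgres -c "SELECT pg_start_backup('resync_old_master');"

    # on the old master (PostgreSQL stopped): pull only the differences
    rsync -av --delete --exclude=pg_xlog newmaster:/var/lib/pgsql/data/ /var/lib/pgsql/data/

    # on the new master: finish the backup
    psql -U postgres -c "SELECT pg_stop_backup();"

    # then create a recovery.conf on the old master pointing at the new
    # master and start it as a standby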

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

Re: streaming replication

From
Alex Lai
Date:
Fujii Masao wrote:
> On Wed, Nov 2, 2011 at 4:48 PM, Karuna Karpe
> <karuna.karpe@os3infotech.com> wrote:
>
>> But I have a huge amount of data in the database, so it takes a long
>> time to take a dump from the slave DB and load it into the master DB.
>> For example:
>>         I have one master DB server with 20GB of data and two slave DB
>> servers replicated from the master.  When my master DB server fails,
>> one of the slave servers becomes the new master.  On the new master,
>> 50MB of data is added to the database.  After some time my failed (old)
>> master comes back up, and I want my old master to become the master
>> again and the new master to become a slave (as in the previous setup).
>> So please let me know how to replicate only that 50MB of data into the
>> old master database server.
>>
>
> What about using rsync to take a base backup from new master and load it
> onto old master? rsync can reduce the backup time by sending only differences
> between those two servers.
>
> Regards,
>
>
My postgres instance has two databases.  The pg_dump size is about
30GB.  Rsyncing the entire $PGDATA to an empty directory takes about
an hour.  When I rsync the $PGDATA to the existing directory, it
still takes 50 minutes.  It seems to me that rsync still spends most
of the time checking for changes even when there are very few changes.
Maybe I am missing some rsync option that could speed up the update.

--
Best regards,


Alex Lai
alai@sesda2.com


Re: streaming replication

From
Scott Ribe
Date:
On Nov 7, 2011, at 9:13 AM, Alex Lai wrote:

> Rsyncing the entire $PGDATA to an empty directory takes about an hour.  When I rsync the $PGDATA to the
> existing directory, it still takes 50 minutes.

1) How slow is your disk? (Rsync computer to computer across the network should actually be faster if there are not
many changes.)

2) Why is an hour to bring the old master up to date such a problem? Are you planning on failing over that frequently?

--
Scott Ribe
scott_ribe@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





Re: streaming replication

From
"Kevin Grittner"
Date:
Alex Lai <alai@sesda2.com> wrote:
> Fujii Masao wrote:

>> What about using rsync to take a base backup from new master and
>> load it onto old master? rsync can reduce the backup time by
>> sending only differences between those two servers.

> My postgres instance has two databases.  The pg_dump size is about
> 30GB.  Rsyncing the entire $PGDATA to an empty directory takes about
> an hour.  When I rsync the $PGDATA to the existing directory, it
> still takes 50 minutes.  It seems to me that rsync still spends most
> of the time checking for changes even when there are very few changes.
> Maybe I am missing some rsync option that could speed up the update.

If the bottleneck is the network, be sure that you are using a
daemon on the remote side; otherwise you do drag all the data over
the wire for any file which doesn't have an identical timestamp and
size.  An example of how to do that from the rsync man page:

rsync -av -e "ssh -l ssh-user" rsync-user@host::module /dest

This will try to identify matching portions of files and avoid
sending them over the wire.
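
For what it's worth, the ::module part of that command refers to a module defined in rsyncd.conf on the remote host; a minimal sketch of such a module might look like this (the module name "pgdata" and the path are placeholders):

    # /etc/rsyncd.conf on the remote side
    [pgdata]
        path = /var/lib/pgsql/data    # directory exported by this module
        uid = postgres                # read files as the postgres user
        gid = postgres
        read only = yes               # clients may pull but not push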

-Kevin

Re: streaming replication

From
Scott Ribe
Date:
On Nov 7, 2011, at 10:10 AM, Kevin Grittner wrote:

> If the bottleneck is the network, be sure that you are using a
> daemon on the remote side; otherwise you do drag all the data over
> the wire for any file which doesn't have an identical timestamp and
> size.  An example of how to do that from the rsync man page:
>
> rsync -av -e "ssh -l ssh-user" rsync-user@host::module /dest
>
> This will try to identify matching portions of files and avoid
> sending them over the wire.

??? The normal way of using it will use rolling checksums rather than sending all the data over the network:

rsync -av rsync-user@host:/source /dest

--
Scott Ribe
scott_ribe@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





Re: streaming replication

From
"Kevin Grittner"
Date:
Scott Ribe <scott_ribe@elevated-dev.com> wrote:

> ??? The normal way of using it will use rolling checksums rather
> than sending all the data over the network:
>
> rsync -av rsync-user@host:/source /dest

Perhaps this is an unexpected learning opportunity for me.  If there
is no daemon running on the other end, what creates the remote
checksums?

-Kevin

Re: streaming replication

From
Scott Ribe
Date:
On Nov 7, 2011, at 10:36 AM, Kevin Grittner wrote:

> Perhaps this is an unexpected learning opportunity for me.  If there
> is no daemon running on the other end, what creates the remote
> checksums?

rsync--it invokes rsync on the other end by default.

--
Scott Ribe
scott_ribe@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





Re: streaming replication

From
"Kevin Grittner"
Date:
Scott Ribe <scott_ribe@elevated-dev.com> wrote:
> On Nov 7, 2011, at 10:36 AM, Kevin Grittner wrote:
>
>> Perhaps this is an unexpected learning opportunity for me.  If
>> there is no daemon running on the other end, what creates the
>> remote checksums?
>
> rsync--it invokes rsync on the other end by default.

Empirically confirmed.  I don't know how I got it into my head that
one of the daemon options is needed in order to start an rsync
instance on the remote side.

Thanks for straightening me out on that,

-Kevin

Re: streaming replication

From
senthilnathan
Date:
Just check the following thread for more details:
http://postgresql.1045698.n5.nabble.com/Timeline-Conflict-td4657611.html

> We have a system (cluster) with a master replicating to 2 standby servers,
> i.e.
>
> M   |-------> S1
>
>     |-------> S2
>
> If the master fails, we create a trigger file at S1 so it takes over as
> master. Now we need to re-point the standby S2 as a slave of the new
> master (i.e. S1).
>
> While trying to start standby S2, there is a conflict in timelines, since
> on recovery it generates a new timeline.
>
> Is there any way to solve this issue?

Basically you need to take a fresh backup from the new master and restart
the standby using it. But if S1 and S2 share the archive, S1 is ahead of S2
(i.e., the replay location of S1 is greater than or equal to that of S2),
and recovery_target_timeline is set to 'latest' in S2's recovery.conf, you
can skip taking a fresh backup from the new master. In this case, you can
re-point S2 as a standby just by changing primary_conninfo in S2's
recovery.conf and restarting S2. When S2 restarts, it reads the timeline
history file which was created by S1 at failover and adjusts its timeline
ID to S1's, so a timeline conflict doesn't happen.
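
For illustration, S2's recovery.conf in that scenario might look something like this (the connection settings are placeholders; adjust host, port, and user to your environment):

    # recovery.conf on S2, re-pointed at the new master S1
    standby_mode = 'on'
    primary_conninfo = 'host=s1.example.com port=5432 user=replicator'
    recovery_target_timeline = 'latest'   # follow the timeline S1 created at failover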

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
