Discussion: Slony1 or DRBD for replication?

Slony1 or DRBD for replication?

From:
Pierre LEBRECH
Date:
Hello,

I want to replicate my PostgreSQL database at another location. The
distance between the two locations should be around 10 miles, and the
link should be a dedicated Fast Ethernet link.

What would you suggest I do: DRBD or Slony1 for PostgreSQL replication?

Thank you.

Re: Slony1 or DRBD for replication?

From:
"Shoaib Mir"
Date:
SLONY should be the choice :)


Re: Slony1 or DRBD for replication?

From:
"Joshua D. Drake"
Date:
On Fri, 2006-04-14 at 14:56 +0200, Pierre LEBRECH wrote:
> Hello,
>
> I want to replicate my PostgreSQL database at another location. The
> distance between the two locations should be around 10 miles, and the
> link should be a dedicated Fast Ethernet link.
>
> What would you suggest I do: DRBD or Slony1 for PostgreSQL replication?

It depends on your needs.

If you want to be able to use the slave PostgreSQL instance (reporting,
non-replicated namespaces, materialized views, etc.), use Slony or
Mammoth Replicator.

If you also want to replicate users/groups and GRANT/REVOKE, use
Mammoth Replicator.

If you just want a hot backup... DRBD.

Joshua D. Drake


--

            === The PostgreSQL Company: Command Prompt, Inc. ===
      Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
      Providing the most comprehensive  PostgreSQL solutions since 1997
                     http://www.commandprompt.com/





Re: Slony1 or DRBD for replication?

From:
Pierre LEBRECH
Date:
Joshua D. Drake wrote:
> On Fri, 2006-04-14 at 14:56 +0200, Pierre LEBRECH wrote:
>
>>Hello,
>>
>>I want to replicate my PostgreSQL database at another location. The
>>distance between the two locations should be around 10 miles, and the
>>link should be a dedicated Fast Ethernet link.
>>
>>What would you suggest I do: DRBD or Slony1 for PostgreSQL replication?
>
>
> It depends on your needs.
>
> If you want to be able to use the slave PostgreSQL instance (reporting,
> non-replicated namespaces, materialized views, etc.), use Slony or
> Mammoth Replicator.
>
> If you also want to replicate users/groups and GRANT/REVOKE, use
> Mammoth Replicator.
>
> If you just want a hot backup... DRBD.
>
> Joshua D. Drake
>

The second location should be used in case of emergency. So, if my first
machine/system becomes unreachable for whatever reason, I want to be
able to switch very quickly to the other machine. Of course, the goal is
to have no loss of data. That is the context.

Furthermore, I have experience with DRBD (not on databases) and I do not
know if DRBD would be the best way to solve this replication problem.

Thanks for any suggestions and explanations.

PS: my database is actually in production in a critical environment.



Re: Slony1 or DRBD for replication?

From:
Christopher Browne
Date:
In the last exciting episode, pierre.lebrech@laposte.net (Pierre LEBRECH) wrote:
> Thanks for any suggestions and explanations.

A third possibility would be PITR, new in version 8, if the point is
to have recovery from a big failure.  You'd periodically copy the whole
DB, and continually copy WAL files across the wire...
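
A rough sketch of the moving parts in 8.x (the archive path here is a
made-up example): the primary ships each completed WAL segment via
archive_command, and the standby replays them through restore_command
when you fail over:

    # postgresql.conf on the primary
    archive_command = 'cp %p /mnt/standby_archive/%f'

    # recovery.conf on the standby, created at failover time
    restore_command = 'cp /mnt/standby_archive/%f %p'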

See the PG docs; there's a whole chapter on it...
--
output = ("cbbrowne" "@" "gmail.com")
http://linuxdatabases.info/info/spreadsheets.html
"It   can be   shown   that for any  nutty  theory,  beyond-the-fringe
political view or  strange religion there  exists  a proponent  on the
Net. The proof is left as an exercise for your kill-file."
-- Bertil Jonell

Re: Slony1 or DRBD for replication?

From:
"Jim C. Nasby"
Date:
On Fri, Apr 14, 2006 at 07:42:29PM +0200, Pierre LEBRECH wrote:
> The second location should be used in case of emergency. So, if my first
> machine/system becomes unreachable for whatever reason, I want to be
> able to switch very quickly to the other machine. Of course, the goal is
> to have no loss of data. That is the context.
>
> Furthermore, I have experience with DRBD (not on databases) and I do not
> know if DRBD would be the best way to solve this replication problem.
>
> Thanks for any suggestions and explanations.
>
> PS: my database is actually in production in a critical environment.

I believe that Continuent currently has the only no-loss (ie:
synchronous) replication solution. DRBD might allow for this as well, if
it can be set up to not return from fsync until the data has been
replicated.
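
For what it's worth, that is the guarantee DRBD's "protocol C" mode is
meant to provide: a local write does not complete until the peer's disk
has acknowledged it. A minimal sketch of such a resource (host names,
devices, and addresses are placeholders):

    resource r0 {
      protocol C;   # synchronous: ack only after the remote disk has the write
      on db1 {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on db2 {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }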
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Howto: Using PITR recovery for standby replication

From:
"Benjamin Krajmalnik"
Date:

I am running PostgreSQL 8.1.3 on Windows.

The project itself is a real time data acquisition system, so it cannot be taken off-line for backups.

I have tried using pg_dump, but discovered that the backup was not a consistent backup. The application currently inserts about 1 million rows per day (this will ramp to about 5 million in full production).  All insertion of data is controlled by a master stored procedure, which inserts rows into a raw log and dynamically aggregates data into ancillary tables; these let us see statistical data for the systems being monitored without having to mine the raw data.

The original prototype of this system ran under MS SQL Server 2000, but once PostgreSQL 8.1 was released I decided to port it.  The biggest challenge I have right now is to ensure that we can recover the data after a catastrophic failure of the primary system, with the ability to load a "cold spare".

Back to the problem I faced when testing backups with pg_dump: it appears that the backup was not a consistent backup of the data.  For example, sequences used by some tables no longer held the correct values (the tables now held higher values), and this would indicate to me that the backup of an aggregate table may not match the underlying raw data which created it.

As such, my only option is to create a "hot backup" using PITR.  I would like to know if the following scenario would work:

A secondary server is loaded with the same version of PostgreSQL, with its PostgreSQL service not running.  I would issue a pg_start_backup, copy the database directory to the second box, and then issue a pg_stop_backup.  I would delete the WAL logs from the secondary box's pg_xlog, then copy the archived WALs, as well as the current WAL, to the secondary pg_xlog location.
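
In other words, something like this (just a sketch - the backup label
and target share are hypothetical, and locked files would need care on
Windows):

    REM on the primary, from a command prompt
    psql -U postgres -d events -c "SELECT pg_start_backup('standby_base');"
    xcopy "C:\Program Files\PostgreSQL\8.1\data" "\\standby\pgdata" /E /Y
    psql -U postgres -d events -c "SELECT pg_stop_backup();"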

I could then back up the snapshot from the secondary box to lesser media for archival purposes, and in the event of a problem, I would simply start the service on the secondary box.

Is this a workable solution?  Or, better yet, could the secondary be live and, after the initial backup and restore from the main box, could replication be accomplished by moving the newly archived logs to the secondary box, thereby creating a timed replication (for example, every hour we could just move the new WAL files over, since the state of the secondary database should reflect the state of the previous backup)?
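
The hourly step itself could then be as small as a scheduled copy of
newly archived segments (share and directory names are made up):

    REM scheduled task, e.g. hourly: push WAL segments archived since the last run
    xcopy "C:\pg_wal_archive\*" "\\standby\wal_archive\" /D /Y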

While I absolutely love PostgreSQL, and together with some of the add-ons (pgAdmin, pgAgent, the add-ons from EMS) there is almost nothing missing, the relative difficulty of backing up and restoring vis-a-vis the commercial solutions is frustrating.  Not that it is a PostgreSQL problem so much as a learning curve, but until I get this working satisfactorily I am a bit worried.

As always, any insight and assistance will be deeply appreciated.

Regards,

Benjamin

Re: Howto: Using PITR recovery for standby replication

From:
Tom Lane
Date:
"Benjamin Krajmalnik" <kraj@illumen.com> writes:
> I have tried using pg_dump, but discovered that the backup was not a
> consistent backup.

Really?

> Back to the problem I faced when testing backups with pg_dump, it
> appears that the backup was not a consistent backup of the data.  For
> example, sequences which are used by some tables no longer held the
> correct values (the tables now held higher values),

Sequences are non-transactional, so pg_dump might well capture a higher
value of the sequence counter than is reflected in any table row, but
there are numerous other ways by which a gap can appear in the set of
sequence values.  That's not a bug.  If you've got real discrepancies
in pg_dump's output, a lot of us would like to know about 'em.
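
(For concreteness - an easy way to watch the non-transactional behavior,
with a hypothetical table t and its sequence:

    BEGIN;
    INSERT INTO t (id) VALUES (nextval('t_id_seq'));  -- advances the sequence
    ROLLBACK;  -- the row is gone, but the sequence does not rewind

Every rolled-back insert leaves a gap like this.)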

            regards, tom lane

Re: Howto: Using PITR recovery for standby replication

From:
"Benjamin Krajmalnik"
Date:
Tom,
 
First of all, forgive me if I am totally incorrect - I may very well be :)  If so, believe me, I will be a very happy camper, since my concerns will be void.  My concern was raised when I backed up the server which was receiving production data and restored it on a development server.  The difference between the two is that the production server has a very high row-insertion rate, while the development server has about 10 rows per minute inserted (just enough to let us check that our real-time aggregation code and graphical display routines are working properly).
 
After restoring, when we fired up the service responsible for record insertion, I began receiving constraint violations on the columns controlled by the sequences.  The tables had higher values in them than the sequences.  This raised a huge red flag for me.  My concern was that the aggregated data tables might not reflect the data in the raw inserted tables - essentially, that they might be out of sync.
 
The particular table which was problematic (and for which I posted another message, due to the unique constraint violation which I am seeing intermittently) is the one with the high insertion rate.  The sequence is currently being used to facilitate purging of old records.  However, as I study and play more with PostgreSQL, I have found the ability to partition a table.  Once I move to table partitioning, my problem of purging data past its retention period will be fixed.
 
My entire concept may have been incorrect; it is based on my experience with MS SQL Server, where, when I purged records based on date, the large amounts of data created huge transaction logs, which in some cases used so much disk space that the database imploded!  The workaround which I created under SQL Server was to assign an identity field to each row, select the minimum value for the day to be purged, and then purge records 10,000 at a time within transactions.  This kept the transaction file small and the database from exploding due to running out of disk space.
 
It is very possible that this may not have been an issue with PostgreSQL, but I could not take a chance, so I ported the methodology over.  The new architecture will have a table partition for each month (12 partitions).  Once the retention period of a given partition expires, it will simply be truncated.
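
Something along these lines, assuming a raw-log table with a timestamp
column (all names here are made up):

    -- one child table per month, 8.1-style inheritance partitioning
    CREATE TABLE raw_log_2006_04 (
        CHECK (logged_at >= DATE '2006-04-01' AND logged_at < DATE '2006-05-01')
    ) INHERITS (raw_log);

    -- purging an expired month is then instant, unlike a mass DELETE:
    TRUNCATE TABLE raw_log_2006_04;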
 
Sorry for the rambling, but if I understand you correctly, the only items which were out of sync were the sequences, and all of the tables would have maintained consistency relative to each other?  If so, once I get rid of the unnecessary sequences, I can create a small function to be run after a restore which resets the remaining sequences to the proper values.  That would be simple enough, and would provide an easily implemented solution.
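
For instance (table and column names are placeholders):

    -- run after a restore: realign a sequence with its table's highest key
    SELECT setval('raw_log_id_seq', (SELECT COALESCE(max(id), 1) FROM raw_log));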
 
You'll probably see me in here asking lots of questions as I cut my teeth on PostgreSQL.  Hopefully, at some point in the future, I will be able to contribute back with solutions :)
 
Once again, thank you.  Also, did you receive the snippet of the stored procedure which I sent you?  As I mentioned, the only place where row insertion is performed is via that stored procedure, and the sequences were created by defining the columns as "bigserial", which still has me puzzled as to why I am experiencing the constraint violation on the unique primary key.
 
Regards,
 
Benjamin 



Re: Howto: Using PITR recovery for standby replication

From:
Alvaro Herrera
Date:
Benjamin Krajmalnik wrote:

> The particular table which was problematic (and for which I posted
> another message due to the unique constraint violation which I am
> seeing intermittently) is the one with the high insertion rate.  The
> sequence is currently being used to facilitate purginf of old records.

How are you creating the dumps of the sequence and the table?  If you do
both separately (as in two pg_dump invocations with a -t switch each),
that could explain your problem.  This shouldn't really happen however,
because the sequence dump should be emitted in a dump of the table, if
the field is really of SERIAL or BIGSERIAL type.  However I don't see
any other way which would make the sequence go out of sync.

--
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

Re: Howto: Using PITR recovery for standby replication

From:
"Benjamin Krajmalnik"
Date:
Alvaro,
 
I am a newbie, so I essentially invoked pg_dump from within pgAdmin3, with the defaults (including large objects).
This is the command being issued:
 
C:\Program Files\PostgreSQL\8.1\bin\pg_dump.exe -i -h 172.20.0.32 -p 5432 -U postgres -F c -b -v -f "C:\Documents and Settings\administrator.MS\testbk.backup" events
 
What I assumed was happening (and I may very well have been wrong) was that I was getting a consistent backup of each object at the time it was processed, but not of the database as a whole.
 



Re: Howto: Using PITR recovery for standby replication

From:
Alvaro Herrera
Date:
Benjamin Krajmalnik wrote:

> I am a newbie, so I essentially invoked pg_dump from with pgAdmin3,
> with the defaults (including large objects).  This is the command
> being issued:
>
> C:\Program Files\PostgreSQL\8.1\bin\pg_dump.exe -i -h 172.20.0.32 -p 5432 -U postgres -F c -b -v -f "C:\Documents and Settings\administrator.MS\testbk.backup" events
>
> What I assumed was happening (and I may have very well been wrong) was
> that I was getting a consistent backup of the object at the time that
> it was processed, but not the database as a whole.

This command should produce a consistent dump of all the objects in the
database.  (Not a consistent view of each object in isolation, which is
AFAIU what you are saying.)

Next question is, how are you restoring this dump?

--
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

Re: Howto: Using PITR recovery for standby replication

From:
"Benjamin Krajmalnik"
Date:
1.  Dropped database
2.  Recreated a blank database
3.  C:\Program Files\PostgreSQL\8.1\bin\pg_restore.exe -i -h 172.20.0.32 -p 5432 -U postgres -d events -v "C:\Documents and Settings\administrator.MS\testbk.backup"
pg_restore: connecting to database for restore
pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating PROCEDURAL LANGUAGE plpgsql
pg_restore: creating TABLE appointments
pg_restore: executing SEQUENCE SET appointments_id_seq
pg_restore: restoring data for table "appointments"
pg_restore: setting owner and privileges for SCHEMA public
pg_restore: setting owner and privileges for COMMENT SCHEMA public
pg_restore: setting owner and privileges for ACL public
pg_restore: setting owner and privileges for PROCEDURAL LANGUAGE plpgsql
pg_restore: setting owner and privileges for TABLE appointments
Process returned exit code 0.
(The above is the result of a sample restore - the database is a simple one with very few records, and it was backed up when no activity was taking place against it, unlike our production database.)
 
Maybe I am using the wrong switches altogether to accomplish my end results.
 
Thanks a million for your willingness to point me in the right direction.



Re: Howto: Using PITR recovery for standby replication

From:
"Benjamin Krajmalnik"
Date:
Tom,
 
Just wanted to let you know that I found the problem with the constraint violation on insert.
When I restored from a backup, I forgot to update the sequences' current values to the max values of the associated tables.
The problem had nothing to do with SP call sequencing, but rather with the sequence assigning a value which was already in use (and which was way below the current max value in the table).
 
After your message concerning the sequence values, the light bulb came on in my head and I went to check those.
Now I can go to sleep without worrying :)
 
Thanks for all your help!

