Thread: Question on moving data to new partitions


Question on moving data to new partitions

From:
"Benjamin Krajmalnik"
Date:

I have some tables which have an extremely high amount of update activity on them.  I have changed the autovacuum parameters (cost delay and limit), and whereas before they would never be vacuumed and would bloat, they are now running fine.

However, as the platform scales, I am afraid I will reach the same situation.

As a result, I have decided to partition the table and add to each record a partition id, which can be used to route it to the correct partition.

Presently, all of the records reside on what will ultimately become the parent partition.

What would be the best way of moving the data to the pertinent partitions?

I was thinking of copying the data to another table, then performing an insert into partitionedtableparent select * from the temporary table, and then performing a delete from only partitionedtableparent.

Does this sound like a reasonable way of doing this?  Is there a more efficient way of doing this?


Re: Question on moving data to new partitions

From:
Scott Marlowe
Date:
On Wed, Jan 13, 2010 at 5:51 PM, Benjamin Krajmalnik <kraj@illumen.com> wrote:
> I have some tables which have an extremely high amount of update activity on
> them.  I have changed autovacuum parameters (cost delay and limit), and
> whereas before they would never be vacuumed and bloat they are running fine.
>
> However, as the platform scales, I am afraid I will reach the same
> situation.
>
> As a result, I have decided to partition the table and add to each record a
> partition id, which can be used to route it to the correct partition.
>
> Presently, all of the records reside on what will ultimately become the
> parent partition.

Are you using table inheritance to do this, or are they all independent tables?

> What would be the best way of moving the data to the pertinent partitions?
>
> I was thinking of copying the data to another table and then performing a
> insert into partitionedtableparent select * from temporary table, and then
> performing a delete from only partitionedtableparent.
>
> Does this sound like a reasonable way of doing this?  Is there a more
> efficient way of doing this?

You can probably skip a few steps there if you copy straight to the
destination table.

At work, where we have partitioned out some tables, I made a trigger
based inherited table setup, and basically did something like:

insert into master_table select * from master_table where id between 1
and 100000;
delete from only master_table where id between 1 and 100000;

Then incremented the between values until all the tuples had been moved, then I

truncate only master_table;

and it worked like a charm.

Re: Question on moving data to new partitions

From:
"Benjamin Krajmalnik"
Date:
Yes, I will be using table inheritance and inheriting the current table where the data resides.
I was wondering if it would be "kosher" to perform the insert on itself, but I guess since the rules engine takes over there should not be a problem.
The tables are not huge per se (a little over 50K records).  The problem is that each record gets updated at least 500 times per day, so the row versions are quite extensive and need to be vacuumed often.  I can't afford to take chances on the tables bloating because, from experience, it will slow down the system and create a snowball effect where data coming in gets backed up.  By keeping the number of records in each partition small, I can ensure that autovacuum will always be able to run.  As the need arises, I can always create additional partitions to accommodate the growth.

As always, thank you very much Scott.  You are always very helpful.



> -----Original Message-----
> From: Scott Marlowe [mailto:scott.marlowe@gmail.com]
> Sent: Wednesday, January 13, 2010 5:58 PM
> To: Benjamin Krajmalnik
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Question on moving data to new partitions
>
> On Wed, Jan 13, 2010 at 5:51 PM, Benjamin Krajmalnik <kraj@illumen.com>
> wrote:
> > I have some tables which have an extremely high amount of update
> activity on
> > them.  I have changed autovacuum parameters (cost delay and limit),
> and
> > whereas before they would never be vacuumed and bloat they are
> running fine.
> >
> > However, as the platform scales, I am afraid I will reach the same
> > situation.
> >
> > As a result, I have decided to partition the table and add to each
> record a
> > partition id, which can be used to route it to the correct partition.
> >
> > Presently, all of the records reside on what will ultimately become
> the
> > parent partition.
>
> Are you using table inheritance to do this?  or are they all
> independent tables?
>
> > What would be the best way of moving the data to the pertinent
> partitions?
> >
> > I was thinking of copying the data to another table and then
> performing a
> > insert into partitionedtableparent select * from temporary table, and
> then
> > performing a delete from only partitionedtableparent.
> >
> > Does this sound like a reasonable way of doing this?  Is there a more
> > efficient way of doing this?
>
> You can probably skip a few steps there if you copy straight to the
> destination table.
>
> At work, where we have partitioned out some tables, I made a trigger
> based inherited table setup, and basically did something like:
>
> insert into master_table select * from master_table where id between 1
> and 100000;
> delete from only master_table where id between 1 and 100000;
>
> Then incremented the between values until all the tuples had been
> moved, then I
>
> truncate only master_table;
>
> and it worked like a charm.

Re: Question on moving data to new partitions

From:
Scott Marlowe
Date:
On Wed, Jan 13, 2010 at 6:11 PM, Benjamin Krajmalnik <kraj@illumen.com> wrote:
> Yes, I will be using table inheritance and inheriting the current table where the data resides.
> I was wondering if it would be "kosher" to perform the insert on itself, but I guess since the rules engine takes over there should not be a problem.
> The tables are not huge per se (a little over 50K records).  The problem is that each record gets updated at least 500 times per day, so the row versions are quite extensive and need to be vacuumed often.  Can't afford to take chances on the tables bloating because, from experience, it will slow down the system and create a snowball effect where data coming in gets backed up.  By keeping the number of records in each partition small, I can ensure that autovacuum will always be able to run.  As the need arises, I can always create additional partitions to accommodate for the growth.
>
> As always, thank you very much Scott.  You are always very helpful.

My one recommendation would be to look at using triggers over rules.
I have a simple cronjob written in php that creates new partitions and
triggers each night at midnight.  Triggers are MUCH faster than rules
for partitioning, but making them fancy is a giant pain in plpgsql.  I
just write a big trigger with an if/elseif/else tree that handles each
situation.  It runs very fast.
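
The if/elseif routing trigger Scott describes can be sketched roughly as follows. This is an illustrative sketch, not code from the thread: the table name `stats`, the `partition_id` column, and the child names `stats_p1`..`stats_p3` are all hypothetical.

```sql
-- Hypothetical routing trigger for an inheritance-partitioned table.
-- Assumes children stats_p1..stats_p3 keyed on a partition_id column.
CREATE OR REPLACE FUNCTION stats_insert_router() RETURNS trigger AS $$
BEGIN
    IF NEW.partition_id = 1 THEN
        INSERT INTO stats_p1 VALUES (NEW.*);
    ELSIF NEW.partition_id = 2 THEN
        INSERT INTO stats_p2 VALUES (NEW.*);
    ELSE
        INSERT INTO stats_p3 VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER stats_insert_trig
    BEFORE INSERT ON stats
    FOR EACH ROW EXECUTE PROCEDURE stats_insert_router();
```

Because the function returns NULL, rows inserted through the parent never land in the parent itself, which is what makes the insert-then-delete migration described up-thread safe to run against the parent table.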

Re: Question on moving data to new partitions

From:
Radhika Sambamurti
Date:

Scott Marlowe wrote:
> On Wed, Jan 13, 2010 at 6:11 PM, Benjamin Krajmalnik <kraj@illumen.com> wrote:
>
>> Yes, I will be using table inheritance and inheriting the current table where the data resides.
>> I was wondering if it would be "kosher" to perform the insert on itself, but I guess since the rules engine takes over there should not be a problem.
>> The tables are not huge per se (a little over 50K records).  The problem is that each record gets updated at least 500 times per day, so the row versions are quite extensive and need to be vacuumed often.  Can't afford to take chances on the tables bloating because, from experience, it will slow down the system and create a snowball effect where data coming in gets backed up.  By keeping the number of records in each partition small, I can ensure that autovacuum will always be able to run.  As the need arises, I can always create additional partitions to accommodate for the growth.
>>
>> As always, thank you very much Scott.  You are always very helpful.
>>
>
> My one recommendation would be to look at using triggers over rules.
> I have a simple cronjob written in php that creates new partitions and
> triggers each night at midnight.  Triggers are MUCH faster than rules
> for partitioning, but making them fancy is a giant pain in plpgsql.  I
> just write a big trigger with an if/elseif/else tree that handles each
> situation.  It runs very fast.
>
>
Hi,
I am currently looking into partitioning a table for which 90% of the
lookups are for the prior week. It has about 9 million rows, and selects
are a bit slow, since the table is joined to two other tables. I am
planning on doing a range partition, i.e. each year starting from 2005
will be its own partition, so the check constraints will be year based.
I have run tests, and what I see is that the optimizer can find the
correct table when I search by year, but when I search by, say, recid
(the PK), it does a seq scan on every single child table.
To have the optimizer recognize the recid, do I need to include that in
the check constraint?

2. When you say you wrote a trigger, was it instead of the insert rule?

This is pretty new stuff to me and any insight into this would be helpful.

Thanks,
Radhika


Re: Question on moving data to new partitions

From:
Scott Marlowe
Date:
On Wed, Jan 13, 2010 at 7:30 PM, Radhika Sambamurti <rs1@speakeasy.net> wrote:
>
> Hi,
> I am currently looking into partitioning a table of which 90% of the lookups
> are for the prior week. It has about 9 million rows and  selects  are a bit
> slow, since  the table is joined to  two other tables.  I am planning on
> doing a range partition ie each year starting from 2005 will be its own
> partition. So the check constraints will be year based. I have run tests and
> what I see is that the optimizer can find the correct table when I search by
> year, but when I search by say recid (PK), it does a seq scan on every
> single child table.

Do you have an index on each of the tables on recid?

> To have the optimizer recognize the recid, do I need to include that in the
> check constraint?

Not sure.  I'd have to test it.  I thought the query planner was smart
enough to tell if an index would be useful even if it had to hit it
for each table.
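
One concrete point behind Scott's question: with inheritance partitioning, indexes are not inherited from the parent, so each child needs its own index before recid lookups can avoid the seq scans Radhika describes. A hedged sketch (the parent/child and index names are illustrative, not from the thread):

```sql
-- Indexes are not inherited: create one per child partition.
-- Assuming yearly children named parent_2005, parent_2006, ...:
CREATE INDEX parent_2005_recid_idx ON parent_2005 (recid);
CREATE INDEX parent_2006_recid_idx ON parent_2006 (recid);

-- Constraint exclusion only prunes children whose CHECK constraint
-- contradicts the WHERE clause, so it helps the year-based searches
-- but not recid lookups; make sure it is enabled:
SET constraint_exclusion = on;
```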

> 2. When you say you wrote a trigger, was it instead of the insert rule?

Yes.  Using rules results in much worse insert performance than a
trigger, generally.  However, since a rule re-writes queries, if a
single query were to insert many thousands of rows, a rule might be
faster than a trigger, which fires for each row even if they all come
from the same query.

> This is pretty new stuff to me and any insight into this would be helpful.

As Cole Porter would say, "Experiment"...

Re: Question on moving data to new partitions

From:
Dimitri Fontaine
Date:
"Benjamin Krajmalnik" <kraj@illumen.com> writes:
> As a result, I have decided to partition the table and add to each record a partition id, which can be used to route it to the correct partition.
>
> Presently, all of the records reside on what will ultimately become the parent partition.
>
> What would be the best way of moving the data to the pertinent
> partitions?

What I usually do is rename the current table to partition_201001, say,
with a CHECK constraint such as date < '2010-01-01'. In the case of
date-based ranges, of course.

Then create the new parent table, empty, and set up the inheritance and
trigger. Then create some more future child tables, and commit.

New data is routed as it arrives, and the old data stays packed in one
child. You can reshuffle that archive-like table later on, if needed.
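
A minimal sketch of this rename-and-reparent procedure. The table name `events`, the `created` column, and the date ranges are illustrative, not from the thread:

```sql
BEGIN;

-- The existing full table becomes the first (archive) child.
ALTER TABLE events RENAME TO events_pre_2010;
CREATE TABLE events (LIKE events_pre_2010 INCLUDING DEFAULTS);

ALTER TABLE events_pre_2010 INHERIT events;
ALTER TABLE events_pre_2010
    ADD CONSTRAINT events_pre_2010_ck
    CHECK (created < DATE '2010-01-01');

-- Pre-create some future children as well.
CREATE TABLE events_2010_01 (
    CHECK (created >= DATE '2010-01-01' AND created < DATE '2010-02-01')
) INHERITS (events);

-- Plus the routing trigger on events, set up as described above.
COMMIT;
```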

Also, as noted down-thread, avoid rules and prefer triggers. One of the
reasons is locking behaviour: dropping a partition when using rules will
lock against queries running against the parent table.

Regards,
--
dim

PITR online backups Setup

From:
Renato Oliveira
Date:
Dear all,

I am trying to setup PITR for online backup and also to create a more robust setup, in case our primary Postgres dies.

I am testing this setup on Centos 5.4, Postgres version: 8.1.18 (not sure why Centos has this old version by default).

I have enabled PITR with the following settings in /var/lib/pgsql/data/postgresql.conf:
archive_command = on
archive_command = 'cp -i %p /data/pgsql/archives/%f </dev/null'
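
For reference, the duplicated archive_command line above reflects a common mix-up: on 8.1/8.2 archiving is enabled simply by setting a non-empty archive_command, while the separate archive_mode switch only appeared in 8.3. A minimal postgresql.conf sketch for 8.3 or later (the archive path is illustrative):

```
# postgresql.conf, PostgreSQL 8.3+
archive_mode = on
archive_command = 'cp %p /data/pgsql/archives/%f'
```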

I have created the folder: /data/pgsql/archives and changed ownership to postgres:postgres

I can see the archiving seems to be working:
ls -ls /data/pgsql/archives
16404 -rw------- 1 postgres postgres 16777216 Jan 13 10:12 000000010000000000000000
16404 -rw------- 1 postgres postgres 16777216 Jan 13 10:12 000000010000000000000001
16404 -rw------- 1 postgres postgres 16777216 Jan 13 10:12 000000010000000000000002
16404 -rw------- 1 postgres postgres 16777216 Jan 13 10:12 000000010000000000000003
16404 -rw------- 1 postgres postgres 16777216 Jan 13 12:59 000000010000000000000004
    4 -rw------- 1 postgres postgres      247 Jan 13 12:41 000000010000000000000004.00C18E68.backup
16404 -rw------- 1 postgres postgres 16777216 Jan 13 12:59 000000010000000000000005
16404 -rw------- 1 postgres postgres 16777216 Jan 13 12:59 000000010000000000000006
16404 -rw------- 1 postgres postgres 16777216 Jan 13 12:59 000000010000000000000007
16404 -rw------- 1 postgres postgres 16777216 Jan 13 12:59 000000010000000000000008

I have done the base backup:
psql
select pg_start_backup('Full Backup - Master');
tar -cvzf /var/lib/pgsql/data/potgresMASTER.tar /var/lib/pgsql/data/
psql
select pg_stop_backup();

Now I am trying to setup the secondary server, this is where I am stuck.

1 - I tried to set up rsync to ship these logs across to the remote server, but I can't get postgres to work with authorized_keys.
How are you doing this? NFS will not be an option.

WARM server setup
On the Standby Server
I have restored the base backup
tar -zxvf potgresMASTER.tar under /var/lib/pgsql/data

I have heard of pg_standby, but apparently I have to compile it against the postgres source. I am not sure how this would help me; if anyone could explain it to me, it would be really helpful.

Then I need to create /var/lib/pgsql/data/recovery.conf and add lines similar to:
restore_command = 'cp /data/pgsql/archives/%f %p'

Do I need to turn this "on" somewhere? There seems to be some inconsistency in the information out there:
some people say I have to set archive_mode = on, some say I have to use archive_command = on.

Do I need to do similar for recovery.conf?

Once I have rsync shipping logs to the remote server and recovery.conf configured, is that all I need to do to consider it complete and working?
Are there any other aspects that I need to consider?

I would be very thankful to all of you for any help.

Thank you very much for all your replies.

Best regards




Renato Oliveira

e-mail: renato.oliveira@grant.co.uk

Tel: +44 (0)1763 260811
Fax: +44 (0)1763 262410
http://www.grant.co.uk/

Grant Instruments (Cambridge) Ltd

Company registered in England, registration number 658133

Registered office address:
29 Station Road,
Shepreth,
CAMBS SG8 6GB
UK








Please consider the environment before printing this email.
CONFIDENTIALITY: The information in this e-mail and any attachments is confidential. It is intended only for the named recipient(s). If you are not the named recipient please notify the sender immediately and do not disclose the contents to another person or take copies.

VIRUSES: The contents of this e-mail or attachment(s) may contain viruses which could damage your own computer system. Whilst Grant Instruments (Cambridge) Ltd has taken every reasonable precaution to minimise this risk, we cannot accept liability for any damage which you sustain as a result of software viruses. You should therefore carry out your own virus checks before opening the attachment(s).

OpenXML: For information about the OpenXML file format in use within Grant Instruments please visit http://www.grant.co.uk/Support/openxml.html


Re: PITR online backups Setup

From:
"Joshua D. Drake"
Date:
On Thu, 14 Jan 2010 14:33:52 +0000, Renato Oliveira
<renato.oliveira@grant.co.uk> wrote:
> Dear all,
>
> I am trying to setup PITR for online backup and also to create a more
> robust setup, in case our primary Postgres dies.
>
> I am testing this setup on Centos 5.4, Postgres version: 8.1.18 (not
> sure why Centos has this old version by default).

Get off 8.1.18, use www.pgsqlrpms.org.

You want AT LEAST 8.3.

Also, make your life even easier:

https://projects.commandprompt.com/public/pitrtools

Joshua D. Drake

--
PostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997

Re: PITR online backups Setup

From:
Renato Oliveira
Date:

Julio,

 

Thank you for your reply. I was wondering if anyone has encountered problems setting up PITR with 8.2.4?

Unfortunately our live system runs 8.2.4 and it is quite tricky to upgrade it right now.

 

By the way, where can I find a how-to for pitr-tools?

 

Thank you very much

 

Best regards

 

Renato

 

 

From: Julio Leyva [mailto:jcleyva@hotmail.com]
Sent: 14 January 2010 18:47
To: Renato Oliveira
Subject: RE: [ADMIN] PITR online backups Setup

 

You'd better update to PostgreSQL 8.3; PITR works very well with this version.

I remember trying to set it up with 8.1 and it did not work.


> From: renato.oliveira@grant.co.uk
> To: pgsql-admin@postgresql.org
> Date: Thu, 14 Jan 2010 14:33:52 +0000
> Subject: [ADMIN] PITR online backups Setup
>
> Dear all,
>
> I am trying to setup PITR for online backup and also to create a more robust setup, in case our primary Postgres dies.
>
> I am testing this setup on Centos 5.4, Postgres version: 8.1.18 (not sure why Centos has this old version by default).
>
> I have enabled PITR, with the following commands:
> /var/lib/pgsql/data/postgresql.conf
> archive_command = on
> archive_command = 'cp -i %p /data/pgsql/archives/%f </dev/null'
>
> I have created the folder: /data/pgsql/archives and changed ownership to postgres:postgres
>
> I can see the archiving seems to be working:
> Ls -ls /data/pgsql/archives
> 16404 -rw------- 1 postgres postgres 16777216 Jan 13 10:12 000000010000000000000000
> 16404 -rw------- 1 postgres postgres 16777216 Jan 13 10:12 000000010000000000000001
> 16404 -rw------- 1 postgres postgres 16777216 Jan 13 10:12 000000010000000000000002
> 16404 -rw------- 1 postgres postgres 16777216 Jan 13 10:12 000000010000000000000003
> 16404 -rw------- 1 postgres postgres 16777216 Jan 13 12:59 000000010000000000000004
> 4 -rw------- 1 postgres postgres 247 Jan 13 12:41 000000010000000000000004.00C18E68.backup
> 16404 -rw------- 1 postgres postgres 16777216 Jan 13 12:59 000000010000000000000005
> 16404 -rw------- 1 postgres postgres 16777216 Jan 13 12:59 000000010000000000000006
> 16404 -rw------- 1 postgres postgres 16777216 Jan 13 12:59 000000010000000000000007
> 16404 -rw------- 1 postgres postgres 16777216 Jan 13 12:59 000000010000000000000008
>
> I have done the base backup:
> Psql
> select pg_start_backup('Full Backup - Master');
> tar -cvzf /var/lib/pgsql/data/potgresMASTER.tar /var/lib/pgsql/data/
> psql
> select pg_stop_backup();
>
> Now I am trying to setup the secondary server, this is where I am stuck.
>
> 1 - I tried to setup rsync to ship these logs across to the remote server, but I can't get postgres to work with authorized_keys
> How you guys are doing this? NFS will not be an option.
>
> WARM server setup
> On the Standby Server
> I have restored the base backup
> Tar -zxvf potgresMASTER.tar under /var/lib/pgsql/data
>
> I have heard of pg_standby but apparently I have to compile it against postgres/source, not sure how this would help me, if anyone could help me explaining it to me, it would be really helpful.
>
> Then I need to create a /var/lib/pgsql/data/recovery.conf and add similar lines
> restore_command = 'cp /data/pgsql/archives/%f %p'
>
> Do I need to turn this "ON" somewhere, because there seems to be an inconsistency on information around:
> Some people says I have to turn archive_mode = on, some says I have to use archive_command = on
>
> Do I need to do similar for recovery.conf?
>
> Once I have rsync shipping logs to the remote server, recovery.conf configured, is that all I need to do to consider it complete and working?
> Are there any other aspect that I need to consider?
>
> I would be very thankful too all of you for any helps.
>
> Thank you very much for all your repplies
>
> Best regards
>

 
 

 


Re: PITR online backups Setup

From:
"Joshua D. Drake"
Date:
On Mon, 2010-01-18 at 13:41 +0000, Renato Oliveira wrote:
> Julio,
>

> Unfortunately our live system runs 8.2.4 and it is quite tricky to
> upgrade it right now.
>
>
>
> By the way where can I find a how to use pitr-tools?

https://projects.commandprompt.com/public/pitrtools

Joshua D. Drake



--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564
Consulting, Training, Support, Custom Development, Engineering
Respect is earned, not gained through arbitrary and repetitive use or Mr. or Sir.


PostgreSQL backup idea

From:
Renato Oliveira
Date:
Dear all,

I have been thinking about PostgreSQL backup for some time now.

I can't implement PITR right now on our live systems, for commercial reasons.

I have been backing up the server with pg_dump every two days; the reason is that it sometimes takes more than 24 hours to back up the full database.

I have an idea, but I am not sure how workable it is:
1 - Back up the live server by piping the output of pg_dump to psql, restoring it to the second database server.
I have tested this with a small database on my test model and it works; I am not sure how long it will take, though.

2 - I also thought that, once I have backed up the full database, I could check which tables have changed and only back those up to the remote server.
I am not sure if it is possible to figure out which tables have changed; is there a log or some command which can tell me?

3 - Another idea would be to back up the full DB, then check when the last update was, and from there do a backup and restore remotely.
For example: the last update was at 13:00, so from 13:00 onwards I would copy all the records and restore them on the remote server.
Is it possible to find out the last minute in which there was an update, and then back up only the records updated since then?
If so, how would I go about doing that?

4 - Backup using transactionID instead of time.
Do a full backup, mark what the last transactionID was at the minute the backup finished, and from then onwards do backups only for the updated transactionIDs.
For example: the full backup finishes at 13:00, and the last transactionID at 13:00 would be 00013; then from 13:01 onwards back up the updates, and so on.

I am not sure if some of these things are possible; these are only ideas, and I would appreciate any input and help in either building them up or knocking them down.

If anyone has a backup script which handles failure and emails out, and would like to share it for me to study, I would very much appreciate it.

If you need more details why, reasons etc, please email me and I will clarify.

I am trying to work around the problems I am facing currently.

Thank you very much.

Really appreciate any help and input

Best regards

Renato





Re: PostgreSQL backup idea

From:
Jesper Krogh
Date:
Renato Oliveira wrote:
> Dear all,
>
> I have been thinking about PostgreSQL backup for some time now.
>
> I can't implement PITR right now on our live systems, for commercial
> reasons.

I think you need to rethink this one...

> 4 - Backup using transactionID instead of time. Do a full backup, mark
> what the last transactionID was at the minute the backup finished, and
> from then onwards do backups only for the updated transactionIDs. For
> example: the full backup finishes at 13:00, and the last transactionID
> at 13:00 would be 00013; then from 13:01 onwards back up the updates,
> and so on.

This is conceptually PITR.

--
Jesper