Discussion: Long term database archival

Long term database archival

From
"Karl O. Pinc"
Date:
Hi,

What is the best pg_dump format for long-term database
archival?  That is, what format is most likely to
be able to be restored into a future PostgreSQL
cluster.

Mostly, we're interested in dumps done with
--data-only, and have preferred the
default (-F c) format.  But this form is somewhat more
opaque than a plain text SQL dump, which is bound
to be supported forever "out of the box".
Should we want to restore a 20 year old backup
nobody's going to want to be messing around with
decoding a "custom" format dump if it does not
just load all by itself.

Is the answer different if we're dumping the
schema as well as the data?

Thanks.

Karl <kop@meme.com>
Free Software:  "You don't pay back, you pay forward."
                  -- Robert A. Heinlein


Re: Long term database archival

From
"Florian G. Pflug"
Date:
Karl O. Pinc wrote:
> Hi,
>
> What is the best pg_dump format for long-term database
> archival?  That is, what format is most likely to
> be able to be restored into a future PostgreSQL
> cluster.
>
> Mostly, we're interested in dumps done with
> --data-only, and have preferred the
> default (-F c) format.  But this form is somewhat more
> opaque than a plain text SQL dump, which is bound
> to be supported forever "out of the box".
> Should we want to restore a 20 year old backup
> nobody's going to want to be messing around with
> decoding a "custom" format dump if it does not
> just load all by itself.
For schema dumps the custom format has advantages IMHO,
mainly because it adds flexibility. When creating text-formatted
dumps, you have to specify options like "--no-owner, ..."
at _dumping_ time, while custom-format dumps allow you to
specify them at _restoration_ time.

For data dumps this is less relevant, since the number of
available options is much smaller. But even there, restoring
with "insert statements" as opposed to "copy from stdin" could
be useful in some situations.
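
For concreteness, a minimal sketch of the difference (the database and
file names here are made up):

    # plain SQL dump: options such as --no-owner must be decided at dump time
    pg_dump --no-owner mydb > mydb.sql
    psql -d newdb -f mydb.sql

    # custom-format dump: the same choice can be deferred to restore time
    pg_dump -F c mydb > mydb.dump
    pg_restore --no-owner -d newdb mydb.dump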

Anyway, 20 years is a _long_, _long_ time. If you _really_
need to keep your data that long, I'd suggest you create
text-only schema dumps, and text-only data dumps. The postgres
developers are very concerned about backward compatibility in
my experience, but probably _not_ for versions from 20 years ago ;-)

But since the probability of needing to restore your backup
in 6 months is _much_ larger than that of needing to restore
it in 20 years, I'd create custom-format dumps too.
For the near future, they're the better choice IMHO.

> Is the answer different if we're dumping the
> schema as well as the data?
The above holds true for the schema as well as for the data.

greetings, Florian Pflug

Re: Long term database archival

From
Ron Johnson
Date:

Florian G. Pflug wrote:
> Karl O. Pinc wrote:
[snip]
> Anyway, 20 years is a _long_, _long_ time. If you _really_ need
> to keep your data that long, I'd suggest you create text-only
> schema dumps, and text-only data dumps. The postgres developers
> are very concerned about backward compatibility in my experience,
> but probably _not_ for versions from 20 years ago ;-)

20 years seems pretty long, but SARBOX sets many data retention
requirements at 7 years.

In a similar vein, we are the back-office contractor for a major
toll-road consortium, and regularly get subpoenas for transaction
details as old as 5 years.

The hassle of having to go thru old tapes and extract dozens and
dozens of GB of data just to ultimately retrieve 40 records is the
hook I'm using to get PostgreSQL into our old-guard datacenter.

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.

Re: Long term database archival

From
Agent M
Date:
Will postgresql be a viable database in 20 years? Will SQL be used
anywhere in 20 years? Are you sure 20 years is your ideal backup
duration?

Very few media even last 5 years. The good thing about open source and
open standards is that regardless of the answers to those questions,
there is no proprietary element to prevent you from accessing that
data- simply decide what it will be and update your backups along the
way. Whether such data will be relevant/ useful to anyone in 20 years
is a question you have to answer yourself. Good luck.

-M

On Jul 6, 2006, at 2:57 PM, Karl O. Pinc wrote:

> Hi,
>
> What is the best pg_dump format for long-term database
> archival?  That is, what format is most likely to
> be able to be restored into a future PostgreSQL
> cluster.
>
> Mostly, we're interested in dumps done with
> --data-only, and have preferred the
> default (-F c) format.  But this form is somewhat more
> opaque than a plain text SQL dump, which is bound
> to be supported forever "out of the box".
> Should we want to restore a 20 year old backup
> nobody's going to want to be messing around with
> decoding a "custom" format dump if it does not
> just load all by itself.
>
> Is the answer different if we're dumping the
> schema as well as the data?
>
> Thanks.

AgentM
agentm@themactionfaction.com


Re: Long term database archival

From
"Karl O. Pinc"
Date:
On 07/06/2006 06:14:39 PM, Florian G. Pflug wrote:
> Karl O. Pinc wrote:
>> Hi,
>>
>> What is the best pg_dump format for long-term database
>> archival?  That is, what format is most likely to
>> be able to be restored into a future PostgreSQL
>> cluster.

> Anyway, 20 years is a _long_, _long_ time.

Yes, but our data goes back over 30 years now
and is never deleted, only added to, and I
recently had occasion to want to look at a
backup from 1994-ish.  So, yeah we probably do
really want backups for that long.  They
probably won't get used, but we'll feel better.

Thanks.

Karl <kop@meme.com>
Free Software:  "You don't pay back, you pay forward."
                  -- Robert A. Heinlein


Re: Long term database archival

From
Richard Broersma Jr
Date:
> Will postgresql be a viable database in 20 years? Will SQL be used
> anywhere in 20 years? Are you sure 20 years is your ideal backup
> duration?
>
> Very few media even last 5 years. The good thing about open source and
> open standards is that regardless of the answers to those questions,
> there is no proprietary element to prevent you from accessing that
> data- simply decide what it will be and update your backups along the
> way. Whether such data will be relevant/ useful to anyone in 20 years
> is a question you have to answer yourself. Good luck.

I am not too sure of the relevance, but I periodically worked as a sub-contractor for an
oil-producing company in California.  They were carrying 35 years of data on an Alpha server
running CA-Ingres.  The really bad part is that hundreds and hundreds of reporting tables were
created on top of the functioning system for reporting over the years.  Now nobody knows which
tables are relevant and which are redundant and/or deprecated.

Also, year after year, new custom text-file reports were created with procedural scripts.  The load
on the server was such that the daily reporting was taking nearly 23 hours to complete.  And
the requests for new reports were getting the IT department very worried.

Worst of all, no one there really knew the ins and outs of Ingres well enough to do anything about it.

Well, I take part of that back.  They recently upgraded to a newer Alpha to reduce the time the daily
reporting was taking. :-)

Regards,

Richard Broersma Jr.

Re: Long term database archival

From
Ron Johnson
Date:

Agent M wrote:
> Will postgresql be a viable database in 20 years? Will SQL be used
> anywhere in 20 years? Are you sure 20 years is your ideal backup duration?

SQL was used 20 years ago, why not 20 years from now?

I can't see needing data from 10 years ago, but you never know.
Thank $DEITY for microfilm; otherwise, we'd not know a whole lot
about what happened 150 years ago.

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.

Re: Long term database archival

From
Agent M
Date:
> I am not too sure of the relevance, but I periodically worked as a
> sub-contractor for an
> oil-producing company in California.  They were carrying 35 years of
> data on an Alpha server
> running CA-Ingres.  The really bad part is that hundreds and hundreds
> of reporting tables were
> created on top of the functioning system for reporting over the years.
>  Now nobody knows which
> tables are relevant and which are redundant and/or deprecated.
>
> Also, year after year, new custom text-file reports were created with
> procedural scripts.  The load
> on the server was such that the daily reporting was taking nearly
> 23 hours to complete.  And
> the requests for new reports were getting the IT department very
> worried.

But the data from 35 years ago wasn't stored in Ingres and, if it's
important, it won't stay in Ingres. The data shifts from format to
format as technology progresses.

It seemed to me that the OP wanted some format that would be readable
in 20 years. No one can guarantee anything like that.

-M

AgentM
agentm@themactionfaction.com


Re: Long term database archival

From
Richard Broersma Jr
Date:
> But the data from 35 years ago wasn't stored in Ingres and, if it's
> important, it won't stay in Ingres. The data shifts from format to
> format as technology progresses.
>
> It seemed to me that the OP wanted some format that would be readable
> in 20 years. No one can guarantee anything like that.

What you are saying could be true, but that wasn't what I was led to believe.  This database was
logging data from the production automation system.  I believe the need for 30+ years of data was
because the client was interested in determining / trending the gradual drop-off in production
over the years.

Their interest is in extrapolating the profitability lifetime of their facility.  Essentially, they want to
know how long they have before they have to "close the doors."

But you are probably correct; I had no way of really knowing how old the data on their server
really was.

Regards,

Richard Broersma Jr.

Re: Long term database archival

From
Ron Johnson
Date:

Agent M wrote:
[snip]
>
> But the data from 35 years ago wasn't stored in Ingres and, if
> it's important, it won't stay in Ingres. The data shifts from
> format to format as technology progresses.

Ingres has been around for longer than you think: about 20 years.

So, the data has been converted one time in 35 years.  Pretty damned
stable if you ask me.

Another example: the on-disk structure of RDB/VMS has remained
stable ever since v1.0 in 1984.  That means that upgrading from
major version to major version (even when new datatypes and index
structures have been added) is a quick, trivial process.

Companies with lots of important data like that.

> It seemed to me that the OP wanted some format that would be
> readable in 20 years. No one can guarantee anything like that.

ASCII will be here in 20 years.  So will EBCDIC.  As will UTF.

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.

Re: Long term database archival

From
"Dann Corbit"
Date:
> -----Original Message-----
> From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-
> owner@postgresql.org] On Behalf Of Ron Johnson
> Sent: Thursday, July 06, 2006 5:26 PM
> To: Postgres general mailing list
> Subject: Re: [GENERAL] Long term database archival
>
>
> Agent M wrote:
> > Will postgresql be a viable database in 20 years? Will SQL be used
> > anywhere in 20 years? Are you sure 20 years is your ideal backup
> duration?
>
> SQL was used 20 years ago, why not 20 years from now?
>
> I can't see needing data from 10 years ago, but you never know.
> Thank $DEITY for microfilm; otherwise, we'd not know a whole lot
> about what happened 150 years ago.

The company I work for does lots of business with OpenVMS systems
running RMS, Rdb, and DBMS and IBM Mainframes running VSAM, IMS, etc.
along with many other 'ancient' database systems.

We have customers with Rdb version 4.x (around 15 years old, IIRC) and
RMS and VSAM formats from the 1980s.

Suppose, for instance, that you run a sawmill.  The software for your
sawmill was written in 1985.  In 1991, you did a hardware upgrade to a
VAX 4100, but did not upgrade your Rdb version (since it was debugged
and performed adequately).

Your software can completely keep up with the demands of the sawmill.
It even runs payroll.  The workers got tired of the RS232 terminals and
so you did a client server upgrade using PCs as terminals in 1999, but
kept your VAX 4100 minicomputer running Rdb with no changes.  You
upgraded from Xentis to Crystal Reports in 2003, but using OLEDB drivers
means you did not have to touch anything on your server.

Sound far-fetched?  It's not uncommon in the least.  Furthermore, a
million dollar upgrade to a shiny new system and software might not
increase productivity at all.

It's the data that contains all the value.  The hardware becomes
obsolete when it can no longer keep up with business needs.

Re: Long term database archival

From
Ron Johnson
Date:

Dann Corbit wrote:
>> -----Original Message-----
>> From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-
>> owner@postgresql.org] On Behalf Of Ron Johnson
>> Sent: Thursday, July 06, 2006 5:26 PM
>> To: Postgres general mailing list
>> Subject: Re: [GENERAL] Long term database archival
>>
>>
>> Agent M wrote:
>>> Will postgresql be a viable database in 20 years? Will SQL be used
>>> anywhere in 20 years? Are you sure 20 years is your ideal backup
>> duration?
>>
>> SQL was used 20 years ago, why not 20 years from now?
>>
>> I can't see needing data from 10 years ago, but you never know.
>> Thank $DEITY for microfilm; otherwise, we'd not know a whole lot
>> about what happened 150 years ago.
>
> The company I work for does lots of business with OpenVMS systems
> running RMS, Rdb, and DBMS and IBM Mainframes running VSAM, IMS, etc.
> along with many other 'ancient' database systems.
>
> We have customers with Rdb version 4.x (around 15 years old, IIRC) and
> RMS and VSAM formats from the 1980s.

Wow, that *is* ancient.  Rdb 4.2 was 1993, though.  "Only" 13 years.

Snicker.

> Suppose, for instance, that you run a sawmill.  The software for your
> sawmill was written in 1985.  In 1991, you did a hardware upgrade to a
> VAX 4100, but did not upgrade your Rdb version (since it was debugged
> and performed adequately).
>
> Your software can completely keep up with the demands of the sawmill.
> It even runs payroll.  The workers got tired of the RS232 terminals and
> so you did a client server upgrade using PCs as terminals in 1999, but
> kept your VAX 4100 minicomputer running Rdb with no changes.  You
> upgraded from Xentis to Crystal Reports in 2003, but using OLEDB drivers
> means you did not have to touch anything on your server.
>
> Sound far-fetched?  It's not uncommon in the least.  Furthermore, a
> million dollar upgrade to a shiny new system and software might not
> increase productivity at all.
>
> It's the data that contains all the value.  The hardware becomes
> obsolete when it can no longer keep up with business needs.

DEC surely did build VAX h/w to last.  Much higher quality than the
cheapo industry standard stuff they use now.  And, IMO, VAX/VMS was
a heck of a lot more stable written in Bliss and Macro than
Alpha/VMS ported to C.

I'd be worried, though, about the disk drives, so would push for
migration to Charon-VAX running on an x86 server.

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.

Re: Long term database archival

From
Ben
Date:

On Thu, 6 Jul 2006, Dann Corbit wrote:

> It's the data that contains all the value.  The hardware becomes
> obsolete when it can no longer keep up with business needs.


..... or can no longer be repaired. :)

Re: Long term database archival

From
Csaba Nagy
Date:
On Thu, 2006-07-06 at 20:57, Karl O. Pinc wrote:
> Hi,
>
> What is the best pg_dump format for long-term database
> archival?  That is, what format is most likely to
> be able to be restored into a future PostgreSQL
> cluster.

> Should we want to restore a 20 year old backup
> nobody's going to want to be messing around with
> decoding a "custom" format dump if it does not
> just load all by itself.

Karl, I would say that if you really want data from 20 years ago, keep
it in the custom format, along with a set of the sources of postgres
which created the dump. Then in 20 years, when you need it, you'll
compile the sources and load the data into the original postgres
version... of course you might need to also keep an image of the current
OS and the hardware you're running on if you really want to be sure it
will work in 20 years :-)

Cheers,
Csaba.



Re: Long term database archival

From
Shane Ambler
Date:
On 7/7/2006 17:49, "Csaba Nagy" <nagy@ecircle-ag.com> wrote:

> On Thu, 2006-07-06 at 20:57, Karl O. Pinc wrote:
>> Hi,
>>
>> What is the best pg_dump format for long-term database
>> archival?  That is, what format is most likely to
>> be able to be restored into a future PostgreSQL
>> cluster.
>
>> Should we want to restore a 20 year old backup
>> nobody's going to want to be messing around with
>> decoding a "custom" format dump if it does not
>> just load all by itself.
>
> Karl, I would say that if you really want data from 20 years ago, keep
> it in the custom format, along with a set of the sources of postgres
> which created the dump. then in 20 years when you'll need it, you'll
> compile the sources and load the data in the original postgres
> version... of course you might need to also keep an image of the current
> OS and the hardware you're running on if you really want to be sure it
> will work in 20 years :-)
>
> Cheers,
> Csaba.
>

Depending on the size of the data (if it isn't too large) you could consider
creating a new database for archives, maybe even one for each year.

This can be on an old server or a backup server instead of the production
one.

Unless the data is too large, you can dump/restore the archive data to a new
pg version as you upgrade, meaning the data will always be available and you
won't have any format issues when you want to retrieve the data.
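
A minimal sketch of that upgrade-time migration (host and database names
are invented; the new cluster's tools do the dumping):

    # create the database on the new cluster first, then pipe the dump across
    createdb -h newhost archivedb
    pg_dump -h oldhost archivedb | psql -h newhost -d archivedb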



Re: Long term database archival

From
Tino Wildenhain
Date:
Csaba Nagy wrote:
...
> Karl, I would say that if you really want data from 20 years ago, keep
> it in the custom format, along with a set of the sources of postgres
> which created the dump. then in 20 years when you'll need it, you'll
> compile the sources and load the data in the original postgres
> version... of course you might need to also keep an image of the current
> OS and the hardware you're running on if you really want to be sure it
> will work in 20 years :-)

No need - you will just emulate the whole hardware in 20 years ;-)

Regards
Tino

Re: Long term database archival

From
Ron Johnson
Date:

Ben wrote:
>
>
> On Thu, 6 Jul 2006, Dann Corbit wrote:
>
>> It's the data that contains all the value.  The hardware becomes
>> obsolete when it can no longer keep up with business needs.
>
>
> ..... or can no longer be repaired. :)

http://www.softresint.com/charon-vax/index.htm

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.

Re: Long term database archival

From
Richard Broersma Jr
Date:
> of course you might need to also keep an image of the current
> OS and the hardware you're running on if you really want to be sure it
> will work in 20 years :-)

I think that in twenty years most of us will be more worried about our retirement than
the long-term data concerns of the companies we will no longer be working for. :-D

Of course, some of us who really enjoy what we do for work might prefer to "die with our work
boots on."

Regards,

Richard Broersma Jr.


Re: Long term database archival

From
Steve Atkins
Date:
On Jul 7, 2006, at 1:19 AM, Csaba Nagy wrote:

> On Thu, 2006-07-06 at 20:57, Karl O. Pinc wrote:
>> Hi,
>>
>> What is the best pg_dump format for long-term database
>> archival?  That is, what format is most likely to
>> be able to be restored into a future PostgreSQL
>> cluster.
>
>> Should we want to restore a 20 year old backup
>> nobody's going to want to be messing around with
>> decoding a "custom" format dump if it does not
>> just load all by itself.
>
> Karl, I would say that if you really want data from 20 years ago, keep
> it in the custom format, along with a set of the sources of postgres
> which created the dump. then in 20 years when you'll need it, you'll
> compile the sources and load the data in the original postgres
> version... of course you might need to also keep an image of the
> current
> OS and the hardware you're running on if you really want to be sure it
> will work in 20 years :-)

I've been burned by someone doing that, and then being unable to
find a BCPL compiler.

So don't do that.

Store them in a nice, neutral ASCII format, along with all the
documentation. If you can't imagine extracting the
data with a small perl script and less than a day's work today,
then your successor will likely curse your name in 20 years'
time.

Cheers,
   Steve


Re: Long term database archival

From
Karsten Hilbert
Date:
On Fri, Jul 07, 2006 at 09:09:22AM -0700, Richard Broersma Jr wrote:

> I think that in twenty years most of us will be more worried about our retirement than
> the long-term data concerns of the companies we will no longer be working for. :-D

You may want to take precautions now such that you start
getting *more* healthy towards retirement rather than less,
because your old medical records cannot be accessed any longer.

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

Getting Primary Key Value After Insert

From
"Adam"
Date:
I'm inserting data into two tables; the second table has a foreign key that
points to the primary key of the first table.

After I insert a row into the first table, I need to take the primary key
value created in the "SERIAL" column and store it so I can insert it as the
foreign key value in the second table.

What is the best way to get the value of the primary key ( SERIAL data
type ) of the row I inserted?


Re: Getting Primary Key Value After Insert

From
"Adam"
Date:
I think I found the answer: you use the CURRVAL() function.

Just to cover all the bases, consider this scenario in chronological order:
1.  You insert data and the primary key is set to 20.
2.  Someone else inserts data and the next key is set to 21.
3.  If you call currval(), will it return 20?  I would think it does.


----- Original Message -----
To: "Postgres general mailing list" <pgsql-general@postgresql.org>
Sent: Saturday, July 08, 2006 11:28 PM
Subject: [GENERAL] Getting Primary Key Value After Insert


> I'm inserting data into two tables; the second table has a foreign key
> that points to the primary key of the first table.
>
> After I insert a row into the first table, I need to take the primary key
> value created in the "SERIAL" column and store it so I can insert it as the
> foreign key value in the second table.
>
> What is the best way to get the value of the primary key ( SERIAL data
> type ) of the row I inserted?
>


Re: Getting Primary Key Value After Insert

From
Richard Broersma Jr
Date:
> I think I found the answer, you use the CURRVAL() function.
>
> Just to cover all the bases, consider this scenario in chronological order:
> 1.  You insert data and the primary key is set to 20.
> 2.  Someone else inserts data and the next key is set to 21.
> 3.  If you call currval() will it return 20?  I would think it does.

My understanding is that it will, provided you are within a transaction.  The following thread
shows a variation on how to accomplish what you want:
http://archives.postgresql.org/pgsql-sql/2006-07/msg00095.php

Regards,

Richard Broersma Jr.

Re: Getting Primary Key Value After Insert

From
Greg Stark
Date:
Richard Broersma Jr <rabroersma@yahoo.com> writes:

> > 3.  If you call currval() will it return 20?  I would think it does.

Yes it does.

> My understanding is that it will, provided you are within a transaction.

As long as you're in the same session you're fine. You would have to go out of
your way to break it; if you're using some sort of connection pooling, for example,
you wouldn't want to pull a fresh connection from the pool between the INSERT
and the currval() call.

--
greg

Re: Getting Primary Key Value After Insert

From
Richard Broersma Jr
Date:
> > > 3.  If you call currval() will it return 20?  I would think it does.
>
> Yes it does.
>
> > My understanding is that it will provided your are within a transaction.
>
> As long as you're in the same session you're fine. You would have to go out of
> your way to break it but if you're using some sort of connection pooling you
> wouldn't want to pull a fresh connection from the pool, for example.

Just to clarify, currval() is isolated by the session and not necessarily by a transaction?

Regards,

Richard Broersma Jr.

Re: Getting Primary Key Value After Insert

From
Douglas McNaught
Date:
Richard Broersma Jr <rabroersma@yahoo.com> writes:

> Just to clarify, currval() is isolated by the session and not
> necessarily by a transaction?

Yes, this is spelled out quite clearly in the docs if you care to read
them.  :)

-Doug

Re: Getting Primary Key Value After Insert

From
Martijn van Oosterhout
Date:
On Sat, Jul 08, 2006 at 11:02:26PM -0700, Richard Broersma Jr wrote:
> > > > 3.  If you call currval() will it return 20?  I would think it does.
> >
> > Yes it does.
> >
> > > My understanding is that it will provided your are within a transaction.
> >
> > As long as you're in the same session you're fine. You would have to go out of
> > your way to break it but if you're using some sort of connection pooling you
> > wouldn't want to pull a fresh connection from the pool, for example.
>
> Just to clarify,  currval() is isolated by the session on not necessarily by a transaction?

Yes, currval() returns the value given by the last nextval() on that
sequence in the current session.
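
A minimal sketch of the pattern being discussed (table and column names
are invented; the implicit sequence behind the SERIAL column "id" on table
"parent" is named parent_id_seq):

    CREATE TABLE parent (id SERIAL PRIMARY KEY, name text);
    CREATE TABLE child  (id SERIAL PRIMARY KEY,
                         parent_id integer REFERENCES parent (id));

    INSERT INTO parent (name) VALUES ('example');
    -- currval() returns the value nextval() just generated in this
    -- session, regardless of inserts made by other sessions
    INSERT INTO child (parent_id) VALUES (currval('parent_id_seq'));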

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.


Re: Getting Primary Key Value After Insert

From
"Nikolay Samokhvalov"
Date:
As an alternative, you may forget about the sequence name and create a
rule for your table - a simple rule that does a 'SELECT idColumnName'
on every INSERT action.
After trying several approaches I've chosen this way in my projects.
It's a better way if you have some framework (an R2O layer or something).

On 7/9/06, Adam <adam@spatialsystems.org> wrote:
> I'm inserting data into two tables; the second table has a foreign key that
> points to the primary key of the first table.
>
> After I insert a row into the first table, I need to take the primary key
> value created in the "SERIAL" column and store it so I can insert it as the
> foreign key value in the second table.
>
> What is the best way to get the value of the primary key ( SERIAL data
> type ) of the row I inserted?
>
>


--
Best regards,
Nikolay

Re: Long term database archival

From
"Karl O. Pinc"
Date:
On 07/12/2006 09:25:45 AM, Jan Wieck wrote:
> On 7/6/2006 8:03 PM, Karl O. Pinc wrote:
>
>> On 07/06/2006 06:14:39 PM, Florian G. Pflug wrote:
>>> Karl O. Pinc wrote:
>>>> Hi,
>>>>
>>>> What is the best pg_dump format for long-term database
>>>> archival?  That is, what format is most likely to
>>>> be able to be restored into a future PostgreSQL
>>>> cluster.
>>
>>> Anyway, 20 years is a _long_, _long_ time.

> The best way is to not only backup the data. With todays VM
> technology it should be easy enough to backup a virtual disk that
> contains a full OS and everything install for every major Postgres
> release.

I think that that's the answer: put the whole OS and db on a
bootable CD or DVD.  In 20 years there'll surely be no
problem running the whole thing from RAM, so media access
speed should not be an issue.

The only thing to worry about is if the underlying architecture
goes away entirely.  I suppose if you choose something common
there'll be an emulator.

Karl <kop@meme.com>
Free Software:  "You don't pay back, you pay forward."
                  -- Robert A. Heinlein


Re: Long term database archival

From
Jan Wieck
Date:
On 7/6/2006 8:03 PM, Karl O. Pinc wrote:

> On 07/06/2006 06:14:39 PM, Florian G. Pflug wrote:
>> Karl O. Pinc wrote:
>>> Hi,
>>>
>>> What is the best pg_dump format for long-term database
>>> archival?  That is, what format is most likely to
>>> be able to be restored into a future PostgreSQL
>>> cluster.
>
>> Anyway, 20 years is a _long_, _long_ time.
>
> Yes, but our data goes back over 30 years now
> and is never deleted, only added to, and I
> recently had occasion to want to look at a
> backup from 1994-ish.  So, yeah we probably do
> really want backups for that long.  They
> probably won't get used, but we'll feel better.

The best way is to not only back up the data. With today's VM technology
it should be easy enough to back up a virtual disk that contains a full
OS and everything installed for every major Postgres release. Note that
you would have trouble configuring and compiling a Postgres 4.2 these
days because you'd need to get some seriously old tools running first
(like bmake). And 4.2 is only what, 12 years old?

That way, you would be sure that you can actually load the data into the
right DB version.


Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #

Re: Long term database archival

From
Richard Broersma Jr
Date:
> I think that that's the answer: put the whole OS and db on a
> bootable CD or DVD.  In 20 years there'll surely be no
> problem running the whole thing from RAM, so media access
> speed should not be an issue.

You are correct.  I thought that CDs only had a shelf life of 5 to 10 years.  This is true for new,
unrecorded discs.  But once they are recorded, they are expected to last 70 to 200 years.

http://www.osta.org/technology/cdqa13.htm

Regards,

Richard Broersma Jr.

Re: Long term database archival

From
Ron Johnson
Date:

Karl O. Pinc wrote:
>
> On 07/12/2006 09:25:45 AM, Jan Wieck wrote:
>> On 7/6/2006 8:03 PM, Karl O. Pinc wrote:
>>
>>> On 07/06/2006 06:14:39 PM, Florian G. Pflug wrote:
>>>> Karl O. Pinc wrote:
>>>>> Hi,
>>>>>
>>>>> What is the best pg_dump format for long-term database
>>>>> archival?  That is, what format is most likely to
>>>>> be able to be restored into a future PostgreSQL
>>>>> cluster.
>>>
>>>> Anyway, 20 years is a _long_, _long_ time.
>
>> The best way is to not only backup the data. With todays VM technology
>> it should be easy enough to backup a virtual disk that contains a full
>> OS and everything install for every major Postgres release.
>
> I think that that's the answer, put the whole OS and db on a
> bootable cd or DVD.

It's *highly* *unlikely* that a DVD-R will be readable in 20 years.

CSV files on DLT are a definite plan.

Also, an "archive server", with a simple CPU, not "much" RAM, and
slots for 3.5" SATA drives.  Populate it with large, affordable
drives as needed.

>                      In 20 years there'll surely be no
> problem running the whole thing from RAM, so media access
> speed should not be an issue.
>
> The only thing to worry about is if the underlying architecture
> goes away entirely.  I suppose if you choose something common
> there'll be an emulator.

That's how people are still running VAX/VMS...

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.

Re: Long term database archival

From
Jan Wieck
Date:
On 7/12/2006 12:18 PM, Tim Hart wrote:

> Wouldn't you run into driver problems if you tried to restore a 20 year old
> image? After all, you probably won't be using the same hardware in 20
> years...

I can't even find the same hardware I bought "last year". That's one of
the reasons why I use VMware on my laptop. It has a hardware abstraction
layer that presents default XVGA and Soundblaster cards etc. to the
guest OS. When I buy a new laptop, I just install VMware on the new
thing, copy over the virtual machines and fire them up. They don't even
notice that they run on entirely different hardware.


Jan

>
> -----Original Message-----
> From: pgsql-general-owner@postgresql.org
> [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Jan Wieck
> Sent: Wednesday, July 12, 2006 9:26 AM
> To: Karl O. Pinc
> Cc: Florian G. Pflug; pgsql-general@postgresql.org; thm@duke.edu
> Subject: Re: [GENERAL] Long term database archival
>
> On 7/6/2006 8:03 PM, Karl O. Pinc wrote:
>
>> On 07/06/2006 06:14:39 PM, Florian G. Pflug wrote:
>>> Karl O. Pinc wrote:
>>>> Hi,
>>>>
>>>> What is the best pg_dump format for long-term database
>>>> archival?  That is, what format is most likely to
>>>> be able to be restored into a future PostgreSQL
>>>> cluster.
>>
>>> Anyway, 20 years is a _long_, _long_ time.
>>
>> Yes, but our data goes back over 30 years now
>> and is never deleted, only added to, and I
>> recently had occasion to want to look at a
>> backup from 1994-ish.  So, yeah we probably do
>> really want backups for that long.  They
>> probably won't get used, but we'll feel better.
>
> The best way is to not only backup the data. With todays VM technology
> it should be easy enough to backup a virtual disk that contains a full
> OS and everything install for every major Postgres release. Note that
> you would have troubles configuring and compiling a Postgres 4.2 these
> days because you'd need to get some seriously old tools running first
> (like bmake). And 4.2 is only what, 12 years old?
>
> That way, you would be sure that you can actually load the data into the
> right DB version.
>
>
> Jan
>


--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #

Re: Long term database archival

From
"Joshua D. Drake"
Date:
On Wednesday 12 July 2006 08:37, Richard Broersma Jr wrote:
> > I think that that's the answer, put the whole OS and db on a
> > bootable cd or DVD.  In 20 years they'll surely be no
> > problem running the whole thing from RAM so media access
> > speed should not be an issue.
>
> You are correct.  I thought that CD only had a shelf life of 5 to 10 years.
>   This is true for new unrecorded discs.  But once they are recorded they
> are expected to last 70 to 200 years.

And you can always make more than one copy... vacuum seal one, put it in the
freezer ;)

>
> http://www.osta.org/technology/cdqa13.htm
>
> Regards,
>
> Richard Broersma Jr.
>

--
   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
             http://www.commandprompt.com/



Re: Long term database archival

From
"Tim Hart"
Date:
Wouldn't you run into driver problems if you tried to restore a 20 year old
image? After all, you probably won't be using the same hardware in 20
years...

-----Original Message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org] On Behalf Of Jan Wieck
Sent: Wednesday, July 12, 2006 9:26 AM
To: Karl O. Pinc
Cc: Florian G. Pflug; pgsql-general@postgresql.org; thm@duke.edu
Subject: Re: [GENERAL] Long term database archival

On 7/6/2006 8:03 PM, Karl O. Pinc wrote:

> On 07/06/2006 06:14:39 PM, Florian G. Pflug wrote:
>> Karl O. Pinc wrote:
>>> Hi,
>>>
>>> What is the best pg_dump format for long-term database
>>> archival?  That is, what format is most likely to
>>> be able to be restored into a future PostgreSQL
>>> cluster.
>
>> Anyway, 20 years is a _long_, _long_ time.
>
> Yes, but our data goes back over 30 years now
> and is never deleted, only added to, and I
> recently had occasion to want to look at a
> backup from 1994-ish.  So, yeah we probably do
> really want backups for that long.  They
> probably won't get used, but we'll feel better.

The best way is to not only backup the data. With todays VM technology
it should be easy enough to backup a virtual disk that contains a full
OS and everything install for every major Postgres release. Note that
you would have troubles configuring and compiling a Postgres 4.2 these
days because you'd need to get some seriously old tools running first
(like bmake). And 4.2 is only what, 12 years old?

That way, you would be sure that you can actually load the data into the
right DB version.


Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #




Re: Long term database archival

From
Greg Stark
Date:
Jan Wieck <JanWieck@yahoo.com> writes:

> I can't even find the same hardware I bought "last year". That's one of the
> reasons why I use VMware on my laptop. It has a hardware abstraction layer that
> presents default XVGA and Soundblaster cards etc. to the guest OS. When I buy a
> new laptop, I just install VMware on the new thing, copy over the virtual
> machines and fire them up. They don't even notice that they run on entirely
> different hardware.

How does that help? You still need to get VMWare's host OS working with the
new hardware.

--
greg

Fwd: Long term database archival

From
"Marco Bizzarri"
Date:
---------- Forwarded message ----------
From: Marco Bizzarri <marco.bizzarri@gmail.com>
Date: Jul 12, 2006 9:03 PM
Subject: Re: [GENERAL] Long term database archival
To: "Karl O. Pinc" <kop@meme.com>


Long term archival of electronic data is a BIG problem in the
archivist community. I remember, a few years ago, a paper describing
the problem of historical (20+ year old) data which were running the
risk of being lost simply for lack of proper hardware.

What I would suggest is to explore the problem by first searching for
experience and research already done on the topic. The topic
itself is big, and it is not simply a matter of how you dump the
data.

A little exploration in the archivist community could produce some
useful results for your problem.

Regards
Marco

On 7/6/06, Karl O. Pinc <kop@meme.com> wrote:
> Hi,
>
> What is the best pg_dump format for long-term database
> archival?  That is, what format is most likely to
> be able to be restored into a future PostgreSQL
> cluster.
>
> Mostly, we're interested in dumps done with
> --data-only, and have preferred the
> default (-F c) format.  But this form is somewhat more
> opaque than a plain text SQL dump, which is bound
> to be supported forever "out of the box".
> Should we want to restore a 20 year old backup
> nobody's going to want to be messing around with
> decoding a "custom" format dump if it does not
> just load all by itself.
>
> Is the answer different if we're dumping the
> schema as well as the data?
>
> Thanks.
>
> Karl <kop@meme.com>
> Free Software:  "You don't pay back, you pay forward."
>                   -- Robert A. Heinlein
>
>


--
Marco Bizzarri
http://notenotturne.blogspot.com/


--
Marco Bizzarri
http://notenotturne.blogspot.com/

Re: Fwd: Long term database archival

From
"Leif B. Kristensen"
Date:
On Wednesday 12. July 2006 21:03, Marco Bizzarri wrote:
>
>Long term archival of electronic data is a BIG problem in the
>archivist community. I remember, a few years ago, a paper describing
>the problem of historical (20+ years old) data which were running the
>risk of being lost simply because of lacking of proper hardware.

I've been reading probate protocols from the 17th century in the original,
which are still in perfect condition. It's quite a contrast to the
present discussion, talking about a few decades. It's very serious,
really. A few centuries from now, our age may appear as a black hole in
history, because of our monumental lack of care in the preservation of
contemporary data.
--
Leif Biberg Kristensen | Registered Linux User #338009
http://solumslekt.org/ | Cruising with Gentoo/KDE

Re: Long term database archival

From
Ron Johnson
Date:

Tim Hart wrote:
> Wouldn't you run into driver problems if you tried to restore a
> 20 year old image? After all, you probably won't be using the
> same hardware in 20 years...

Scarily, the current PC architecture is just a set of add-ons and
extensions to that of the original IBM PC model 5150.

I'm sure that an 8-bit ISA SoundBlaster card from 1989 would still
work in Linux.

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.

Re: Fwd: Long term database archival

From
Ron Johnson
Date:

Leif B. Kristensen wrote:
> On Wednesday 12. July 2006 21:03, Marco Bizzarri wrote:
>> Long term archival of electronic data is a BIG problem in the
>> archivist community. I remember, a few years ago, a paper
>> describing the problem of historical (20+ years old) data which
>> were running the risk of being lost simply because of lacking
>> of proper hardware.
>
> I've been reading probate protocols from the 17th century in
> original, which are still in perfect condition. It's quite a
> contrast to the present discussion, talking about a few decades.
> It's very serious, really. A few centuries from now, our age may
> appear as a black hole in history, because of our monumental lack
> of care in the preservation of contemporary data.

I wonder if the 9/11 Commission Report was printed on acid-free paper.

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.

Re: Long term database archival

From
Michelle Konzack
Date:
On 2006-07-06 19:25:38, Ron Johnson wrote:

> SQL was used 20 years ago, why not 20 years from now?
>
> I can't see needing data from 10 years ago, but you never know.

I have a database (currently around 370 GByte of historical data,
covering the last 14600 years, but most of it from the last 100 years)
and I want to be able to read the backups in 20 or 30 years too...

Same for the 3.4 TByte of binary data (original docs, mp3, wav, ...)

So I am running into the same trouble finding a really good backup
strategy.

Oh yes, since my MAIN history table had exploded to around
95% of the database size, I have split it up into 10-year chunks... which
can easily be backed up as a text/plain dump.  But now I have problems
doing searches with "tsearch2" across over 1000 tables.

Greetings
    Michelle Konzack
    Systemadministrator
    Tamay Dogan Network
    Debian GNU/Linux Consultant


--
Linux-User #280138 with the Linux Counter, http://counter.li.org/
##################### Debian GNU/Linux Consultant #####################
Michelle Konzack   Apt. 917                  ICQ #328449886
                   50, rue de Soultz         MSM LinuxMichi
0033/6/61925193    67100 Strasbourg/France   IRC #Debian (irc.icq.com)