Discussion: large objects, was: Restoring 8.0 db to 8.1

large objects, was: Restoring 8.0 db to 8.1

From: "Harald Armin Massa"
> Not likely to change in the future, no.  Slony uses triggers to manage the
> changed rows.  We can't fire triggers on large object events, so there's no
> way for Slony to know what happened.

that leads me to a question I have often wanted to ask:

is there any reason to create NEW PostgreSQL databases using Large
Objects, now that there are bytea and TOAST? (besides legacy needs)

as far as I have read, they need special care in dump/restore, force
the use of special APIs when creating them, and do not work with Slony ...
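
As a rough illustration of those "special APIs", here is a hypothetical libpq sketch (not from the thread) of creating and filling a large object through the lo_* client functions instead of a plain parameterized INSERT; the connection string is a placeholder and error handling is omitted:

/*
 * Hypothetical sketch: create a large object and stream data into it
 * via libpq's lo_* functions.  "dbname=test" is a placeholder DSN.
 */
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>             /* INV_READ / INV_WRITE */

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");          /* placeholder DSN */
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    PQclear(PQexec(conn, "BEGIN"));                     /* lo_* calls need a transaction */

    Oid lobj = lo_creat(conn, INV_READ | INV_WRITE);    /* allocate a new large object */
    int fd   = lo_open(conn, lobj, INV_WRITE);
    lo_write(conn, fd, "hello", 5);                     /* stream the payload into it */
    lo_close(conn, fd);

    /* The application stores the OID 'lobj' in one of its own tables;
     * with bytea the data would simply be a column value in that table. */
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}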

Harald
--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!

Re: large objects, was: Restoring 8.0 db to 8.1

From: "Scott Marlowe"
On Jan 8, 2008 9:01 AM, Harald Armin Massa <haraldarminmassa@gmail.com> wrote:
> > Not likely to change in the future, no.  Slony uses triggers to manage the
> > changed rows.  We can't fire triggers on large object events, so there's no
> > way for Slony to know what happened.
>
> that leads me to a question I have often wanted to ask:
>
> is there any reason to create NEW PostgreSQL databases using Large
> Objects, now that there are bytea and TOAST? (besides legacy needs)
>
> as far as I have read, they need special care in dump/restore, force
> the use of special APIs when creating them, and do not work with Slony ...

The primary advantage of large objects is that you can read them byte
by byte, like a file.
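
A minimal libpq sketch of that file-like access, reading the object in small chunks rather than pulling the whole value at once; the connection string and the OID are placeholders and error handling is omitted:

/*
 * Hypothetical sketch: stream a large object out chunk by chunk,
 * much like reading a file.  DSN and OID below are placeholders.
 */
#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>             /* INV_READ / INV_WRITE */

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");          /* placeholder DSN */
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    PQclear(PQexec(conn, "BEGIN"));                     /* lo_* calls need a transaction */

    Oid  lobj_oid = 16391;                              /* placeholder OID */
    int  fd = lo_open(conn, lobj_oid, INV_READ);
    char buf[8192];
    int  n;

    /* Read the object a chunk at a time, like fread() on a file. */
    while ((n = lo_read(conn, fd, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, n, stdout);

    lo_close(conn, fd);
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}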

Re: large objects, was: Restoring 8.0 db to 8.1

From: Erik Jones
On Jan 8, 2008, at 9:13 AM, Scott Marlowe wrote:

> On Jan 8, 2008 9:01 AM, Harald Armin Massa
> <haraldarminmassa@gmail.com> wrote:
>>> Not likely to change in the future, no.  Slony uses triggers to
>>> manage the
>>> changed rows.  We can't fire triggers on large object events, so
>>> there's no
>>> way for Slony to know what happened.
>>
>> that leads me to a question I have often wanted to ask:
>>
>> is there any reason to create NEW PostgreSQL databases using Large
>> Objects, now that there are bytea and TOAST? (besides legacy needs)
>>
>> as far as I have read, they need special care in dump/restore, force
>> the use of special APIs when creating them, and do not work with Slony ...
>
> The primary advantage of large objects is that you can read them byte
> by byte, like a file.

Also, with bytea (and any other variable-length data type) there is
still a 1 GB limit per value imposed by TOAST.  Large objects will get
you up to 2 GB for a single field.

Erik Jones

DBA | Emma®
erik@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com

Re: large objects, was: Restoring 8.0 db to 8.1

From: Chris Browne
haraldarminmassa@gmail.com ("Harald Armin Massa") writes:
>> Not likely to change in the future, no.  Slony uses triggers to manage the
>> changed rows.  We can't fire triggers on large object events, so there's no
>> way for Slony to know what happened.
>
> that leads me to a question I have often wanted to ask:
>
> is there any reason to create NEW PostgreSQL databases using Large
> Objects, now that there are bytea and TOAST? (besides legacy needs)
>
> as far as I have read, they need special care in dump/restore, force
> the use of special APIs when creating them, and do not work with Slony ...

They are useful if you really need to be able to efficiently access
portions of large objects.

For instance, if you find that you frequently need to modify large
objects in place, that should be much more efficient through the LOB
interface than through a bytea column.

It ought to be a lot more efficient to lo_lseek() to a position,
lo_read() a few bytes, and lo_write() a few bytes than it is to pull
the entire 42MB object out, read off a fragment, and then alter the
tuple.
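
A minimal libpq sketch of that lo_lseek()/lo_read()/lo_write() pattern, assuming a placeholder connection string and OID, with error handling omitted:

/*
 * Hypothetical sketch: edit a few bytes of a large object in place
 * instead of rewriting the whole value.  DSN and OID are placeholders.
 */
#include <stdio.h>                      /* SEEK_SET */
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>             /* INV_READ / INV_WRITE */

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");          /* placeholder DSN */
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    PQclear(PQexec(conn, "BEGIN"));                     /* lo_* calls need a transaction */

    Oid  lobj_oid = 16391;                              /* placeholder OID */
    int  fd = lo_open(conn, lobj_oid, INV_READ | INV_WRITE);
    char marker[4];

    /* Seek 10 MB into the object and read a few bytes... */
    lo_lseek(conn, fd, 10 * 1024 * 1024, SEEK_SET);
    lo_read(conn, fd, marker, sizeof(marker));

    /* ...then overwrite those few bytes in place, leaving the rest untouched. */
    lo_lseek(conn, fd, 10 * 1024 * 1024, SEEK_SET);
    lo_write(conn, fd, "DONE", 4);

    lo_close(conn, fd);
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}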

That being said, I generally prefer bytea because it doesn't force me
into using a pretty weird "captive interface" to access the data.

If I found myself needing to make wacky updates on a large object, I'd
wonder if it wouldn't be better to have it expressed as a set of
tuples so that I'd not have a large object in the first place...
--
(format nil "~S@~S" "cbbrowne" "linuxdatabases.info")
http://www3.sympatico.ca/cbbrowne/x.html
"...  They are  not ``end  users'' until  someone presupposes  them as
such, as witless cattle." -- <craig@onshore.com>