On Fri, 9 Jul 1999, Vadim Mikheev wrote:
>
> Bruce Momjian wrote:
> >
> > > Bruce Momjian wrote:
> > > >
> > > > If we get wide tuples, we could just throw all large objects into one
> > > > table, and have an index on it. We can then vacuum it to compact space, etc.
> > >
> > > Storing a 2Gb LO in a table is not a good thing.
> > >
> > > Vadim
> > >
> >
> > Ah, but we have segmented tables now. It will auto-split at 1 gig.
>
> Well, now consider an update of a 2Gb row!
> I'm worried not about the non-overwriting storage manager, but about
> writing a 2Gb log record to WAL - we won't be able to do that, for sure.
What I'm kinda curious about is *why* you would want to store a LO in the
table in the first place? And, consequently, as Bruce had
suggested...index it? Unless something has changed recently that I
totally missed, the only time the index would be used is if a query
matched a) the start of the string (i.e. ^<string>) or b) the complete
string (i.e. ^<string>$) ...
So what benefit would an index be on a LO?
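[For anyone following along: the point above is that a btree index only helps
anchored pattern matches. A minimal sketch - the table and column names here
are hypothetical, not from the thread:]

```sql
-- Hypothetical table holding large text values.
CREATE TABLE documents (id int, body text);
CREATE INDEX documents_body_idx ON documents (body);

-- Anchored at the start of the string: the index CAN be used,
-- because the match is equivalent to a range scan on the btree.
SELECT id FROM documents WHERE body LIKE 'BEGIN:%';

-- Complete-string match (equality): the index CAN be used.
SELECT id FROM documents WHERE body = 'the whole value';

-- Unanchored substring search: the index CANNOT be used;
-- this degenerates to a sequential scan over every (huge) row.
SELECT id FROM documents WHERE body LIKE '%needle%';
```

Since nobody searches a multi-megabyte object by its leading bytes or by full
equality, an index over LO contents buys essentially nothing.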
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org