Bruce Momjian wrote:
>
> > > If most joins, comparisons are done on the 10% in the main table, so
> > > much the better.
> >
> > Yes, but how would you want to judge which varsize value to
> > put onto the "secondary" relation, and which one to keep in
> > the "primary" table for fast comparisons?
>
> There is only one place in heap_insert that checks for tuple size and
> returns an error if it exceeds block size. I recommend when we exceed
> that we scan the tuple, and find the largest varlena type that is
> supported for long relations, and set the long bit and copy the data
> into the long table. Keep going until the tuple is small enough, and if
> not, throw an error on tuple size exceeded. Also, prevent indexed
> columns from being made long.
And prevent indexes from being created later if fields in some records
have already been made long?
Or would it be enough here to give out a warning?
Or should one try to re-pack those tuples?
Or, for tables that have mostly 10-char fields but an occasional 10K
field, we could possibly approach the indexes as currently proposed for
tables, i.e. make the index's data part point to the same LONG relation?
The latter would probably open another can of worms.
---------
Hannu