Re: extensible external toast tuple support & snappy prototype

From: Andres Freund
Subject: Re: extensible external toast tuple support & snappy prototype
Date:
Msg-id: 20130607143053.GJ29964@alap2.anarazel.de
In response to: Re: extensible external toast tuple support & snappy prototype  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: extensible external toast tuple support & snappy prototype  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On 2013-06-07 10:04:15 -0400, Robert Haas wrote:
> On Wed, Jun 5, 2013 at 11:01 AM, Andres Freund <andres@2ndquadrant.com> wrote:
> > On 2013-05-31 23:42:51 -0400, Robert Haas wrote:
> >> > This should allow for fairly easy development of a new compression
> >> > scheme for out-of-line toast tuples. It will *not* work for compressed
> >> > inline tuples (i.e. VARATT_4B_C). I am not convinced that that is a
> >> > problem or that if it is, that it cannot be solved separately.
> >
> >> Seems pretty sensible to me.  The patch is obviously WIP but the
> >> direction seems fine to me.
> >
> > So, I played a bit more with this, with an eye towards getting this into
> > a non WIP state, but: While I still think the method for providing
> > indirect external Datum support is fine, I don't think my sketch for
> > providing extensible compression is.
> 
> I didn't really care about doing (and don't really want to do) both
> things in the same patch.  I just didn't want the patch to shut the
> door to extensible compression in the future.

Oh. I don't want to actually commit it in the same patch either. But to
keep the road to extensible compression open we kinda need to know what
the way to do it is. Turns out it's an independent thing that doesn't
reuse any of the respective infrastructures.

I only went so far as to actually implement the compression because a) my
previous thoughts about how it could work were bogus and b) it was fun.

Turns out the benefits are imo big enough to make it worth pursuing
further.

> > 2) Do we want to build infrastructure for more than 3 compression
> > algorithms? We could delay that decision till we add the 3rd.
> 
> I think we should leave the door open, but I don't have a compelling
> desire to get too baroque for v1.  Still, maybe if the first byte has
> a 1 in the high-bit, the next 7 bits should be defined as specifying a
> compression algorithm.  3 compression algorithms would probably last
> us a while; but 127 should last us, in effect, forever.

The problem is that, to distinguish such a marker from pglz data, we need
a byte whose two high bits are guaranteed unset, and on little endian that
is actually the fourth byte of a toast datum (the high byte of the rawsize
word). So we would either need to store the algorithm in the 5th byte or
invent some more complicated encoding scheme.

So I think we should just define '00' as pglz, '01' as xxx, '10' as yyy,
and '11' as storing the scheme in the next byte.
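
For illustration only, here is a rough sketch (made-up names, not part of
the patch) of what that two-bit tag could look like if we work on the whole
32-bit rawsize word rather than on a single byte, which sidesteps the
endianness issue above:

    #include <stdint.h>

    /*
     * Hypothetical sketch: tag the compression method in the two high bits
     * of the 32-bit rawsize word at the start of a compressed toast datum.
     * rawsize is always < 1GB, so those two bits are free; '00' keeps
     * meaning plain pglz, '11' means the real id follows in the next byte.
     */
    #define TOAST_COMPRESS_PGLZ      0x0u  /* '00': existing pglz format */
    #define TOAST_COMPRESS_ALGO_1    0x1u  /* '01': e.g. snappy */
    #define TOAST_COMPRESS_ALGO_2    0x2u  /* '10': e.g. lz4 */
    #define TOAST_COMPRESS_EXTENDED  0x3u  /* '11': id stored in next byte */

    static inline uint32_t
    toast_compress_set_method(uint32_t rawsize, uint32_t method)
    {
        /* rawsize fits in 30 bits, leaving the top two for the method */
        return (rawsize & 0x3FFFFFFFu) | (method << 30);
    }

    static inline uint32_t
    toast_compress_get_method(uint32_t header)
    {
        return header >> 30;
    }

    static inline uint32_t
    toast_compress_get_rawsize(uint32_t header)
    {
        return header & 0x3FFFFFFFu;
    }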

> > 3) Surely choosing the compression algorithm via GUC ala SET
> > toast_compression_algo = ... isn't the way to go. I'd say a storage
> > attribute is more appropriate?
> 
> The way we do caching right now supposes that attoptions will be
> needed only occasionally.  It might need to be revised if we're going
> to need it all the time.  Or else we might need to use a dedicated
> pg_class column.

Good point. It probably belongs right beside attstorage; that seems to be
the most consistent choice anyway.

Alternatively, if we only add one form of compression, we can just
always store it as snappy/lz4/....
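
Purely as an illustration of the "right beside attstorage" idea, a
hypothetical sketch of resolving a single-char per-column setting; neither
the column nor these identifiers exist anywhere yet:

    typedef enum ToastCompressionId
    {
        TOAST_COMPRESSION_PGLZ,
        TOAST_COMPRESSION_SNAPPY
    } ToastCompressionId;

    /* map a hypothetical per-attribute char, stored next to attstorage,
     * to the compression method used for that column's toasted values */
    static ToastCompressionId
    compression_for_attribute(char attcompression)
    {
        switch (attcompression)
        {
            case 's':
                return TOAST_COMPRESSION_SNAPPY;
            case 'p':
            default:
                return TOAST_COMPRESSION_PGLZ;  /* fall back to pglz */
        }
    }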

Greetings,

Andres Freund

--
Andres Freund                       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


