Paul Ramsey <pramsey@cleverelephant.ca> writes:
> On Thu, Nov 1, 2018 at 2:29 PM Stephen Frost <sfrost@snowman.net> wrote:
>> and secondly, why we wouldn't consider handling a non-zero offset. A
>> non-zero offset would, of course, still require decompressing from the
>> start and then just throwing away what we skip over, but we're going to
>> be doing that anyway, aren't we? Why not stop when we get to the end, at
>> least, and save ourselves the trouble of decompressing the rest and then
>> throwing it away.
> I was worried about changing the pg_lz code too much because it scared
> me, but debugging some stuff made me read it more closely so I fear it
> less now, and doing interior slices seems not unreasonable, so I will
> give it a try.
I think Stephen was just envisioning decompressing from offset 0 up to the end of what's needed, and then discarding any data before the start of what's needed; at least, that's what had occurred to me.
Understood, that makes lots of sense, and it turns out to be a very small change :)
Allocating only what is needed also makes things faster still, which is nice and no big surprise.
Some light testing seems to show no obvious regression in decompression speed for the usual "decompress it all" case.