Re: Optimize partial TOAST decompression

From: Rushabh Lathia
Subject: Re: Optimize partial TOAST decompression
Date:
Msg-id: CAGPqQf3XeP8V9HEoHOUmejXnY+fuhBobPcHccrDQZ+wrZMiTFQ@mail.gmail.com
In reply to: Re: Optimize partial TOAST decompression  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List: pgsql-hackers


On Thu, Nov 14, 2019 at 6:30 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
On Thu, Nov 14, 2019 at 03:27:42PM +0530, Rushabh Lathia wrote:
>Today I noticed strange behaviour, consider the following test:
>
>postgres@126111=#create table foo ( a text );
>CREATE TABLE
>postgres@126111=#insert into foo values ( repeat('PostgreSQL is the
>world''s best database and leading by an Open Source Community.', 8000));
>INSERT 0 1
>
>postgres@126111=#select substring(a from 639921 for 81) from foo;
> substring
>-----------
>
>(1 row)
>

Hmmm. I think the issue is that heap_tuple_untoast_attr_slice determines
the compressed size the wrong way in the VARATT_IS_EXTERNAL_ONDISK
branch. It does this

     max_size = pglz_maximum_compressed_size(sliceoffset + slicelength,
                                             TOAST_COMPRESS_SIZE(attr));

But for the example you've posted, TOAST_COMPRESS_SIZE(attr) returns 10,
which is obviously bogus because the TOAST table contains ~75kB of data.
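
For context, the macro (in tuptoaster.c at the time; paraphrased here, so
treat it as a sketch rather than the exact source) measures whatever
varlena it is handed:

     #define TOAST_COMPRESS_HDRSZ      ((int32) sizeof(toast_compress_header))
     #define TOAST_COMPRESS_SIZE(ptr)  ((int32) VARSIZE(ptr) - TOAST_COMPRESS_HDRSZ)

In the VARATT_IS_EXTERNAL_ONDISK branch, attr is still the on-disk TOAST
pointer (an 18-byte struct), not the compressed data it points to, so the
macro yields 18 minus the 8-byte compression header, i.e. the bogus 10.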

I think it should be doing this instead:

     max_size = pglz_maximum_compressed_size(sliceoffset + slicelength,
                                             toast_pointer.va_extsize);

At least that fixes it for me.
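
To spell out why the bogus size produces an empty result: the second
argument of pglz_maximum_compressed_size() acts as an upper clamp on how
many compressed bytes the slice fetches and decompresses. Paraphrasing
the logic in src/common/pg_lzcompress.c (a sketch, not the exact code):

     /*
      * pglz needs at most one control byte per eight data bytes, so
      * reading this many compressed bytes is enough to reproduce the
      * first 'rawsize' raw bytes -- unless the clamp kicks in first.
      */
     compressed_size = (int32) (((uint64) rawsize * 9 + 8) / 8);
     compressed_size = Min(compressed_size, total_compressed_size);

With total_compressed_size misreported as 10, we fetch only 10 compressed
bytes, decompress next to nothing, and the substring comes back empty.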

I wonder if this actually explains the crashes that 540f3168091 was
supposed to fix; perhaps that commit merely masked them instead.


I tested the attached patch and that fixes the issue for me.

Thanks,


--
Rushabh Lathia
