Re: Optimize partial TOAST decompression
From | Tomas Vondra
---|---
Subject | Re: Optimize partial TOAST decompression
Date |
Msg-id | 20191114130055.dkrjcveigeviil7n@development
In reply to | Re: Optimize partial TOAST decompression (Rushabh Lathia <rushabh.lathia@gmail.com>)
Responses | Re: Optimize partial TOAST decompression
List | pgsql-hackers
On Thu, Nov 14, 2019 at 03:27:42PM +0530, Rushabh Lathia wrote:
>Today I noticed strange behaviour, consider the following test:
>
>postgres@126111=#create table foo ( a text );
>CREATE TABLE
>postgres@126111=#insert into foo values ( repeat('PostgreSQL is the
>world''s best database and leading by an Open Source Community.', 8000));
>INSERT 0 1
>
>postgres@126111=#select substring(a from 639921 for 81) from foo;
> substring
>-----------
>
>(1 row)
>

Hmmm. I think the issue is heap_tuple_untoast_attr_slice is using the
wrong way to determine compressed size in the VARATT_IS_EXTERNAL_ONDISK
branch. It does this

    max_size = pglz_maximum_compressed_size(sliceoffset + slicelength,
                                            TOAST_COMPRESS_SIZE(attr));

But for the example you've posted, TOAST_COMPRESS_SIZE(attr) returns 10,
which is obviously bogus because the TOAST table contains ~75kB of data.
I think it should be doing this instead:

    max_size = pglz_maximum_compressed_size(sliceoffset + slicelength,
                                            toast_pointer.va_extsize);

At least that fixes it for me. I wonder if this actually explains the
crashes 540f3168091 was supposed to fix, but it just masked them
instead.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services