Re: Compression of full-page-writes

From: Michael Paquier
Subject: Re: Compression of full-page-writes
Date:
Msg-id: CAB7nPqTt2cjxO4cXAhGX20bFMtQRB-oq5eGv6SuqjBftqFtv7g@mail.gmail.com
In response to: Re: Compression of full-page-writes  (Heikki Linnakangas <hlinnakangas@vmware.com>)
List: pgsql-hackers
On Tue, Dec 9, 2014 at 5:33 AM, Heikki Linnakangas
<hlinnakangas@vmware.com> wrote:
> On 12/08/2014 09:21 PM, Andres Freund wrote:
>>
>> I still think that just compressing the whole record if it's above a
>> certain size is going to be better than compressing individual
>> parts. Michael argued that that'd be complicated because of the varying
>> size of the required 'scratch space'. I don't buy that argument
>> though. It's easy enough to simply compress all the data in some fixed
>> chunk size. I.e. always compress 64kb in one go. If there's more
>> compress that independently.
>
>
> Doing it in fixed-size chunks doesn't help - you have to hold onto the
> compressed data until it's written to the WAL buffers.
>
> But you could just allocate a "large enough" scratch buffer, and give up if
> it doesn't fit. If the compressed data doesn't fit in e.g. 3 * 8kb, it
> didn't compress very well, so there's probably no point in compressing it
> anyway. Now, an exception to that might be a record that contains something
> else than page data, like a commit record with millions of subxids, but I
> think we could live with not compressing those, even though it would be
> beneficial to do so.
Another thing to consider is the possibility of controlling, at the GUC
level, the maximum size of record that we allow to compress.
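Heikki's fallback scheme above can be sketched roughly as follows (a minimal illustration only, not PostgreSQL's actual pglz code path; the 3 * 8 kB limit comes from his example, the function name and the use of zlib are hypothetical stand-ins for the real compressor):

```python
import os
import zlib

# Scratch-buffer limit from Heikki's example: 3 * 8 kB.
SCRATCH_LIMIT = 3 * 8192

def maybe_compress(record: bytes):
    """Compress a WAL-record-like blob, but give up if the result
    does not fit in the fixed scratch buffer.

    Returns (was_compressed, data)."""
    compressed = zlib.compress(record)
    if len(compressed) > SCRATCH_LIMIT:
        # Compressed poorly (e.g. a commit record with millions of
        # subxids may be mostly incompressible): store uncompressed.
        return (False, record)
    return (True, compressed)

# Page-like data (long runs of zeros) compresses well and fits:
ok, _ = maybe_compress(b"\x00" * 32768)        # ok is True

# Incompressible data larger than the scratch buffer is left alone:
ok, _ = maybe_compress(os.urandom(32768))      # ok is False
```

The point of the fixed limit is that the scratch buffer can be allocated once, up front, regardless of record size; records that would not benefit from compression anyway are simply written out as-is.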
-- 
Michael


