RE: Using per-transaction memory contexts for storing decoded tuples

From: Hayato Kuroda (Fujitsu)
Subject: RE: Using per-transaction memory contexts for storing decoded tuples
Date:
Msg-id: TYAPR01MB5692177C9AA8A7433654009BF5712@TYAPR01MB5692.jpnprd01.prod.outlook.com
In reply to: Re: Using per-transaction memory contexts for storing decoded tuples  (Masahiko Sawada <sawada.mshk@gmail.com>)
Responses: Re: Using per-transaction memory contexts for storing decoded tuples
List: pgsql-hackers
Dear Sawada-san, Amit,

> > So, decoding a large transaction with many smaller allocations can
> > have ~2.2% overhead with a smaller block size (say 8kB vs 8MB). In
> > real workloads, we will have fewer such large transactions or a mix of
> > small and large transactions. That will make the overhead much less
> > visible. Does this mean that we should invent some strategy to defrag
> > the memory at some point during decoding or use any other technique? I
> > don't find this overhead above the threshold to invent something
> > fancy. What do others think?
> 
> I agree that the overhead will be much less visible in real workloads.
> +1 to use a smaller block (i.e. 8kB). It's easy to backpatch to old
> branches (if we agree) and to revert the change in case something
> happens.

That sounds okay to me as well. Just to confirm: you will not push the rb_mem_block_size
patch and will instead just replace SLAB_LARGE_BLOCK_SIZE with SLAB_DEFAULT_BLOCK_SIZE,
right? It seems that only reorderbuffer.c uses the LARGE macro, so it could be removed
as well.
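
For reference, a minimal sketch of what that replacement would look like in
ReorderBufferAllocate() (assuming the tup_context allocation on current HEAD;
the surrounding code may differ on back branches):

    /* src/include/utils/memutils.h (existing definitions) */
    #define SLAB_DEFAULT_BLOCK_SIZE    (8 * 1024)          /* 8kB */
    #define SLAB_LARGE_BLOCK_SIZE      (8 * 1024 * 1024)   /* 8MB */

    /*
     * src/backend/replication/logical/reorderbuffer.c,
     * ReorderBufferAllocate(): use 8kB blocks for decoded tuples
     * instead of the 8MB SLAB_LARGE_BLOCK_SIZE blocks.
     */
    buffer->tup_context = GenerationContextCreate(new_ctx,
                                                  "Tuples",
                                                  SLAB_DEFAULT_BLOCK_SIZE,
                                                  SLAB_DEFAULT_BLOCK_SIZE,
                                                  SLAB_DEFAULT_BLOCK_SIZE);

With no remaining users of SLAB_LARGE_BLOCK_SIZE, its definition in memutils.h
could then be dropped as well.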

Best regards,
Hayato Kuroda
FUJITSU LIMITED

