Re: Using per-transaction memory contexts for storing decoded tuples
From: Masahiko Sawada
Subject: Re: Using per-transaction memory contexts for storing decoded tuples
Date:
Msg-id: CAD21AoCJ=cRqQ_WYuE9-K4whqmtaQbfu2NUa-TF57MAWx+VvgA@mail.gmail.com
In reply to: RE: Using per-transaction memory contexts for storing decoded tuples ("Hayato Kuroda (Fujitsu)" <kuroda.hayato@fujitsu.com>)
Responses: Re: Using per-transaction memory contexts for storing decoded tuples
List: pgsql-hackers
On Wed, Oct 2, 2024 at 9:42 PM Hayato Kuroda (Fujitsu)
<kuroda.hayato@fujitsu.com> wrote:
>
> Dear Sawada-san, Amit,
>
> > > So, decoding a large transaction with many smaller allocations can
> > > have ~2.2% overhead with a smaller block size (say 8Kb vs 8MB). In
> > > real workloads, we will have fewer such large transactions or a mix of
> > > small and large transactions. That will make the overhead much less
> > > visible. Does this mean that we should invent some strategy to defrag
> > > the memory at some point during decoding or use any other technique? I
> > > don't find this overhead above the threshold to invent something
> > > fancy. What do others think?
> >
> > I agree that the overhead will be much less visible in real workloads.
> > +1 to use a smaller block (i.e. 8kB). It's easy to backpatch to old
> > branches (if we agree) and to revert the change in case something
> > happens.
>
> I also felt okay. Just to confirm - you do not push the rb_mem_block_size
> patch and just replace SLAB_LARGE_BLOCK_SIZE -> SLAB_DEFAULT_BLOCK_SIZE,
> right?

Right.

> It seems that only reorderbuffer.c uses the LARGE macro so that it can
> be removed.

I'm going to keep the LARGE macro since extensions might be using it.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com
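[Editor's note: for readers following along, a minimal sketch of the one-line change discussed above, assuming the decoded-tuple context in ReorderBufferAllocate() (reorderbuffer.c) is created with GenerationContextCreate() as in recent branches, and that the block-size macros match src/include/utils/memutils.h. This is an illustration of the proposal, not the exact committed patch.]

/* src/include/utils/memutils.h (existing definitions, for reference) */
#define SLAB_DEFAULT_BLOCK_SIZE     (8 * 1024)          /* 8kB */
#define SLAB_LARGE_BLOCK_SIZE       (8 * 1024 * 1024)   /* 8MB */

/* src/backend/replication/logical/reorderbuffer.c, ReorderBufferAllocate() */

/* Before: decoded tuples stored in 8MB generation-context blocks. */
buffer->tup_context = GenerationContextCreate(new_ctx,
                                              "Tuples",
                                              SLAB_LARGE_BLOCK_SIZE,
                                              SLAB_LARGE_BLOCK_SIZE,
                                              SLAB_LARGE_BLOCK_SIZE);

/*
 * After: 8kB blocks, so mostly-empty large blocks do not pin memory
 * after their transactions are freed; the thread above measured the
 * cost of the smaller block size at ~2.2% on a large transaction.
 */
buffer->tup_context = GenerationContextCreate(new_ctx,
                                              "Tuples",
                                              SLAB_DEFAULT_BLOCK_SIZE,
                                              SLAB_DEFAULT_BLOCK_SIZE,
                                              SLAB_DEFAULT_BLOCK_SIZE);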