Re: Performance problem in aset.c

From: JanWieck@t-online.de (Jan Wieck)
Subject: Re: Performance problem in aset.c
Date:
Msg-id: 200007121127.NAA23451@hot.jw.home
In reply to: Re: Performance problem in aset.c  (Alfred Perlstein <bright@wintelcom.net>)
List: pgsql-hackers
Alfred Perlstein wrote:
> * Tom Lane <tgl@sss.pgh.pa.us> [000711 22:23] wrote:
> > Philip Warner <pjw@rhyme.com.au> writes:
> > > Can you maintain one free list for each power of 2 (which it might already
> > > be doing by the look of it), and always allocate the max size for the list.
> > > Then when you want a 10k chunk, you get a 16k chunk, but you know from the
> > > request size which list to go to, and anything on the list will satisfy the
> > > requirement.
> >
> > That is how it works for small chunks (< 1K with the current
> > parameters).  I don't think we want to do it that way for really
> > huge chunks though.
> >
> > Maybe the right answer is to eliminate the gap between small chunks
> > (which basically work as Philip sketches above) and huge chunks (for
> > which we fall back on malloc).  The problem is with the stuff in
> > between, for which we have a kind of half-baked approach...
>
> Er, are you guys seriously layering your own general purpose allocator
> over the OS/c library allocator?
>
> Don't do that!
>
> The only time you may want to do this is if you're doing a special purpose
> allocator like a zone or slab allocator, otherwise it's a pessimization.
> The algorithms you're discussing to fix these leaks have been implemented
> in almost any modern allocator that I know of.
>
> Sorry if i'm totally off base, but "for which we fall back on malloc"
> makes me wonder what's going on here.
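
To illustrate the scheme Philip sketches in the quote above: a request is rounded up to the next power of 2, and that rounded size selects the free list. A minimal sketch (the helper names are illustrative, not aset.c's actual code):

```c
#include <stddef.h>

/* Map a request size to a power-of-2 size class.  A 10k request
 * falls in the 16k class, so any chunk on that class's free list
 * can satisfy it.  8 bytes is an assumed smallest chunk size. */
static int
size_class(size_t size)
{
    int     idx = 0;
    size_t  chunk = 8;

    while (chunk < size)
    {
        chunk <<= 1;
        idx++;
    }
    return idx;
}

/* The chunk size actually handed out for a given class. */
static size_t
class_size(int idx)
{
    return (size_t) 8 << idx;
}
```

So `class_size(size_class(10 * 1024))` yields 16384: the 10k request gets a 16k chunk, wasting some memory but making free-list lookup trivial.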
   To clarify this:

   I developed this in aset.c because we use a lot (really a lot) of very small chunks being palloc()'d. Every allocation must be remembered in some linked list so we know what to free at memory context reset or destruction. In the old version, every allocation, however small, was done with malloc() and remembered separately in one huge list for the context. Traversing this list was awfully slow when a context said bye, and I saw no way to speed up that traversal.
 
   With the current concept, only big chunks are remembered individually. Small allocations aren't tracked that accurately, and memory context destruction can simply throw away all the blocks allocated for it.
 
   At the time I implemented it, it gained a speedup of ~10% on the regression test. It's an approach that gains speed by wasting memory.
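
   The idea above can be sketched as follows. This is a simplified toy, not aset.c itself (names and the fixed 4k block size are assumptions, and error checking is omitted): small allocations are carved out of large blocks, the context remembers only the blocks, and reset is one walk over a short block list instead of one free() per palloc().

```c
#include <stdlib.h>

typedef struct Block
{
    struct Block *next;
    size_t  used;
    char    data[4096];
} Block;

typedef struct Context
{
    Block  *blocks;             /* all blocks owned by this context */
} Context;

/* Bump-allocate from the newest block; start a new block if full. */
static void *
ctx_alloc(Context *cxt, size_t size)
{
    Block  *blk = cxt->blocks;

    if (blk == NULL || blk->used + size > sizeof(blk->data))
    {
        blk = malloc(sizeof(Block));
        blk->next = cxt->blocks;
        blk->used = 0;
        cxt->blocks = blk;
    }
    blk->used += size;
    return blk->data + blk->used - size;
}

/* Reset: free whole blocks, never individual chunks. */
static void
ctx_reset(Context *cxt)
{
    while (cxt->blocks)
    {
        Block  *next = cxt->blocks->next;

        free(cxt->blocks);
        cxt->blocks = next;
    }
}
```

   A thousand small ctx_alloc() calls produce only a handful of blocks, so ctx_reset() does a handful of free() calls rather than a thousand.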
 


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #




In pgsql-hackers by date:

Previous
From: JanWieck@t-online.de (Jan Wieck)
Date:
Message: Re: Vacuum only with 20% old tuples
Next
From: JanWieck@t-online.de (Jan Wieck)
Date:
Message: Re: AW: update on TOAST status'