Karel wrote:
> > Why is memory context per plan bad ?
>
> One context is more simple.
I don't see much complexity difference between one context per plan and one context for all, at least
if we do it transparently inside SPI_saveplan() and SPI_freeplan().
> We're talking about a *cache*. If an interface exists for this cache and
> all operations go through copyObject/freeObject, it has no restriction.
>
> Which operations would it restrict?
No restrictions I can see.
But I think one context per plan is still better. First, there is no leakage/multiref problem. Second,
there is a performance difference between explicitly pfree()'ing hundreds of small allocations (in a
freeObject() traversal) and just destroying a context. The changes I made to the MemoryContextAlloc stuff
for v6.5 (IIRC), using bigger blocks including padding/reuse for small allocations, caused a speedup of 5+%
for the entire regression test. That was only because it makes fewer real calls to malloc()/free(), and
context destruction no longer needs to process a huge list of all the (however small) allocations. It
simply throws away all blocks now.
This time, we are talking about a more complex, recursive freeObject(), switch()'ing on every node type
into separate, per-node-type functions and pfree()'ing all the little chunks. So there is at least
a difference in the first/second-level RAM cache lines required. And since that can simply be avoided by
using one context per plan, I vote for 1by1.
Then again, copyObject/freeObject must be fixed WRT leakage/multiref anyway.
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#========================================= wieck@debis.com (Jan Wieck) #