Re: [HACKERS] HASH_CHUNK_SIZE vs malloc rounding
| From | Tom Lane |
|---|---|
| Subject | Re: [HACKERS] HASH_CHUNK_SIZE vs malloc rounding |
| Date | |
| Msg-id | 29770.1511495642@sss.pgh.pa.us |
| In reply to | Re: [HACKERS] HASH_CHUNK_SIZE vs malloc rounding (Thomas Munro <thomas.munro@enterprisedb.com>) |
| Responses | Re: [HACKERS] HASH_CHUNK_SIZE vs malloc rounding |
| List | pgsql-hackers |
Thomas Munro <thomas.munro@enterprisedb.com> writes:
> On Tue, Nov 29, 2016 at 6:27 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> We could imagine providing an mmgr API function along the lines of "adjust
>> this request size to the nearest thing that can be allocated efficiently".
>> That would avoid the need for callers to know about aset.c overhead
>> explicitly.  I'm not sure how it could deal with platform-specific malloc
>> vagaries though :-(
> Someone pointed out to me off-list that jemalloc's next size class
> after 32KB is in fact 40KB by default[1].  So PostgreSQL uses 25% more
> memory for hash joins than it thinks it does on some platforms.  Ouch.
> It doesn't seem that crazy to expose aset.c's overhead size to client
> code, does it?  Most client code wouldn't care, but things that are
> doing something closer to memory-allocator work themselves, like
> dense_alloc, could care.  It could deal with its own overhead itself,
> and subtract aset.c's overhead using a macro.
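For concreteness, here's that arithmetic as a standalone sketch.  The
numbers are assumptions: HASH_CHUNK_SIZE really is 32KB in nodeHash.c,
and the 40KB size class is the jemalloc default cited above, but the
48-byte aset.c header overhead is a guessed round figure that varies
by platform and build options.

#include <stdio.h>

/* Hypothetical illustration of the waste described above. */
#define HASH_CHUNK_SIZE      (32 * 1024)   /* dense_alloc's chunk size */
#define ASET_HEADER_OVERHEAD 48            /* assumed block+chunk headers */
#define JEMALLOC_NEXT_CLASS  (40 * 1024)   /* jemalloc's class above 32KB */

int
main(void)
{
	size_t	requested = HASH_CHUNK_SIZE + ASET_HEADER_OVERHEAD;
	size_t	allocated = JEMALLOC_NEXT_CLASS;	/* jemalloc rounds up */

	/* prints roughly: malloc asked for 32816, got 40960: 8144 wasted (25%) */
	printf("malloc asked for %zu, got %zu: %zu wasted (%.0f%%)\n",
		   requested, allocated, allocated - requested,
		   100.0 * (allocated - requested) / requested);
	return 0;
}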
Seeing that we now have several allocators with different overheads,
I think that exposing this directly to clients is exactly what not to do.
I still like the idea I sketched above of a context-type-specific function
to adjust a request size to something that's efficient.
But there's still the question of how we know what an efficient-sized
malloc request is.  Is there good reason to suppose that powers of 2 are OK?
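A minimal sketch of the kind of context-type-specific function I have
in mind, assuming (questionably, per the above) that power-of-2 malloc
requests are efficient.  The function name and the overhead constant
are hypothetical, not an existing mmgr entry point:

#include <stddef.h>

typedef size_t Size;	/* stands in for PostgreSQL's typedef in c.h */

/* Assumed aset.c per-allocation header overhead; the real figure lives
 * in aset.c and differs across platforms and build options. */
#define ASET_HEADER_OVERHEAD 48

/*
 * Shrink a request so that the underlying malloc request (request plus
 * header overhead) comes out to exactly a power of 2.
 */
static Size
AllocSetAdjustRequestSize(Size request)
{
	Size	target = request + ASET_HEADER_OVERHEAD;
	Size	pow2 = 1;

	/* find the largest power of 2 not exceeding target */
	while (pow2 <= target / 2)
		pow2 *= 2;

	/* hand back what remains once the headers take their share */
	return (pow2 > ASET_HEADER_OVERHEAD) ? pow2 - ASET_HEADER_OVERHEAD : request;
}

With something like that, dense_alloc could size its chunks as
AllocSetAdjustRequestSize(HASH_CHUNK_SIZE) (32720 bytes under these
assumptions), so the block aset.c ends up malloc'ing is exactly 32KB.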
        regards, tom lane
		