Re: profiling connection overhead

From: Robert Haas
Subject: Re: profiling connection overhead
Msg-id: AANLkTikXMdR9-YsBq5oJkSk2Ua-t-78_E_CmBs-R=v0K@mail.gmail.com
In response to: Re: profiling connection overhead  (Andres Freund <andres@anarazel.de>)
Responses: Re: profiling connection overhead  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: profiling connection overhead  (Andres Freund <andres@anarazel.de>)
Re: profiling connection overhead  (Bruce Momjian <bruce@momjian.us>)
List: pgsql-hackers
On Wed, Nov 24, 2010 at 3:53 PM, Andres Freund <andres@anarazel.de> wrote:
> On Wednesday 24 November 2010 21:47:32 Robert Haas wrote:
>> On Wed, Nov 24, 2010 at 3:14 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> > Robert Haas <robertmhaas@gmail.com> writes:
>> >> Full results, and call graph, attached.  The first obvious fact is
>> >> that most of the memset overhead appears to be coming from
>> >> InitCatCache.
>> >
>> > AFAICT that must be the palloc0 calls that are zeroing out (mostly)
>> > the hash bucket headers.  I don't see any real way to make that cheaper
>> > other than to cut the initial sizes of the hash tables (and add support
>> > for expanding them later, which is lacking in catcache ATM).  Not
>> > convinced that that creates any net savings --- it might just save
>> > some cycles at startup in exchange for more cycles later, in typical
>> > backend usage.
>> >
>> > Making those hashtables expansible wouldn't be a bad thing in itself,
>> > mind you.
>>
>> The idea I had was to go the other way and say, hey, if these hash
>> tables can't be expanded anyway, let's put them on the BSS instead of
>> heap-allocating them.  Any new pages we request from the OS will be
>> zeroed anyway, but with palloc we then have to re-zero the allocated
>> block because palloc can return memory that's been used,
>> freed, and reused.  However, for anything that only needs to be
>> allocated once and never freed, and whose size can be known at compile
>> time, that's not an issue.
>>
>> In fact, it wouldn't be that hard to relax the "known at compile time"
>> constraint either.  We could just declare:
>>
>> char lotsa_zero_bytes[NUM_ZERO_BYTES_WE_NEED];
>>
>> ...and then peel off chunks.
> Won't this just cause loads of additional page faults after fork() when those
> pages are used the first time and then a second time when first written to (to
> copy them)?

Aren't we incurring those page faults anyway, for whatever memory
palloc is handing out?  The heap is no different from bss; we just
move the pointer with sbrk().

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

