Re: Analysis on backend-private memory usage (and a patch)

From: Tom Lane
Subject: Re: Analysis on backend-private memory usage (and a patch)
Msg-id: 10717.1378390933@sss.pgh.pa.us
In reply to: Re: Analysis on backend-private memory usage (and a patch)  (Heikki Linnakangas <hlinnakangas@vmware.com>)
Responses: Re: Analysis on backend-private memory usage (and a patch)  (Heikki Linnakangas <hlinnakangas@vmware.com>)
List: pgsql-hackers
Heikki Linnakangas <hlinnakangas@vmware.com> writes:
> I ran pgbench for ten seconds, and printed the number of tuples in each 
> catcache after that:
> [ very tiny numbers ]

I find these numbers a bit suspicious.  For example, we must have hit at
least 13 different system catalogs, and more than that many indexes, in
the course of populating the syscaches you show as initialized.  How is
it there are only 4 entries in the RELOID cache?  I wonder if there were
cache resets going on.

A larger issue is that pgbench might not be too representative.  In
a quick check, I find that cache 37 (OPERNAMENSP) starts out empty,
and contains 1 entry after "select 2=2", which is expected since 
the operator-lookup code will start by looking for int4 = int4 and
will get an exact match.  But after "select 2=2::numeric" there are
61 entries, as a byproduct of having thumbed through every binary
operator named "=" to resolve the ambiguous match.  We went so far
as to install another level of caching in front of OPERNAMENSP because
it was getting too expensive to deal with heavily-overloaded operators
like that one.  In general, we've had to spend enough sweat on optimizing
catcache searches to make me highly dubious of any claim that the caches
are usually almost empty.
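
To make that concrete, here is a rough, self-contained sketch of the idea: put a small exact-match cache in front of the expensive scan over every candidate of a given name, so the common case (int4 = int4) never pays for the heavily-overloaded one.  All names, type OIDs, and sizes below are made up for illustration; this is not the actual PostgreSQL code:

/*
 * Toy sketch of an exact-match lookaside cache in front of an expensive
 * overload-resolution step.  Collisions simply overwrite the slot; this is
 * purely illustrative, not the real operator-cache implementation.
 */
#include <stdio.h>
#include <string.h>

typedef struct OprCacheEntry
{
    char        opname[8];      /* operator name, e.g. "=" */
    int         lefttype;       /* hypothetical type OIDs */
    int         righttype;
    int         opoid;          /* resolved operator, 0 = empty slot */
} OprCacheEntry;

#define OPRCACHE_SIZE 64
static OprCacheEntry oprcache[OPRCACHE_SIZE];

/* stand-in for the expensive path: scanning all candidates named opname */
static int
resolve_operator_slowly(const char *opname, int lefttype, int righttype)
{
    printf("slow path: scanning all candidates for \"%s\"\n", opname);
    return 96;                  /* pretend we settled on some operator OID */
}

static int
lookup_operator(const char *opname, int lefttype, int righttype)
{
    unsigned    h = ((unsigned) lefttype * 31 + (unsigned) righttype) % OPRCACHE_SIZE;
    OprCacheEntry *e = &oprcache[h];

    if (e->opoid != 0 &&
        strcmp(e->opname, opname) == 0 &&
        e->lefttype == lefttype && e->righttype == righttype)
        return e->opoid;        /* exact-match hit: no candidate scan */

    e->opoid = resolve_operator_slowly(opname, lefttype, righttype);
    snprintf(e->opname, sizeof(e->opname), "%s", opname);
    e->lefttype = lefttype;
    e->righttype = righttype;
    return e->opoid;
}

int
main(void)
{
    lookup_operator("=", 23, 23);   /* slow path once */
    lookup_operator("=", 23, 23);   /* subsequent lookups hit the cache */
    return 0;
}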

I understand your argument that resizing is so cheap that it might not
matter, but nonetheless reducing these caches as far as you're suggesting
strikes me as penny-wise and pound-foolish.  I'm okay with setting
them on the small side rather than on the large side as they are now, but
not with choosing sizes that are guaranteed to result in resizing cycles
during startup of any real app.
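
To put a rough number on it: with the usual double-the-buckets-when-full scheme, a table that starts tiny has to rebuild itself several times just to absorb a typical startup working set, while a sensibly sized one never does.  A toy simulation (the growth threshold and entry counts are hypothetical, not the real catcache figures):

/*
 * Count how many rehash passes a doubling hash table does while a backend
 * warms up, for different initial sizes.  Each individual resize may be
 * cheap, but a too-small starting size guarantees several of them during
 * startup of any real application.
 */
#include <stdio.h>

static int
count_resizes(int initial_buckets, int entries_at_startup)
{
    int         nbuckets = initial_buckets;
    int         resizes = 0;

    for (int nentries = 1; nentries <= entries_at_startup; nentries++)
    {
        /* double the bucket array when average chain length would exceed 2 */
        if (nentries > nbuckets * 2)
        {
            nbuckets *= 2;
            resizes++;
        }
    }
    return resizes;
}

int
main(void)
{
    int         warmup_entries = 200;   /* hypothetical startup working set */

    printf("start at   2 buckets: %d resizes\n", count_resizes(2, warmup_entries));
    printf("start at  16 buckets: %d resizes\n", count_resizes(16, warmup_entries));
    printf("start at 128 buckets: %d resizes\n", count_resizes(128, warmup_entries));
    return 0;
}

With 200 startup entries that prints 6, 3, and 0 resizes respectively.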

> PS. Once the hashes are resized on demand, perhaps we should get rid of 
> the move-to-the-head behavior in SearchCatCache. If all the buckets 

-1.  If the bucket in fact has just one member, dlist_move_head reduces to
just one comparison.  And again I argue that you're optimizing for the
wrong case.  Pure luck will result in some hash chains being (much) longer
than the average, and if we don't do move-to-front we'll get hurt there.
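
For the record, here is a stripped-down illustration (a toy singly-linked chain, not the real dlist/SearchCatCache code): with a one-member bucket the hit costs a single key comparison and the "move" is a no-op, while on an unluckily long chain move-to-front pulls the hot entry back to the head.

/*
 * Toy move-to-front bucket search.  A single-member bucket is found after
 * one comparison with no relinking; on a longer chain, a repeatedly-hit
 * entry migrates to the head so later probes stay cheap.
 */
#include <stdio.h>

typedef struct Entry
{
    int         key;
    struct Entry *next;
} Entry;

typedef struct Bucket
{
    Entry      *head;
} Bucket;

static Entry *
bucket_search(Bucket *bucket, int key)
{
    Entry      *prev = NULL;

    for (Entry *e = bucket->head; e != NULL; e = e->next)
    {
        if (e->key == key)
        {
            /* move-to-front: a no-op if e is already the head */
            if (prev != NULL)
            {
                prev->next = e->next;
                e->next = bucket->head;
                bucket->head = e;
            }
            return e;
        }
        prev = e;
    }
    return NULL;                /* cache miss */
}

int
main(void)
{
    Entry       c = {3, NULL};
    Entry       b = {2, &c};
    Entry       a = {1, &b};
    Bucket      bucket = {&a};

    bucket_search(&bucket, 3);  /* 3 moves to the head of the chain */
    bucket_search(&bucket, 3);  /* now found after a single comparison */

    for (Entry *e = bucket.head; e != NULL; e = e->next)
        printf("%d ", e->key);
    printf("\n");               /* prints: 3 1 2 */
    return 0;
}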
        regards, tom lane


