Re: Configurable FP_LOCK_SLOTS_PER_BACKEND

From: Tomas Vondra
Subject: Re: Configurable FP_LOCK_SLOTS_PER_BACKEND
Date:
Msg-id: 6a683f7a-5952-d832-8a50-bda3a0c7455c@enterprisedb.com
In reply to: Re: Configurable FP_LOCK_SLOTS_PER_BACKEND (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Configurable FP_LOCK_SLOTS_PER_BACKEND
Re: Configurable FP_LOCK_SLOTS_PER_BACKEND
List: pgsql-hackers

On 8/7/23 21:21, Robert Haas wrote:
> On Mon, Aug 7, 2023 at 3:02 PM Tomas Vondra
> <tomas.vondra@enterprisedb.com> wrote:
>>> I would also argue that the results are actually not that great,
>>> because once you get past 64 partitions you're right back where you
>>> started, or maybe worse off. To me, there's nothing magical about
>>> cases between 16 and 64 relations that makes them deserve special
>>> treatment - plenty of people are going to want to use hundreds of
>>> partitions, and even if you only use a few dozen, this isn't going to
>>> help as soon as you join two or three partitioned tables, and I
>>> suspect it hurts whenever it doesn't help.
>>
>> That's true, but doesn't that apply to any cache that can overflow? You
>> could make the same argument about the default value of 16 slots - why
>> not have just 8?
> 
> Yes and no. I mean, there are situations where when the cache
> overflows, you still get a lot of benefit out of the entries that you
> are able to cache, as when the frequency of access follows some kind
> of non-uniform distribution, Zipfian or decreasing geometrically or
> whatever. There are also situations where you can just make the cache
> big enough that as a practical matter it's never going to overflow. I
> can't think of a PostgreSQL-specific example right now, but if you
> find that a 10-entry cache of other people living in your house isn't
> good enough, a 200-entry cache should solve the problem for nearly
> everyone alive. If that doesn't cause a resource crunch, crank up the
> cache size and forget about it. But here we have neither of those
> situations. The access frequency is basically uniform, and the cache
> size needed to avoid overflows seems to be unrealistically large, at
> least given the current design. So I think that in this case upping
> the cache size figures to be much less effective than in some other
> cases.
> 

Why would the access frequency be uniform? In particular, there's a huge
variability in how long the locks need to exist - IIRC we may be keeping
locks for tables for a long time, but not for indexes. From this POV it
might be better to do fast-path locking for indexes, no?
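
For reference, here's roughly what the current mechanism amounts to - a
small fixed array right in PGPROC, searched linearly. This is a simplified
sketch, not the actual code (the real thing also packs lock modes into
fpLockBits and protects the slots with fpInfoLock), but it shows why the
16 slots are all-or-nothing once they fill up:

#define FP_LOCK_SLOTS_PER_BACKEND 16

typedef unsigned int Oid;
#define InvalidOid ((Oid) 0)

typedef struct FastPathSlots
{
    Oid relid[FP_LOCK_SLOTS_PER_BACKEND]; /* relation per slot, InvalidOid = free */
} FastPathSlots;

/* Return the slot already holding relid, else a free slot, else -1 (overflow). */
static int
fast_path_find_slot(const FastPathSlots *fp, Oid relid)
{
    int free_slot = -1;

    for (int i = 0; i < FP_LOCK_SLOTS_PER_BACKEND; i++)
    {
        if (fp->relid[i] == relid)
            return i;               /* already have this relation */
        if (fp->relid[i] == InvalidOid && free_slot < 0)
            free_slot = i;          /* remember first unused slot */
    }
    return free_slot;               /* -1 => fall back to the main lock table */
}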

> It's also a bit questionable whether "cache" is even the right word
> here. I'd say it isn't, because it's not like the information in the
> fast-path locking structures is a subset of the full information
> stored elsewhere. Whatever information is stored there is canonical
> for those entries.
> 

Right. Calling this a cache might be a bit misleading.

>> Yes, I agree. I don't know if this particular design would be the right
>> one (1000 elements seems a bit too much for something included right in
>> PGPROC). But yeah, something that flips from linear search to something
>> else would be reasonable.
> 
> Yeah ... or there could be a few slots in the PGPROC and then a bit
> indicating whether to jump to a larger shared memory structure located
> in a separate array. Not sure exactly.
> 

Maybe, but isn't that mostly what the regular non-fast-path locking
does? Wouldn't that defeat the whole purpose of fast-path locking?
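
Just to make sure we're talking about the same thing, here's a sketch of
how I read the proposal - a few inline slots plus a flag pointing at a
larger per-backend overflow area in shared memory. All of these names are
made up, nothing like this exists in the tree:

#define FP_INLINE_SLOTS   16      /* stays directly in PGPROC, linear scan */
#define FP_OVERFLOW_SLOTS 1024    /* separate per-backend shared-memory array */

typedef unsigned int Oid;

typedef struct FastPathLocks
{
    bool has_overflow;                  /* anything spilled past the inline slots? */
    Oid  inline_relid[FP_INLINE_SLOTS];
    Oid *overflow_relid;                /* points into the separate array; only
                                         * consulted when has_overflow is set, and
                                         * would need something better than a
                                         * linear scan to stay "fast" */
} FastPathLocks;

The overflow area would at least still be per-backend, i.e. no partition
locks on the shared hash - the question is whether searching it is still
cheap enough to deserve the name fast-path.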

regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


