Re: Do we need a ShmList implementation?

From: Heikki Linnakangas
Subject: Re: Do we need a ShmList implementation?
Date:
Msg-id: 4C978BC7.1050909@enterprisedb.com
In reply to: Re: Do we need a ShmList implementation?  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
List: pgsql-hackers
On 20/09/10 19:04, Kevin Grittner wrote:
> Heikki Linnakangas <heikki.linnakangas@enterprisedb.com> wrote:
>
>> In the SSI patch, you'd also need a way to insert an existing
>> struct into a hash table. You currently work around that by using
>> a hash element that contains only the hash key, and a pointer to
>> the SERIALIZABLEXACT struct. It isn't too bad I guess, but I find
>> it a bit confusing.
>
> Hmmm...  Mucking with the hash table implementation to accommodate
> that seems like it's a lot of work and risk for pretty minimal
> benefit.  Are you sure it's worth it?

No, I'm not sure at all.
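
For reference, the workaround in question looks roughly like this (a
sketch only; the actual type and field names in the patch may differ):

    /* The HTAB element holds just the hash key plus a pointer to the
     * struct we actually care about, because dynahash owns the memory
     * for its entries and an existing SERIALIZABLEXACT can't be
     * inserted directly.  Names here are illustrative. */
    typedef struct SerializableXidEntry
    {
        TransactionId     xid;      /* hash key */
        SERIALIZABLEXACT *sxact;    /* the real per-xact data */
    } SerializableXidEntry;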

>> Well, we generally try to avoid dynamic structures in shared
>> memory, because shared memory can't be resized.
>
> But don't HTAB structures go beyond their estimated sizes as needed?

Yes, but not in a very smart way. The memory allocated for hash table 
elements is never freed. So if you use up all the "slush fund" shared 
memory for SIREAD locks, it can't be used for anything else anymore, 
even if the SIREAD locks are later released.
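
To illustrate (a sketch, not the patch's actual code): a shared hash
table is created at startup with ShmemInitHash(), and dynahash grabs
more shared memory as the table grows past its initial size, but an
element, once allocated, only ever goes back onto the table's private
freelist:

    HASHCTL info;

    memset(&info, 0, sizeof(info));
    info.keysize = sizeof(SerializableXidTag);      /* made-up types */
    info.entrysize = sizeof(SerializableXidEntry);
    info.hash = tag_hash;

    SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
                                        init_table_size,
                                        max_table_size,
                                        &info,
                                        HASH_ELEM | HASH_FUNCTION);

    /* HASH_REMOVE puts an element back on the table's own freelist;
     * its shared memory is never returned to the general pool. */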

>> Any chance of collapsing together entries of already-committed
>> transactions in the SSI patch, to put an upper limit on the number
>> of shmem list entries needed? If you can do that, then a simple
>> array allocated at postmaster startup will do fine.
>
> I suspect it can be done, but I'm quite sure that any such scheme
> would increase the rate of serialization failures.  Right now I'm
> trying to see how much I can do to *decrease* the rate of
> serialization failures, so I'm not eager to go there.  :-/

I see. It's worth spending some mental power on; an upper limit would 
make life a lot easier. It doesn't matter much if it's 2*max_connections 
or 100*max_connections, as long as it's finite.
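
With a finite bound, the allocation could be as simple as this
(hypothetical names, and the multiplier is whatever we settle on):

    Size    size;
    bool    found;

    /* any finite k * max_connections will do for sizing at startup */
    size = mul_size(10 * MaxBackends, sizeof(SERIALIZABLEXACT));
    PredXactArray = (SERIALIZABLEXACT *)
        ShmemInitStruct("Serializable xact array", size, &found);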

> If it is
> necessary, the most obvious way to manage this is just to force
> cancellation of the oldest running serializable transaction and
> running ClearOldPredicateLocks(), perhaps iterating, until we free
> an entry to service the new request.

Hmm, that's not very appealing either. But perhaps it's still better 
than not letting any new transactions begin. We could say "snapshot 
too old" in the error message :-).

-- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com

