Re: SSI bug?

From: Kevin Grittner
Subject: Re: SSI bug?
Date:
Msg-id: 4D63F869020000250003AE82@gw.wicourts.gov
In response to: Re: SSI bug?  (Dan Ports <drkp@csail.mit.edu>)
Responses: Re: SSI bug?  (Dan Ports <drkp@csail.mit.edu>)
List: pgsql-hackers
Dan Ports <drkp@csail.mit.edu> wrote:
> On Tue, Feb 22, 2011 at 10:51:05AM -0600, Kevin Grittner wrote:
> The theory was before that the local lock table would only have
> false negatives, i.e. if it says we hold a lock then we really do.
> That makes it a useful heuristic because we can bail out quickly
> if we're trying to re-acquire a lock we already hold. It seems
> worthwhile to try to preserve that.
> 
> This property holds as long as the only time one backend edits
> another's lock list is to *add* a new lock, not remove an existing
> one.  Fortunately, this is true most of the time (at a glance, it
> seems to be the case for the new tuple update code).  
> 
> There are two exceptions; I think they're both OK but we need to
> be careful here.
>  - if we're forced to promote an index page lock's granularity to
>    avoid running out of shared memory, we remove the fine-grained
>    one. This is fine as long as we relax our expectations to "if
>    the local lock table says we hold a lock, then we hold that
>    lock or one that covers it", which is acceptable for current
>    users of the table.
That makes sense.  This one sounds OK.
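The fast-path property under discussion can be sketched as follows (a hypothetical illustration; the names and structures are not PostgreSQL's actual implementation): presence in the backend-local table means the lock, or one covering it, is really held, so the backend can skip the shared table entirely; absence means nothing and forces the slow path.

```python
# Hypothetical sketch of the backend-local lock table heuristic
# discussed above; names are illustrative, not PostgreSQL's.

class SharedTable:
    """Stand-in for the definitive shared-memory HTAB."""
    def __init__(self):
        self.locks = set()
        self.lwlock_acquisitions = 0
    def acquire(self, target):
        self.lwlock_acquisitions += 1  # models taking the LW lock
        self.locks.add(target)

class LocalPredicateLockTable:
    """Backend-local cache of predicate locks this backend holds.

    Relaxed invariant: if an entry is present, the backend holds
    that lock *or a coarser lock that covers it* in shared memory.
    Absence tells us nothing (a "false negative"), so on a miss we
    must fall back to the shared, LWLock-protected table.
    """
    def __init__(self):
        self.held = set()
    def acquire(self, target, shared_table):
        if target in self.held:
            return  # fast path: no LWLock, no shared-memory access
        shared_table.acquire(target)   # slow path
        self.held.add(target)

shared = SharedTable()
local = LocalPredicateLockTable()
local.acquire(("index", "page", 42), shared)
local.acquire(("index", "page", 42), shared)  # hits the fast path
assert shared.lwlock_acquisitions == 1
```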
>  - if we combine two index pages, we remove the lock entry for the
>    page being deleted. I think that's OK because the page is being
>    removed so we will not make any efforts to lock it.
I'm not sure it's safe to assume that the index page won't get
reused before the local lock information is cleared.  In the absence
of a clear proof that it is safe, or some enforcement mechanism to
ensure that it is, I don't think we should make this assumption. 
Off-hand I can't think of a clever way to make this safe that would
cost less than taking out the LW lock and checking the definitive
shared memory HTAB, but that might be for lack of creative thinking
at the moment.
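To make the concern concrete, here is a hypothetical sequence (illustrative Python, not PostgreSQL code) in which another backend removes our shared-memory entry during a page merge, the page number is later reused, and the stale local entry makes the fast path skip a lock we no longer hold:

```python
# Hypothetical illustration of the hazard described above.

local_table = {("index", "page", 7)}   # this backend's local cache
shared_table = {("index", "page", 7)}  # definitive shared state

# Another backend combines page 7 into a neighbor and, in doing so,
# deletes our entry from the shared table:
shared_table.discard(("index", "page", 7))

# Later, page 7 is recycled for new data and we try to lock it:
target = ("index", "page", 7)
if target in local_table:
    pass  # fast path: we *think* we already hold the lock
else:
    shared_table.add(target)

# Result: no lock exists in shared memory, yet this backend
# believes it is covered.
assert target not in shared_table
assert target in local_table
```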
>> Yeah, as far as I can recall the only divergence was in *page*
>> level entries for *indexes* until this latest patch.  We now have
>> *tuple* level entries for *heap* relations, too.
> 
> *nod*. I'm slightly concerned about the impact of that on
> granularity promotion -- the new locks created by heap updates
> won't get counted toward the lock promotion thresholds. That's
> not a fatal problem or anything, but it could make granularity
> promotion less effective at conserving lock entries.
This pattern doesn't seem to come up very often in most workloads. 
Since it's feeding into a heuristic which already triggers pretty
quickly I think we should be OK with this.  It makes me less tempted
to tinker with the threshold for promoting tuple locks to page
locks, though.
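For reference, the promotion heuristic in question works roughly like this (a sketch with an illustrative threshold; the constant and names are not PostgreSQL's actual values): once a backend accumulates more than some number of tuple locks on one page, they are collapsed into a single page lock, freeing shared-memory entries. Locks added directly to another backend's list by the tuple-update code would bypass this counter, which is why promotion could trigger later than intended.

```python
# Hypothetical sketch of granularity promotion; the threshold and
# names are illustrative, not PostgreSQL's actual implementation.

MAX_TUPLE_LOCKS_PER_PAGE = 3

def acquire_tuple_lock(locks, page, tuple_id):
    tuples = locks.setdefault(page, set())
    if tuples is None:
        return  # a page-level lock already covers every tuple
    tuples.add(tuple_id)
    if len(tuples) > MAX_TUPLE_LOCKS_PER_PAGE:
        locks[page] = None  # promote: one page lock, entries freed

locks = {}
for t in range(5):
    acquire_tuple_lock(locks, page=1, tuple_id=t)
assert locks[1] is None  # promoted after crossing the threshold
```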
The only alternative I see would be to use some form of asynchronous
notification of the new locks so that the local table can be
maintained.  That seems overkill without some clear evidence that it
is needed.  I *really* wouldn't want to go back to needing LW locks
to maintain this info; as you know (and stated only for the benefit
of the list), it was a pretty serious contention point in early
profiling and adding the local table was a big part of getting an
early benchmark down from a 14+% performance hit for SSI to a 1.8%
performance hit.
-Kevin

