Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE

From: Peter Geoghegan
Subject: Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE
Date:
Msg-id: CAM3SWZQaASKymDtsZcsCpSOP2_SHhtsZcMLFnnoJL26RLzvoAw@mail.gmail.com
In reply to: Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE  (Peter Geoghegan <pg@heroku.com>)
Responses: Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE  (Peter Geoghegan <pg@heroku.com>)
           Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE  (Heikki Linnakangas <hlinnakangas@vmware.com>)
List: pgsql-hackers
On Fri, Jan 10, 2014 at 7:59 PM, Peter Geoghegan <pg@heroku.com> wrote:
>> *shrug*. I'm not too concerned about performance during contention. But
>> let's see how this fixed version performs. Could you repeat the tests you
>> did with this?
>
> Why would you not be too concerned about the performance with
> contention? It's a very important aspect. But even if you don't, if
> you look at the transaction throughput with only a single client in
> the update-heavy benchmark [1] (with one client there is a fair mix of
> inserts and updates), my approach still comes out far ahead.
> Transaction throughput is almost 100% higher, with the *difference*
> exceeding 150% at 8 clients, though never climbing much beyond that. I
> think that the problem isn't so much with contention between clients
> as much as with contention between inserts and updates, which affects
> everyone to approximately the same degree. And the average max latency
> across runs for one client is 130.447 ms, as opposed to 0.705 ms with
> my patch - the latter being less than 1% of the former. Whatever way
> you cut it, the
> performance of my approach is far superior. Although we should
> certainly investigate the impact of your most recent revision, and I
> intend to, how can you not consider those differences to be extremely
> significant?

So I re-ran the same old benchmark, where we're almost exclusively
updating. Results for your latest revision were very similar to those
for my patch:

http://postgres-benchmarks.s3-website-us-east-1.amazonaws.com/exclusion-no-deadlock/

This suggests that the main problem encountered was lock contention
among old, broken promise tuples. Note that this benchmark doesn't
involve any checkpointing, and everything fits in memory.
Opportunistic pruning is possible, which I'd imagine helps a lot with
the bloat, at least in this benchmark - there are only ever 100,000
live tuples. That might not always be true, of course.

In any case, my patch is bound to win decisively at the other extreme,
the insert-only case, because with your approach the overhead of doing
an index scan first is always wasted there, while the overhead of my
extended btree leaf page locking has been shown to be quite low. In
the past you've spoken of avoiding that overhead through an adaptive
strategy based on statistics, but I think you'll have a hard time
beating a strategy where the decision comes as late as possible and is
informed by highly localized page-level metadata that is already
available. My implementation can very inexpensively abort an attempt
to just read an existing would-be duplicate (taking no strong locks),
and go back to just after the _bt_search() call to get a heavyweight
lock if reading doesn't work out (that is, if no duplicate is found),
so that its prior work isn't wasted; see the sketch below. Doing well
at one of the two extremes, insert-mostly or update-only, is
relatively easy; dynamically adapting to one or the other is much
harder, especially with a consistent mix of inserts and updates, where
general observations aren't terribly useful.
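To make the ordering concrete, here's a rough sketch of that control
flow (_bt_check_duplicate_cheaply is a hypothetical helper, and the
details are simplified relative to the actual patch):

    BTStack         stack;
    Buffer          buf;
    ItemPointerData conflict_tid;

    /*
     * Descend to the leaf page once, then try to find an existing
     * duplicate while holding no strong locks.
     */
    stack = _bt_search(rel, natts, skey, false, &buf, BT_READ);
    if (_bt_check_duplicate_cheaply(rel, buf, itup, &conflict_tid))
    {
        /* a duplicate exists: hand back its TID to be locked/updated */
        _bt_relbuf(rel, buf);
        return conflict_tid;
    }
    /*
     * No duplicate found by the cheap read, so we must insert: come
     * back to this point (not the root), take the heavyweight value
     * lock, re-verify uniqueness, and do the insertion proper,
     * without repeating the _bt_search() descent.
     */

The escalation path reuses the descent already performed, which is
what makes deferring the decision this late so cheap.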

All other concerns of mine still remain, including the one about the
extra locking of the proc array; I'm worried about the performance
impact of that on parts of the system not exercised by this test.

-- 
Peter Geoghegan


