Re: Making joins involving ctid work for the benefit of UPSERT

From: Peter Geoghegan
Subject: Re: Making joins involving ctid work for the benefit of UPSERT
Date:
Msg-id: CAM3SWZS3vYAZUoWkosQQhFiXg8hr3Ep3sSjTKEi-6Hx+yin=-A@mail.gmail.com
In reply to: Re: Making joins involving ctid work for the benefit of UPSERT  (Kevin Grittner <kgrittn@ymail.com>)
List: pgsql-hackers
On Wed, Jul 23, 2014 at 3:01 PM, Kevin Grittner <kgrittn@ymail.com> wrote:
> Could you clarify that?  Does this mean that you feel that we
> should write to the heap before reading the index to see if the row
> will be a duplicate?  If so, I think that is a bad idea, since this
> will sometimes be used to apply a new data set which hasn't changed
> much from the old, and that approach will perform poorly for this
> use case, causing a lot of bloat.  It certainly would work well for
> the case that most of the rows are expected to be INSERTs rather
> than UPDATEs, but I'm not sure that's justification for causing
> extreme bloat in the other cases.

No, I think we should stagger ordinary index insertion: we lock
indexes, then, if successful, insert a heap tuple before finally
inserting index tuples, using the existing heavyweight page-level
index locks. My design doesn't cause bloat under any circumstances.
Heikki's design, which he sketched with an actual POC implementation,
involved possible bloat in the event of a conflict. He also had to go
and delete the promise tuple (from within ExecInsert()) in the event
of a conflict, before row locking, in order to prevent unprincipled
deadlocking. Andres wanted to do something else along similar lines,
involving "promise tuples", where the xid of the inserter was stored
in indexes with a special flag. That could also cause bloat. I think
that could be particularly bad when conflicts necessitate visiting
indexes one by one to kill promise tuples, as opposed to just killing
one heap tuple as in Heikki's design.

Anyway, both of those designs, and my own, are insert-driven. The main
difference between the design that Heikki sketched and my own is that
mine does not cause bloat, but is more invasive to the nbtree code
(while being less invasive to a lot of other places, since it doesn't
need the killing of the ultimately-conflicting promise tuple that
prevents the deadlocking). But I believe that Heikki's design is
identical to my own in terms of user-visible semantics. That said, his
design was just a sketch, and it wouldn't be fair to hold him to it.
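
To make the concurrency scenario concrete, here's a rough sketch of
the kind of race all of these designs have to deal with (the table
name is made up, and the ON CONFLICT spelling below is purely
illustrative -- no syntax is settled):

CREATE TABLE tab (key int PRIMARY KEY, val text);

-- Two sessions concurrently upsert the same key, i.e. something like:
--   Session 1:  INSERT INTO tab VALUES (1, 's1') ON CONFLICT UPDATE ...
--   Session 2:  INSERT INTO tab VALUES (1, 's2') ON CONFLICT UPDATE ...
--
-- With promise tuples, whichever session loses the race has already
-- written something (a heap tuple in Heikki's sketch, flagged index
-- tuples in Andres' scheme) that has to be killed before the loser
-- can go on to lock and update the winner's row -- that's where the
-- bloat comes from, and the kill step is also where the unprincipled
-- deadlocking had to be headed off.
--
-- With value locks taken up front, the loser has written nothing at
-- the point the conflict is detected; it just goes on to lock the
-- existing row.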

> Also, just a reminder that I'm going to squawk loudly if the
> implementation does not do something fairly predictable and sane
> for the case that the table has more than one UNIQUE index and you
> attempt to UPSERT a row that is a duplicate of one row on one of
> the indexes and a different row on a different index.

Duly noted.  :-)

I think that it's going to have to support that one way or the other.
It may be that I'll want to make the choice of unique index optionally
"implicit", but it's clear that we want to be able to specify a
particular unique index in one form or another. Actually, I've already
added that; it's just optional right now. I haven't found a better way
than simply naming the unique index in the DML statement, which is
ugly, and that's the main reason I want to make it optional. Perhaps
we can overcome this.
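
For the record, the case you describe looks something like this (table
and index names are made up, and the comment stands in for whatever
form the index specification ends up taking -- that's exactly the ugly
part):

CREATE TABLE tab (a int, b int, val text);
CREATE UNIQUE INDEX tab_a_key ON tab (a);
CREATE UNIQUE INDEX tab_b_key ON tab (b);

INSERT INTO tab VALUES (1, 10, 'one');
INSERT INTO tab VALUES (2, 20, 'two');

-- The proposed row (1, 20, 'three') conflicts with (1, 10, 'one') on
-- tab_a_key and with (2, 20, 'two') on tab_b_key -- two different
-- existing rows -- so "update the conflicting row" is ambiguous
-- unless the statement says which unique index arbitrates:
--
--   INSERT INTO tab VALUES (1, 20, 'three')
--       ON CONFLICT UPDATE ...  /* naming tab_a_key, in some form */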

-- 
Peter Geoghegan


