Re: WIP: generalized index constraints

From: Greg Stark
Subject: Re: WIP: generalized index constraints
Date:
Msg-id: 407d949e0907060428l47d4e4a3r805159e2443ff178@mail.gmail.com
In reply to: Re: WIP: generalized index constraints (Simon Riggs <simon@2ndQuadrant.com>)
Responses: Re: WIP: generalized index constraints (Jeff Davis <pgsql@j-davis.com>)
List: pgsql-hackers
On Mon, Jul 6, 2009 at 11:56 AM, Simon Riggs<simon@2ndquadrant.com> wrote:
> How will you cope with a large COPY? Surely there can be more than one
> concurrent insert from any backend?

He only needs to handle inserts for the period they're actively being
inserted into the index. Once they're in the index he'll find them
using the index scan. In other words, this is all a proxy for the way
btree locks index pages while it checks for a unique key violation.

I'm a bit concerned about the use of tid. You might have to look at a
lot of heap pages to check for conflicts. I suppose they're almost
certainly all in shared memory though. Also, it sounds like you're
anticipating the possibility of dead entries in the array, but if you
do, then you need to store the xmin as well, to protect against a tuple
that's been vacuumed and had its line pointer reused since. But I
don't see the necessity for that anyway, since you can just clean up
the entry on abort.


-- 
greg
http://mit.edu/~gsstark/resume.pdf

