Re: WIP: generalized index constraints

From: Greg Stark
Subject: Re: WIP: generalized index constraints
Date:
Msg-id: 407d949e0907071057t59baf8eat4b77ef1c811ea596@mail.gmail.com
In reply to: Re: WIP: generalized index constraints  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Tue, Jul 7, 2009 at 6:22 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> This seems a bit pointless.  There is certainly not any use case for a
> constraint without an enforcement mechanism (or at least none the PG
> community is likely to consider legitimate ;-)).  And it's not very
> realistic to suppose that you'd check a constraint by doing a seqscan
> every time.  Therefore there has to be an index underlying the
> constraint somehow.

I'm not entirely convinced that running a full scan to enforce
constraints is necessarily such a crazy idea. It may well be the most
efficient approach after a major bulk load. And consider a read-only
database where the only purpose of the constraint is to inform the
optimizer that it can trust the property to hold.

That said, this seems like an orthogonal issue to me.

> Jeff's complaint about total order is not an
> argument against having an index, it's just pointing out that btree is
> not the only possible type of index.  It's perfectly legitimate to
> imagine using a hash index to enforce uniqueness, for example.  If hash
> indexes had better performance we'd probably already have been looking
> for a way to do that, and wanting some outside-the-AM mechanism for it
> so we didn't have to duplicate code from btree.

I'm a bit at a loss why we need this extra data structure though. The
whole duplicated code issue seems to me to be one largely of code
structure. If we hoisted the heap-value rechecking code out of the
btree AM then the hash AM could reuse it just fine.

Both the hash and btree AMs would have to implement some kind of
"insert-unique-key" operation which would hold some kind of lock
preventing duplicate unique keys from being inserted, but both btree
and hash could implement that efficiently by locking one page or one
hash value.

GiST would need something like this "store the key value or tid in
shared memory" mechanism. But that could be implemented as an external
facility which GiST then made use of -- just the way every part of the
system makes use of other parts. It doesn't mean we have to take
"prevent concurrent unique inserts" away from the AM, which knows best
how to handle it.

--
greg
http://mit.edu/~gsstark/resume.pdf

