Re: Solving the OID-collision problem

From Simon Riggs
Subject Re: Solving the OID-collision problem
Date
Msg-id 1123586488.3670.533.camel@localhost.localdomain
In reply to Re: Solving the OID-collision problem  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Solving the OID-collision problem  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Mon, 2005-08-08 at 19:50 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > On Mon, 2005-08-08 at 16:55 -0400, Tom Lane wrote:
> >> Considering we don't even have code to do this, much less have expended
> >> one day of beta testing on it, back-patching seems a bit premature.
> 
> > You provided a patch and explained your testing of it. It seems to be a
> > useful test to me, and as I said a practical solution to OID wrap.
> 
> I didn't provide a patch --- I provided a proof-of-concept hack that
> covered just two of the seventeen catalogs with OIDs (and not every case
> even for those two).  A real patch would likely be much more invasive
> than this, anyway, because we'd want to fix things so that you couldn't
> accidentally forget to use the free-OID-finding code.

OK, I see where you are coming from now. But I also see it is a much
bigger problem than it first appeared.

We either need a special routine for each catalog table, or we scan
all tables, all of the time. The latter is a disaster, so let's look
at the former: sprinkling the code with appropriate catalog checks
would be a lot of work, probably error-prone and hard to maintain. We
would never be sure that any particular check had been done
appropriately.

Different proposal: 
1. When we wrap, we set up an OID Free Space Map. We do this once at
wrap time, rather than every time we collide. We scan all the catalog
tables, set the bits in a single 8192-byte block, and write it out to
disk. We then allocate OIDs only from completely untouched chunks,
otherwise much as we do now, except for allocating a fresh chunk every
32768 OIDs. In theory, we will never collide on permanent catalog
entries. (If the OIDFSM is not there, we assume we haven't wrapped
yet.)
2. We segment the available OID space, so that temporary objects are
kept out of the range used by permanent ones.

The first feature is designed to simplify the OID checking, so that we
don't need to add lots of additional code: it can all be isolated in
one place. It also performs much better. The segmentation of the OID
space makes it much less likely that we would ever use up all the bits
in the FSM (and so I would propose not to plug that gap).
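
To make the first feature more concrete, here's a rough sketch of the
sort of thing I have in mind (OidFsm, oidfsm_mark_used and
oidfsm_next_free_chunk are just names made up for illustration, not
existing backend code; the arithmetic assumes one bit per 32768-OID
chunk, so an 8192-byte block covers the 2^31 OIDs below
FirstTemporaryObjectId):

/*
 * Rough sketch only.  One 8192-byte block = 65536 bits, one bit per
 * 32768-OID chunk, covering the 2^31 OIDs below the temp zone.
 */
#define OIDFSM_BLOCKSZ   8192
#define OIDS_PER_CHUNK   32768
#define OIDFSM_NCHUNKS   (OIDFSM_BLOCKSZ * 8)     /* 65536 chunks */
#define FIRST_TEMP_OID   0x80000000u              /* max/2 */

typedef struct OidFsm
{
    unsigned char bits[OIDFSM_BLOCKSZ];           /* 1 = chunk touched */
} OidFsm;

/*
 * Mark the chunk containing 'oid' as in use.  Called while scanning
 * the catalogs at wrap time (and again during WAL replay).
 */
static void
oidfsm_mark_used(OidFsm *fsm, unsigned int oid)
{
    unsigned int chunk;

    if (oid >= FIRST_TEMP_OID)
        return;                 /* temp zone is not tracked by the FSM */
    chunk = oid / OIDS_PER_CHUNK;
    fsm->bits[chunk / 8] |= (unsigned char) (1 << (chunk % 8));
}

/*
 * Hand back the first OID of a completely untouched chunk, or 0 if the
 * map is full.  Chunk 0 is skipped because it contains InvalidOid and
 * the reserved system OIDs.
 */
static unsigned int
oidfsm_next_free_chunk(OidFsm *fsm)
{
    unsigned int chunk;

    for (chunk = 1; chunk < OIDFSM_NCHUNKS; chunk++)
    {
        if ((fsm->bits[chunk / 8] & (1 << (chunk % 8))) == 0)
        {
            fsm->bits[chunk / 8] |= (unsigned char) (1 << (chunk % 8));
            return chunk * OIDS_PER_CHUNK;
        }
    }
    return 0;
}

A chunk handed back this way is guaranteed to contain no permanent
catalog entries, which is what lets the normal allocation path avoid
per-catalog uniqueness checks.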

Temporary objects would be assigned OIDs starting at
FirstTemporaryObjectId (max/2) and continuing up to the max. When they
hit *their* max, they cycle back round to FirstTemporaryObjectId.
If we collide on an OID, we simply issue another one.
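
Roughly, something like this (again only a sketch; FirstTemporaryObjectId
and the counter handling here are illustrative, not real code):

#define MaxObjectId             0xFFFFFFFFu
#define FirstTemporaryObjectId  (MaxObjectId / 2 + 1)

static unsigned int nextTempOid = FirstTemporaryObjectId;

static unsigned int
GetNewTempOid(void)
{
    unsigned int oid = nextTempOid;

    if (nextTempOid == MaxObjectId)
        nextTempOid = FirstTemporaryObjectId;  /* wrap within temp zone */
    else
        nextTempOid++;

    /*
     * The caller still checks for a collision with an existing temp
     * object and just asks again if it finds one.
     */
    return oid;
}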

The "main" space would then be available for use by all other objects. 

The OIDFSM is much the same as the CLOG, so perhaps we might even reuse
that code. However, since the OIDFSM is so rarely used, there seems
little need to cache it, so that would probably be overkill. And since,
as I think I've mentioned :-) , we should backpatch this to 7.3, we
wouldn't be able to use the slru.c approach there anyway (nor would we
be able to implement the second feature, temp OID zoning).

Since OIDs are already xlogged, we need not write anything differently
there. We would need to update the recovery code to maintain the OIDFSM,
though it would probably be wise to rebuild it completely after a PITR.
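
For example (purely a sketch, reusing the hypothetical oidfsm_mark_used
from above): the existing XLOG_NEXTOID records already tell us which OID
ranges have been handed out, so replay could just mark the corresponding
chunk as touched:

/* Hypothetical redo hook: keep the OIDFSM in step during recovery. */
static void
oidfsm_redo_nextoid(OidFsm *fsm, unsigned int loggedNextOid)
{
    /*
     * The record says OIDs up to loggedNextOid may have been consumed,
     * so make sure the chunk containing them is marked as touched.
     */
    oidfsm_mark_used(fsm, loggedNextOid);
}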

Best Regards, Simon Riggs


