Re: Using a postgres table to maintain unique id?

From: M.Feldtmann@t-online.de (Marten Feldtmann)
Subject: Re: Using a postgres table to maintain unique id?
Date:
Msg-id: 3A119C85.4ED7884D@toppoint.de
In reply to: Re: Using a postgres table to maintain unique id?  (Poet/Joshua Drake <poet@linuxports.com>)
List: pgsql-sql

Steve Wampler wrote:
> 
> Yes, but...
> 
> (1) The application I have is composed of about 50 processes
>     running on 3 different OS/architectures (Linux/intel,
>     Solaris/sparc, and VxWorks/ppc).  The IDs I need must be
>     unique across all processes (I suppose one solution would
>     be to provide each ID with a unique prefix based on the
>     process that is running, but...)
We've built a document management system using this approach:
all ids created by the clients are based on a kind of high-low
algorithm that produces unique indices.
The indices are unique among all possible clients ... the
number of clients does not matter. As I said before: better
than any hardwired solution.
You need two database queries during the normal lifetime
of a client to get the base information for creating unique
ids .. during its lifetime the ids are created offline
and they are guaranteed to be unique.
Actually, we're now in the process of building an object-oriented
PPD system and we use the same algorithm again.

> 
> (2) Some of these systems are real-time boxes that might get
>     rebooted at any moment, or might hang for hardware-related
>     reasons [I'd like to able to say that all of the processes
>     could detect imminent failure, but unfortunately, I can't].
>     So determining when a client "finishes" is not always possible,
>     which prevents (he claims) the above solution from claiming
>     ID uniqueness.
>
It does not matter unless your machines reboot every second,
and even then you could go on for ten or twenty years before
you run out of indices.
> (where N might be < 1.0).  This, while still not guaranteeing
> uniqueness, would at least come pretty close...  It would still be
> nice to avoid having to VACUUM ANALYZE this table, though, and it
The base idea for all of it is simple:
The unique id is based on three integer numbers:
a) id-1 is a class id number (16 bit ?)
b) id-2 is a global-session-number (32 bit): n
c) id-3 is a local-session-number (32 bit): x

The id-3, id-2 and id-1 are each converted to base 36 and thereby
turned into strings. The resulting unique id is about
15 characters long (6+6+3).
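
For illustration only, here is a minimal Python sketch of that
conversion. The field widths (3/6/6) follow the (6+6+3) figure above,
and the function names are my own, not from the original post:

DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base36(value, width):
    """Convert a non-negative integer to a zero-padded base-36 string."""
    chars = []
    while value:
        value, rem = divmod(value, 36)
        chars.append(DIGITS[rem])
    return "".join(reversed(chars)).rjust(width, "0")

def make_unique_id(class_id, global_session, local_session):
    # 3 chars for the class id, 6 each for the two session numbers;
    # a full 32-bit value can spill into a 7th base-36 digit, the
    # string simply grows in that case.
    return (to_base36(class_id, 3)
            + to_base36(global_session, 6)
            + to_base36(local_session, 6))

print(make_unique_id(1, 2, 0))   # -> '001000002000000'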
We need a table to hold pairs of "global-id, local-id"; this table
is initially empty.
When a client starts, it connects to the database, locks this
table, and then the following happens:
a) if the table is empty, the client uses (1,0) for itself
   and stores (2,0) for the next client in the table.
b) if the table has ONE entry, the client removes the pair (n,x)
   from the table and stores (n+1,0) in the table.
c) if the table has more than one entry, the client takes any
   entry (normally the one with the lowest n) from the table and
   removes it.
d) the client unlocks the table
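
A minimal sketch of that acquisition step, assuming a Python client
using psycopg2 and a pool table named session_pool(global_id,
local_id); the table and column names are assumptions for
illustration, not from the original post:

import psycopg2

def acquire_session(conn):
    """Fetch a (global, local) pair as described above; assumed schema:
       session_pool(global_id integer, local_id integer)."""
    with conn.cursor() as cur:
        # psycopg2 opens a transaction implicitly; the lock serializes
        # concurrent clients (step "locks this table").
        cur.execute("LOCK TABLE session_pool IN ACCESS EXCLUSIVE MODE")
        cur.execute("SELECT global_id, local_id FROM session_pool"
                    " ORDER BY global_id")
        rows = cur.fetchall()

        if not rows:                  # case a) table is empty
            mine = (1, 0)
            cur.execute("INSERT INTO session_pool VALUES (2, 0)")
        elif len(rows) == 1:          # case b) exactly one entry
            mine = rows[0]
            cur.execute("DELETE FROM session_pool WHERE global_id = %s",
                        (mine[0],))
            cur.execute("INSERT INTO session_pool VALUES (%s, 0)",
                        (mine[0] + 1,))
        else:                         # case c) take the lowest n, remove it
            mine = rows[0]
            cur.execute("DELETE FROM session_pool WHERE global_id = %s",
                        (mine[0],))

    conn.commit()                     # step d) commit releases the lock
    return mine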
Now the client is able to create up to 2^32 new unique identifiers
offline. Increase the numbers above and you get even more
possible values.
They create unique identifiers like (n,x), (n,x+1), ...
If a client reaches this limit during its lifetime, it does the
above again.
If the client terminates, it writes its current pair back into this
table.
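
Putting the pieces together, a hypothetical client-side generator
(building on the acquire_session and make_unique_id sketches above)
could look like this; the class name and the write-back on close are
my own illustration of the behaviour described:

class IdGenerator:
    """Generates ids offline; only goes back to the database when the
       local counter is exhausted."""
    LOCAL_LIMIT = 2 ** 32

    def __init__(self, conn):
        self.conn = conn
        self.global_id, self.local_id = acquire_session(conn)

    def next_id(self, class_id):
        if self.local_id >= self.LOCAL_LIMIT:
            # Limit reached during the client's lifetime: acquire again.
            self.global_id, self.local_id = acquire_session(self.conn)
        uid = make_unique_id(class_id, self.global_id, self.local_id)
        self.local_id += 1
        return uid

    def close(self):
        # On clean termination, give the current (global, local) pair
        # back to the pool so another client can continue with it.
        with self.conn.cursor() as cur:
            cur.execute("INSERT INTO session_pool VALUES (%s, %s)",
                        (self.global_id, self.local_id))
        self.conn.commit()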
Ok, that's it.
If you want to have more information ... just contact me.


Marten

