Re: LargeObject API and OIDs
| From | Tom Lane |
|---|---|
| Subject | Re: LargeObject API and OIDs |
| Date | |
| Msg-id | 17821.1098649614@sss.pgh.pa.us |
| In reply to | LargeObject API and OIDs (Christian Niles <christian@unit12.net>) |
| Responses | Re: LargeObject API and OIDs |
| List | pgsql-jdbc |
Christian Niles <christian@unit12.net> writes:
> However, since a versioning system will have a higher number of entries
> compared to a normal storage system, I'm curious if there's any chance
> for data corruption in the case that the DB runs out of OIDs. Ideally,
> the database would raise an exception, and leave the existing data
> untouched. From what I've read in the documentation, OIDs aren't
> guaranteed to be unique, and may cycle. In this case, would the first
> large object after the limit overwrite the first object?
No; instead you'd get a failure during lo_create:
```c
	/* Check for duplicate (shouldn't happen) */
	if (LargeObjectExists(file_oid))
		elog(ERROR, "large object %u already exists", file_oid);
```
You could deal with this by retrying lo_create until it succeeds.
However, if you are expecting more than a few tens of millions of
objects, you probably don't want to go this route because the
probability of collision will be too high; you could spend a long time
iterating to find a free OID. Something involving a bigint identifier
would work better.
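For reference, a minimal retry loop over the pgJDBC LargeObjectManager might look like the sketch below. The helper name and retry bound are only illustrative; it assumes a driver version that exposes createLO() (older drivers use create() instead), that the Connection is or can be cast to an org.postgresql connection, and that autocommit is off, since large object calls must run inside a transaction. A savepoint keeps a failed attempt from aborting that transaction.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObjectManager;

public class LoCreateRetry
{
    /*
     * Create a large object, retrying a bounded number of times if the
     * server hands out an OID already taken by another large object.
     * Assumes autocommit is off; the savepoint prevents a failed attempt
     * from aborting the surrounding transaction.
     */
    static long createWithRetry(Connection conn, int maxAttempts)
        throws SQLException
    {
        // assumes conn is (or unwraps to) an org.postgresql connection
        LargeObjectManager lom = ((PGConnection) conn).getLargeObjectAPI();
        SQLException lastError = null;

        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            Savepoint sp = conn.setSavepoint();
            try
            {
                long oid = lom.createLO();   // server picks the next OID
                conn.releaseSavepoint(sp);
                return oid;
            }
            catch (SQLException e)
            {
                conn.rollback(sp);           // clear the error state, then retry
                lastError = e;
            }
        }

        if (lastError == null)
            throw new SQLException("maxAttempts must be at least 1");
        throw lastError;
    }
}
```

Even with a bounded retry, the caveat above still applies: once a large fraction of the 32-bit OID space is in use, most attempts will collide, and a bigint-keyed scheme scales better.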
> Also, would
> the number of large objects available be limited by other database
> objects that use OIDs?
No. Although we use just a single OID sequence generator, each
different kind of system object has a separate unique index (or other
enforcement mechanism), so it doesn't really matter if, say, an OID in
use for a large object is also in use for a table.
regards, tom lane