Re: Could postgres12 support millions of sequences? (like 10 million)

From: pabloa98
Subject: Re: Could postgres12 support millions of sequences? (like 10 million)
Date:
Msg-id: CAEjudX5+EShvXueTJ1xdyWhViHRievhXv+k86reAPjmhEDMXzg@mail.gmail.com
In response to: Re: Could postgres12 support millions of sequences? (like 10 million)  (Rob Sargent <robjsargent@gmail.com>)
Responses: Re: Could postgres12 support millions of sequences? (like 10 million)
List: pgsql-general

> As to below that is going to require more thought.
>
> Still no word on the actual requirement. As someone who believes consecutive numbers on digital invoices are simply a mistaken interpretation of the paper-based system, I suspect a similar error here. But again we haven't really heard, as far as I know. Something really fishy about 99999999.
>
It is fishy. Here is the thing: the code, the element, and the group are parts of a bigger code called the item identifier (or ii).

An item identifier is the result of: code || group || element || qualifier (a constant, like 55) || check digit (computed by some check algorithm).

For example:

for a triplet (group, element, code) like (1, 3, 63), the item identifier (or ii) is: 630010003558 (the trailing 8 is the check digit).
This number is converted to a bigint and stored (and used as a PK or FK in other tables, etc.).

In an item identifier, the digits are allocated like this:

3 digits for group
4 digits for element
8 digits for code (not padded with 0s)
2 digits for qualifier
1 digit for the check digit.
-----------------------------
18 digits for item identifier.
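
A minimal sketch of that layout in SQL (the check algorithm is not specified here, so the check digit is taken as a parameter; all names are illustrative):

    CREATE FUNCTION item_identifier(grp int, elem int, code bigint, check_digit int)
    RETURNS bigint AS $$
      SELECT (code::text                    -- not padded with 0s
              || lpad(grp::text, 3, '0')
              || lpad(elem::text, 4, '0')
              || '55'                       -- the constant qualifier
              || check_digit::text)::bigint
    $$ LANGUAGE sql IMMUTABLE;

    -- item_identifier(1, 3, 63, 8) => 630010003558, matching the example above.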

And that is why the code has a maximum of 8 digits. So when a "code" is generated, it is important that there are no collisions, no big gaps (small gaps are OK, but since we cannot afford to waste too many codes, keeping gaps small matters), and no locks.
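
That requirement is what points at one sequence per (group, element) pair, the subject of this thread: nextval() does not block concurrent callers, never returns the same value twice, and leaves only small gaps (from rollbacks or crashes). A sketch, with an illustrative naming scheme:

    CREATE SEQUENCE IF NOT EXISTS seq_code_1_3 MAXVALUE 99999999;  -- for (group, element) = (1, 3)
    SELECT nextval('seq_code_1_3');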

The lock part is because we solved a similar problem elsewhere in the database with a counter: row-locking the counter row and incrementing it. The result is that all queries using that table are queued per (group, element) pair, which is not that bad because we are not inserting thousands of rows per second. Still, it hurts cluster performance (though performance remains OK from the business point of view). The problem with locks is that they are too sensitive to developer errors and bugs. Sometimes a connected client aborts and the connection is returned to the pool with the lock still held, until the connection is closed or someone unlocks the row. I would prefer something more resilient to developer/programming errors, if possible.
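
For reference, the row-locked counter pattern described above looks roughly like this (table and column names are hypothetical):

    BEGIN;
    -- this row lock queues every caller on the same (group, element):
    SELECT last_code FROM code_counters WHERE grp = 1 AND elem = 3 FOR UPDATE;
    UPDATE code_counters SET last_code = last_code + 1
     WHERE grp = 1 AND elem = 3
     RETURNING last_code;
    COMMIT;

If the client dies between BEGIN and COMMIT, the lock is held until the transaction is rolled back, which is exactly the failure mode above. Running the UPDATE ... RETURNING alone in its own transaction narrows that window, but it is still a lock.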



