Re: Could postgres12 support millions of sequences? (like 10 million)

From: Adrian Klaver
Subject: Re: Could postgres12 support millions of sequences? (like 10 million)
Date:
Msg-id: 73f5c4ac-1bd3-d517-f507-74d042ae1365@aklaver.com
In reply to: Re: Could postgres12 support millions of sequences? (like 10 million)  (pabloa98 <pabloa98@gmail.com>)
Responses: Re: Could postgres12 support millions of sequences? (like 10 million)
List: pgsql-general
On 3/21/20 11:08 AM, pabloa98 wrote:
> 
>      > As to below, that is going to require more thought.
>      >
>     Still no word on the actual requirement. As someone who believes
>     consecutive numbers on digital invoices are simply a mistaken
>     interpretation of the paper-based system, I suspect a similar error
>     here. But again we haven't really heard, as far as I know. Something
>     really fishy about 99999999.
>      >
> 
> It is fishy. This is the thing. The code, element, and group are parts
> of a bigger code called the item identifier (or ii).
> 
> An item identifier is the result of: code || group || element ||
> qualifier (a constant like 55) || check digit computed by some check
> algorithm.
> 
> For example:
> 
> For a triplet (group, element, code) like (1, 3, 63), the item
> identifier (or ii) is: 630010003558 (the last 8 is the check digit).
> This number is converted to a bigint and stored (and used as a PK or FK
> in other tables, etc.).
> 
> In an item identifier, the digits are allocated like this:

Revised, to match the identifier above:

> 
> 8 digits for code (not padded with 0s)
> 3 digits for group
> 4 digits for element
> 2 digits for qualifier
> 1 digit for the check digit.
> -----------------------------
> 18 digits for item identifier.
> 
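For concreteness, a minimal sketch of assembling such an identifier in
PostgreSQL. check_digit() below is a hypothetical placeholder, since the
actual check algorithm was not posted; with the real algorithm the query
would yield 630010003558 as in the example above:

CREATE FUNCTION check_digit(n bigint) RETURNS int AS $$
    -- Placeholder only; substitute the real check-digit algorithm.
    SELECT (n % 10)::int;
$$ LANGUAGE sql IMMUTABLE;

-- Build the ii for (group, element, code) = (1, 3, 63), qualifier 55.
WITH parts(grp, element, code) AS (
    VALUES (1, 3, 63)
), body AS (
    SELECT (code::text                     -- up to 8 digits, unpadded
            || lpad(grp::text, 3, '0')     -- 3 digits for group
            || lpad(element::text, 4, '0') -- 4 digits for element
            || '55')::bigint AS b          -- 2-digit qualifier
    FROM parts
)
SELECT b * 10 + check_digit(b) AS ii       -- append the check digit
FROM body;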


So the question may actually be:

How do we improve our locking code so that we don't have to spawn
millions of sequences?

What is the locking method you are using?


> The lock part is because we solved a similar problem with a counter by
> row-locking the counter and increasing it in another part of the
> database. The result is that all the queries using that table are
> queued per (group, element) pair, which is not that bad because we are
> not inserting thousands of rows per second. Still, it is killing
> cluster performance (although performance is still OK from the business
> point of view). The problem with using locks is that they are too
> sensitive to developer errors and bugs. Sometimes connected clients
> abort and the connection is returned to the pool with the lock still
> active, until the connection is closed or someone unlocks the row. I
> would prefer something more resilient to developer/programming errors,
> if possible.
> 
> 
> 
> 
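A pattern that might be more resilient here (a sketch, assuming a
per-(group, element) counter table is acceptable): do the increment in a
single INSERT ... ON CONFLICT statement, so the row lock is held only
until the transaction ends and is released automatically if the client
aborts:

CREATE TABLE IF NOT EXISTS code_counter (
    grp     int    NOT NULL,
    element int    NOT NULL,
    val     bigint NOT NULL DEFAULT 0,
    PRIMARY KEY (grp, element)
);

-- Creates the counter on first use, otherwise increments it. The row
-- lock taken here ends at COMMIT/ROLLBACK, so an aborted client cannot
-- hand a still-locked row back to the connection pool.
INSERT INTO code_counter AS c (grp, element, val)
VALUES (1, 3, 1)
ON CONFLICT (grp, element)
DO UPDATE SET val = c.val + 1
RETURNING val AS code;

If explicit locks are still needed elsewhere, note that
pg_advisory_xact_lock() is released automatically at transaction end,
unlike session-level advisory locks, which survive until unlocked or the
connection closes.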


-- 
Adrian Klaver
adrian.klaver@aklaver.com


