Hello,
On Thu, 2020-03-19 at 14:36 -0700, pabloa98 wrote:
> Hello,
>
> My schema requires a counter for each combination of 2 values.
> Something like:
>
> CREATE TABLE counter(
> "group" INT NOT NULL, -- GROUP is a reserved word, so it must be quoted
> element INT NOT NULL,
> seq_number INT NOT NULL DEFAULT 0,
> PRIMARY KEY ("group", element)
> );
>
> For each entry in counter, aka for each (group, element) pair, the
> model requires a seq_number.
>
> If I use a table "counter", I could still have counter collisions
> between 2 transactions. I need true sequence behavior. Is that
> possible using a table like "counter", where the counter is
> incremented outside the transaction so it behaves like a
> sequence, without race conditions between concurrent
> transactions?
>
> The other option is to create a sequence for each new (group,
> element) pair using triggers. There are millions of pairs, so this
> approach would generate millions of sequences.
>
> How would a PostgreSQL database behave with millions of sequences
> in a schema? Would performance degrade? Is there any other
> negative impact?
>
> Regards
>
> Pablo
>
Have you tried the following:-
BEGIN;
SELECT (seq_number + 1) FROM counter WHERE "group" = $1
AND element = $2 FOR UPDATE;
If no row is found (in PL/pgSQL, the FOUND variable is false after the
SELECT), insert a row into counter with seq_number = 1; otherwise
update the locked row with (seq_number + 1).
Then do the insert into whatever table needs the seq_number value.
If no errors are detected, COMMIT; otherwise ROLLBACK.
It is all in one transaction block.
We use this approach to assign invoice numbers.
You could pre-seed table counter when a group/element pair is created
for the first time.
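The flow above could be wrapped in a PL/pgSQL function along these
lines. This is only a sketch: the function and parameter names are
illustrative, not from your schema, and it assumes the counter table
shown earlier (with "group" quoted, since GROUP is a reserved word).
Note that two first-time callers for the same pair can still race on
the INSERT; the primary key violation then causes one of them to roll
back and retry, which matches the COMMIT/ROLLBACK handling described
above.

```sql
-- Sketch: allocate the next seq_number for a (group, element) pair.
-- Serializes concurrent callers for the same pair via FOR UPDATE.
CREATE FUNCTION next_seq_number(p_group INT, p_element INT)
RETURNS INT
LANGUAGE plpgsql AS $$
DECLARE
    v_seq INT;
BEGIN
    -- Lock the pair's row; concurrent transactions block here.
    SELECT seq_number + 1 INTO v_seq
    FROM counter
    WHERE "group" = p_group AND element = p_element
    FOR UPDATE;

    IF NOT FOUND THEN
        -- First use of this pair: seed the counter with 1.
        INSERT INTO counter ("group", element, seq_number)
        VALUES (p_group, p_element, 1);
        v_seq := 1;
    ELSE
        UPDATE counter
        SET seq_number = v_seq
        WHERE "group" = p_group AND element = p_element;
    END IF;

    RETURN v_seq;
END;
$$;
```

The caller would use SELECT next_seq_number($1, $2) inside the same
transaction as the insert that consumes the number, so the row lock is
held until COMMIT.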
HTH,
Robert