Re: Could postgres12 support millions of sequences? (like 10 million)

From: Adrian Klaver
Subject: Re: Could postgres12 support millions of sequences? (like 10 million)
Date:
Msg-id: e92daed5-06aa-e802-681c-9bf11f5bf4f1@aklaver.com
In reply to: Re: Could postgres12 support millions of sequences? (like 10 million)  (pabloa98 <pabloa98@gmail.com>)
Responses: Re: Could postgres12 support millions of sequences? (like 10 million)
List: pgsql-general
On 3/19/20 3:32 PM, pabloa98 wrote:
> 
> 
> On Thu, Mar 19, 2020 at 3:17 PM Rob Sargent <robjsargent@gmail.com 
> <mailto:robjsargent@gmail.com>> wrote:
> 
> 
> 
>>     On Mar 19, 2020, at 4:13 PM, pabloa98 <pabloa98@gmail.com
>>     <mailto:pabloa98@gmail.com>> wrote:
>>
>>
>>
>>     On Thu, Mar 19, 2020 at 2:50 PM Rob Sargent <robjsargent@gmail.com
>>     <mailto:robjsargent@gmail.com>> wrote:
>>
>>
>>
>>         > On Mar 19, 2020, at 3:36 PM, pabloa98 <pabloa98@gmail.com
>>         <mailto:pabloa98@gmail.com>> wrote:
>>         >
>>         > Hello,
>>         >
>>         > My schema requires a counter for each combination of 2
>>         values. Something like:
>>         >
>>         > CREATE TABLE counter(
>>         > "group" INT NOT NULL,
>>         > element INT NOT NULL,
>>         > seq_number INT NOT NULL DEFAULT 0,
>>         > PRIMARY KEY ("group", element)
>>         > );
>>         >
>>         > For each entry in counter, aka for each (group, element)
>>         pair, the model requires a seq_number.
>>         >
>>         > If I use a table "counter", I could still have counter
>>         collisions between 2 transactions. I need truly sequence
>>         behavior. Is that possible by using a table like "counter"
>>         table, where the counter could be increased out of the
>>         transaction so it performs as a sequence without having race
>>         conditions between concurrent transactions?
>>         >
>>         > The other option is to create sequences for each new pair of
>>         (group, element) using triggers. There are millions of pairs.
>>         So this approach will generate millions of sequences.
>>         >
>>         > How a PostgreSQL database would behave having millions of
>>         sequences in a schema? Would it degrade its performance? Is
>>         there any negative impact?
>>         >
>>         > Regards
>>         >
>>         > Pablo
>>         >
>>
>>         To clarify, are you hoping for consecutive numbers as each
>>         row is added to the table, i.e. “serial”?
>>
>>         What is the intention of “seq_number”?
>>         >
>>         >
>>
>>
>>     the idea is to have like a serial sequence, but for each pair of
>>     (group, element).
>>
>>     so that when we insert rows in another table, we could have
>>     something like:
>>
>>
>>     group, element, event_id, ...
>>     1, 1, 1
>>     1, 1, 2
>>     1, 1, 3
>>     2, 1, 1
>>     1, 1, 4
>>     1, 3, 1
>>     1, 1, 5
>>     1, 3, 2
>>     2, 1, 2
>>     2, 1, 3
>>
>>     The 3rd column is the sequence number we get from the appropriate
>>     sequence created by the trigger.
>>     I want to implement a variation of
>>     https://stackoverflow.com/a/30204854 and that will generate
>>     millions of sequences.
>>
>>
>     Then I don’t think group/element can be a PRIMARY KEY
> 
> 
> Thank you for your answer. Let me explain:
> 
> Table "counter" will have triggers that will create a sequence with a 
> new row is inserted called counter_1_1_seq, counter_2_1_seq, 
> counter_1_3_seq, etc that will be used to insert values in **another** 
> table. It will be used for nothing else. When we insert millions of 
> pairs group/element, the trigger in that table will generate millions of 
> sequences.
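
(For concreteness, the proposal would look roughly like this; a sketch
only, with a hypothetical function and trigger name, following the
counter_<group>_<element>_seq naming above:)

CREATE OR REPLACE FUNCTION make_pair_seq() RETURNS trigger AS $$
BEGIN
    -- One dedicated sequence per new (group, element) pair.
    EXECUTE format('CREATE SEQUENCE IF NOT EXISTS counter_%s_%s_seq',
                   NEW."group", NEW.element);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER counter_make_seq
    AFTER INSERT ON counter
    FOR EACH ROW EXECUTE FUNCTION make_pair_seq();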

This has train wreck written all over it. If you are going to have a 
trigger firing for each combination of (group, element) I gotta believe 
the table will fall over before you run into sequence issues. Not sure 
why there can't be a serial column that supplies the sequence numbers 
and therefore only one sequence in play. From what I see all you care 
about is that:

group, element, event_id

has an increasing event_id for (group, element) pairs.

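A minimal sketch of that single-sequence version (hypothetical table
name "event"; it assumes the per-pair numbers only need to be
increasing, not gapless):

CREATE TABLE event (
    "group"   INT NOT NULL,
    element   INT NOT NULL,
    event_id  BIGSERIAL,   -- one sequence serves the whole table
    PRIMARY KEY ("group", element, event_id)
);

-- If a consecutive per-pair number is ever needed, derive it:
SELECT "group", element,
       row_number() OVER (PARTITION BY "group", element
                          ORDER BY event_id) AS seq_number
FROM event;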

> 
> My question is how PostgreSQL will behave. Could it deal with millions 
> of sequences? What about system operations such as vacuum, etc.?
> 
> Pablo
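
As to the counter table in your first message: it can hand out
race-free numbers without any sequences at all, because UPDATE takes a
row lock. A sketch (caveats: it serializes concurrent writers per
pair, and unlike a sequence the increment rolls back with the
transaction, which is also what makes it gapless):

UPDATE counter
   SET seq_number = seq_number + 1
 WHERE "group" = 1 AND element = 1
 RETURNING seq_number;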


-- 
Adrian Klaver
adrian.klaver@aklaver.com


