Re: Insert into on conflict, data size upto 3 billion records

From: Rob Sargent
Subject: Re: Insert into on conflict, data size upto 3 billion records
Date:
Msg-id: 6075918d-c07d-7a29-aecc-95e0b160033a@gmail.com
In reply to: Re: Insert into on conflict, data size upto 3 billion records  (Karthik K <kar6308@gmail.com>)
List: pgsql-general

On 2/15/21 12:22 PM, Karthik K wrote:
> Yes, I'm using \copy to load the batch table.
> 
> With the new design we are working on, we expect fewer updates and
> more inserts going forward. One of the target columns I'm updating is
> indexed, so I will drop that index and try it out. Also, from your
> suggestion above, splitting the ON CONFLICT into separate insert and
> update statements performs well, but in order to split the records
> into batches (low, high) I first need to do a count of the primary
> key on the batch tables.
> 
> 
I don't think you need to do a count per se.  If you know the
approximate range (or, better, the min and max) of the key in the
incoming/batch data, you can derive the batch boundaries from that.
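
For illustration, a minimal sketch of that approach in SQL (the names
batch_table, target_table, id and col1 are placeholders, not from this
thread, and :lo/:hi stand for the slice bounds you supply per batch):
read the min and max of the key once, then walk the key space in
fixed-size slices, running the UPDATE and the INSERT separately for
each slice instead of one large INSERT ... ON CONFLICT.

-- Hypothetical schema: batch_table and target_table share the key "id".
-- Find the key range once; no row count is needed.
SELECT min(id) AS lo, max(id) AS hi FROM batch_table;

-- For each slice [:lo, :hi) of that range, e.g. 10 million keys at a time:

-- 1) update the rows that already exist in the target ...
UPDATE target_table t
   SET col1 = b.col1
  FROM batch_table b
 WHERE t.id = b.id
   AND b.id >= :lo AND b.id < :hi;

-- 2) ... then insert the rows that are still missing.
INSERT INTO target_table (id, col1)
SELECT b.id, b.col1
  FROM batch_table b
 WHERE b.id >= :lo AND b.id < :hi
   AND NOT EXISTS (SELECT 1 FROM target_table t WHERE t.id = b.id);

-- If the index on the updated column was dropped before the load,
-- recreate it once after the last slice.

Committing each slice separately keeps transactions short and lets a
failed slice be retried without redoing the whole load.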


