Insert into on conflict, data size upto 3 billion records

From: Karthik Kumar Kondamudi
Subject: Insert into on conflict, data size upto 3 billion records
Date:
Msg-id: CAD-twtSfABMBH3ODxJiKdh6FHBtB0UuXn4mN-xwnC7tb=Cphjg@mail.gmail.com
Responses: Re: Insert into on conflict, data size upto 3 billion records (Ron <ronljohnsonjr@gmail.com>)
List: pgsql-general
Hi, 

I'm looking for suggestions on how to improve the performance of the merge statement below. We have a batch process that loads data into the _batch tables using Postgres, and the task is to update the main target tables if the record already exists, or insert it otherwise. Sometimes these batch tables can grow to 5 billion records. Here is the current scenario:

target_table_main has 700,070,247 records and is hash-partitioned into 50 chunks, with an index on logical_ts. The batch table has 2,715,020,546 records, close to 3 billion, so I'm dealing with a huge data set and am looking for the most efficient way to do this.
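The update-if-exists-else-insert step described above is what PostgreSQL's INSERT ... ON CONFLICT (upsert) expresses directly. A minimal sketch of that pattern, assuming hypothetical table and column names (id as the unique conflict key, logical_ts and payload as data columns; the original post does not give the actual schema), and processing the batch table in key-range slices rather than one multi-billion-row statement:

```sql
-- Hypothetical schema: target_table_main(id bigint PRIMARY KEY, logical_ts timestamptz, payload text),
-- target_table_batch with the same columns. Column names are illustrative, not from the original post.
INSERT INTO target_table_main AS t (id, logical_ts, payload)
SELECT b.id, b.logical_ts, b.payload
FROM   target_table_batch b
WHERE  b.id >= :lo AND b.id < :hi           -- process the batch in slices to keep each transaction small
ON CONFLICT (id) DO UPDATE
SET    logical_ts = EXCLUDED.logical_ts,
       payload    = EXCLUDED.payload
WHERE  t.logical_ts < EXCLUDED.logical_ts;  -- optional: skip rows that are already up to date
```

ON CONFLICT requires a unique index or constraint on the conflict target, and slicing on the hash-partition key means each slice touches only a subset of partitions, which keeps per-transaction WAL volume and lock scope manageable.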

Thank you
