> S3 is not a database. You will need to be more specific about '... then from the S3 it will be picked and gets merged to the target postgres database.'

The data from S3 will be dumped into the stage table, and then the upsert/merge happens from that table to the actual table.
The S3 --> staging table step would be helped by having the data as CSV and then using COPY. The staging --> final table step could be done with either ON CONFLICT or MERGE; you would need to test in your situation to verify which works better.
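The two steps above can be sketched end to end. This is a minimal sketch using SQLite as a stand-in for Postgres (the `ON CONFLICT ... DO UPDATE` upsert syntax is shared between the two); the table and column names are illustrative, and `executemany` stands in for the single `COPY ... FROM ... CSV` command you would use in Postgres:

```python
import csv
import io
import sqlite3

# Illustrative schema; in Postgres the staging table would typically be
# UNLOGGED and truncated between loads.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE staging (id INTEGER, val TEXT)")
conn.execute("INSERT INTO target VALUES (1, 'old'), (2, 'keep')")

# Step 1: bulk-load the CSV delta from S3 into the staging table.
# (In Postgres: COPY staging FROM '...' WITH (FORMAT csv).)
delta_csv = "1,new\n3,added\n"
rows = list(csv.reader(io.StringIO(delta_csv)))
conn.executemany("INSERT INTO staging VALUES (?, ?)", rows)

# Step 2: merge staging into the final table with an upsert.
# SQLite requires the "WHERE true" to disambiguate INSERT..SELECT..ON CONFLICT.
conn.execute("""
    INSERT INTO target (id, val)
    SELECT id, val FROM staging
    WHERE true
    ON CONFLICT (id) DO UPDATE SET val = excluded.val
""")
conn.commit()

print(sorted(conn.execute("SELECT id, val FROM target")))
# -> [(1, 'new'), (2, 'keep'), (3, 'added')]
```

The same shape works with Postgres 15+'s `MERGE` in step 2; which of the two is faster depends on the table, so benchmark both.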
Just a thought: in case the delta record changes are really high (say >30-40% of the total number of rows in the table), OP could also evaluate the "truncate target table + load target table" strategy here. Considering that DDL/TRUNCATE is transactional in Postgres, it can be done online without impacting the ongoing read queries, and performance-wise it would be faster compared to the traditional update/insert/upsert/merge.
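For comparison, here is a minimal sketch of that truncate-and-reload strategy, again with SQLite as a stand-in (SQLite has no `TRUNCATE`, so `DELETE FROM` plays that role; in Postgres you would issue `TRUNCATE target` inside the same transaction) and hypothetical table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("INSERT INTO target VALUES (1, 'stale'), (2, 'stale')")
conn.execute(
    "INSERT INTO staging VALUES (1, 'fresh'), (2, 'fresh'), (3, 'fresh')"
)

# One transaction: empty the target, then bulk-copy the full snapshot in.
# The swap commits atomically, so other sessions never observe a
# half-loaded table.
with conn:  # sqlite3's context manager wraps the statements in a transaction
    conn.execute("DELETE FROM target")          # TRUNCATE target; in Postgres
    conn.execute("INSERT INTO target SELECT * FROM staging")

print(conn.execute("SELECT count(*), min(val) FROM target").fetchone())
```

Whether this beats a merge depends on the delta ratio: reloading rewrites every row, so it only wins once the changed fraction is large enough, which is worth measuring on the real table.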