Discussion: Upsert performance considerations (~1 mil/hour)


Upsert performance considerations (~1 mil/hour)

From: Fredrik Blomqvist
Date: Wed, Sep 4, 2019, 1:30 PM
Hi,

I have spent quite a while researching the performance implications of the built-in upsert (INSERT ... ON CONFLICT DO UPDATE ...) when a large number of upserts are made. The scale is roughly 1 million records/hour, split into batches of around 300 records each.

The table is fairly simple: just a few int/bool fields and a date field. The table holds somewhere between 50 and 60 million records in total. Essentially, every row is supposed to stay up to date within a certain cycle (currently about 2 days). Sometimes a lot of the information changes, sometimes very little or nothing changes (e.g. 300 records get upserted but they are identical to the existing rows).
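For concreteness, the setup looks roughly like this (table and column names are made up for this example, and only 2 of the ~300 rows per batch are shown):

-- Hypothetical table layout matching the description above
CREATE TABLE records (
    id         int  PRIMARY KEY,
    flag_a     bool NOT NULL,
    count_b    int  NOT NULL,
    synced_on  date NOT NULL
);

-- One batched upsert (300-valued in production)
INSERT INTO records (id, flag_a, count_b, synced_on) VALUES
    (1, true,  42, '2019-09-04'),
    (2, false,  7, '2019-09-04')
ON CONFLICT (id) DO UPDATE
SET flag_a    = EXCLUDED.flag_a,
    count_b   = EXCLUDED.count_b,
    synced_on = EXCLUDED.synced_on;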

So far, one hypothesis is that this project is suffering from the large volume of writes happening constantly: even when the upsert results in no inserts/updates, the "failed" inserts from the upsert still get written somewhere (as far as we know). Therefore, the idea is to go back to old-fashioned upserts (writable CTEs, sketched below) and do more granular operations that only insert data that doesn't already exist and only update data that has actually changed. Naturally, though, this will put more read load on the DB and increase query complexity.
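Roughly the kind of statement we have in mind, sketched against the example table above:

WITH input (id, flag_a, count_b, synced_on) AS (
    VALUES (1, true,  42, DATE '2019-09-04'),
           (2, false,  7, DATE '2019-09-04')
),
updated AS (
    -- Update only rows whose contents actually differ
    UPDATE records r
    SET    flag_a = i.flag_a, count_b = i.count_b, synced_on = i.synced_on
    FROM   input i
    WHERE  r.id = i.id
      AND  (r.flag_a, r.count_b, r.synced_on)
           IS DISTINCT FROM (i.flag_a, i.count_b, i.synced_on)
    RETURNING r.id
)
-- Insert only rows that don't exist yet
INSERT INTO records (id, flag_a, count_b, synced_on)
SELECT i.id, i.flag_a, i.count_b, i.synced_on
FROM   input i
WHERE  NOT EXISTS (SELECT 1 FROM records r WHERE r.id = i.id);

One caveat we're aware of: unlike ON CONFLICT, this pattern is not safe under concurrent writers; two sessions upserting the same new key can still race and hit a unique violation on the final INSERT.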

The question is: are we right in assuming that the built-in upsert is useless from a performance perspective at this scale (i.e. only good for much smaller workloads), or are we wrong? I have read a bit about HOT updates and autovacuum tuning, but nothing that addresses a scenario quite like this one.

Worth mentioning is that this DB (PostgreSQL 10.9) is running on Heroku, so we are not able to tune it to our needs. We plan to move at some point, depending on how important that turns out to be. Also, the DB has one follower (i.e. a read replica).

Would really appreciate some good insight on this! Thanks in advance.

Best,
Fredrik

Re: Upsert performance considerations (~1 mil/hour)

From: Jeff Janes
Date: Wed, Sep 4, 2019, 3:58 PM
On Wed, Sep 4, 2019 at 1:30 PM Fredrik Blomqvist <fredrik.blomqvist.95@gmail.com> wrote:
Hi,

I have spent quite a while researching the performance implications of the built-in upsert (INSERT ... ON CONFLICT DO UPDATE ...) when a large number of upserts are made. The scale is roughly 1 million records/hour, split into batches of around 300 records each.

How is that done?  300 single-valued insert statements, grouped into one transaction?  One 300-valued insert statement?
 
So far, one hypothesis is that this project is suffering from the large volume of writes happening constantly: even when the upsert results in no inserts/updates, the "failed" inserts from the upsert still get written somewhere (as far as we know).

You can suppress redundant updates with a trigger (suppress_redundant_updates_trigger), as described at https://www.postgresql.org/docs/current/functions-trigger.html.  This works even for updates that are the result of INSERT ... ON CONFLICT ... DO UPDATE.  There is still some writing, as each tuple does get locked, but it is much less (at least from a WAL perspective).  You can also put a WHERE clause on the DO UPDATE so it only updates if a field has changed, but you have to list each field, connected with OR.
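Sketched against a hypothetical table (adjust the names to your schema), the two options look like this:

-- Built-in trigger that skips updates which would change nothing
CREATE TRIGGER records_suppress_redundant
BEFORE UPDATE ON records
FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger();

-- Or filter inside the upsert itself: each field compared, connected with OR
INSERT INTO records (id, flag_a, count_b, synced_on)
VALUES (1, true, 42, '2019-09-04')
ON CONFLICT (id) DO UPDATE
SET    flag_a    = EXCLUDED.flag_a,
       count_b   = EXCLUDED.count_b,
       synced_on = EXCLUDED.synced_on
WHERE  records.flag_a    IS DISTINCT FROM EXCLUDED.flag_a
   OR  records.count_b   IS DISTINCT FROM EXCLUDED.count_b
   OR  records.synced_on IS DISTINCT FROM EXCLUDED.synced_on;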
 
Therefore, the idea is to go back to old-fashioned upserts (writable CTEs) and do more granular operations that only insert data that doesn't already exist and only update data that has actually changed. Naturally, though, this will put more read load on the DB and increase query complexity.

It shouldn't put any meaningful additional read load on the database, as the ON CONFLICT code needs to do that read as well.  Yes, it makes the code slightly more complex.

Cheers,

Jeff

Re: Upsert performance considerations (~1 mil/hour)

From: Fredrik Blomqvist
Date:
Thanks for the response Jeff!

On Wed, Sep 4, 2019 at 3:58 PM Jeff Janes <jeff.janes@gmail.com> wrote:
On Wed, Sep 4, 2019 at 1:30 PM Fredrik Blomqvist <fredrik.blomqvist.95@gmail.com> wrote:
Hi,

I have spent quite a while researching the performance implications of the built-in upsert (INSERT ... ON CONFLICT DO UPDATE ...) when a large number of upserts are made. The scale is roughly 1 million records/hour, split into batches of around 300 records each.

How is that done?  300 single-valued insert statements, grouped into one transaction?  One 300-valued insert statement?

It's done using one 300-valued insert statement.
 
 
So far, one hypothesis is that this project is suffering from the large volume of writes happening constantly: even when the upsert results in no inserts/updates, the "failed" inserts from the upsert still get written somewhere (as far as we know).

You can suppress redundant updates with a trigger (suppress_redundant_updates_trigger), as described at https://www.postgresql.org/docs/current/functions-trigger.html.  This works even for updates that are the result of INSERT ... ON CONFLICT ... DO UPDATE.  There is still some writing, as each tuple does get locked, but it is much less (at least from a WAL perspective).  You can also put a WHERE clause on the DO UPDATE so it only updates if a field has changed, but you have to list each field, connected with OR.

Didn't know about the trigger method; handy. We were planning on using the WHERE clause to prevent unnecessary updates, so I suppose that will make the situation slightly better. However, we are still left with the unnecessary insert, right? If all 300 values already exist and are up to date, there will be a failed insert that has to be vacuumed away, right? Which in turn means we'd probably need to tune autovacuum to a more aggressive setting if we want to use this kind of upsert.
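If we go down that road, I guess we'd first watch whether dead tuples are actually piling up, with something like this (pg_stat_user_tables is the standard statistics view; 'records' again stands in for the real table name):

-- Dead-tuple accumulation and autovacuum activity for one table
SELECT relname, n_tup_upd, n_tup_hot_upd, n_dead_tup,
       last_autovacuum, autovacuum_count
FROM   pg_stat_user_tables
WHERE  relname = 'records';

-- Per-table autovacuum tuning is plain SQL, so it may work even on a
-- managed host where the global config is off-limits (assuming the
-- provider doesn't block it)
ALTER TABLE records SET (autovacuum_vacuum_scale_factor = 0.01);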
 
Therefore, the idea is to utilize old-fashioned upserts (writeable CTEs) and do more granular operations that can make sure to only insert data that doesn't already exist, and only update data that has actually changed. Naturally, however, this will put more read-load on the DB and increase query complexity.

It shouldn't put a meaningful additional read load on the database, as the ON CONFLICT code still needs to do the read as well.  Yes, it makes the code slightly more complex.

Right, okay. Based on what I have told you so far, would you recommend going with the old-fashioned upsert or the built-in one? Or is there some other key piece of information that could swing that decision?

Best,
Fredrik