Thread: Seeing high query planning time on Azure Postgres Single Server version 11.


Seeing high query planning time on Azure Postgres Single Server version 11.

From: hassan rafi
Date:
Hi team,

We are seeing unusually high query planning times on our Postgres server. I am attaching a few query plans.

select upc from store_seller_products where upc in ('0001600015840','0001600015781','0001600015777','0001600015765','0001600015764','0001600015762','0001600015483','0001600015163','0001600015128','0001600014943','0001600014733','0001600014732','0001600014711','0001600014665','0001600014599','0001600014481','0001600013977','0001600013310','0001600012929','0001600012928','0001600012685','0001600012593','0001600012541','0001600012506','0001600012499','0001600012495','0001600012479','0001600012399','0001600012254','0001600012224','0001600012222','0001600012185','0001600012183','0001600012125','0001600011610','0001600010810','0001600010710','0001600010640','0001600010610','0001600010430','0001600010410','0001600010371','0001595898049','0001595370752','0001595370750','0001595370713','0001590023565','0001590000211','0001590000209','0001583909712') and store_id = '70500101' and pickup = true;

Index Only Scan using idx_store_seller_products_upc_store_id_delivery_p_tmp on store_seller_products  (cost=0.70..99.38 rows=8 width=14) (actual time=10.694..142.050 rows=91 loops=1)
  Index Cond: ((upc = ANY ('{0001600015840,0001600015781,0001600015777,0001600015765,0001600015764,0001600015762,0001600015483,0001600015163,0001600015128,0001600014943,0001600014733,0001600014732,0001600014711,0001600014665,0001600014599,0001600014481,0001600013977,0001600013310,0001600012929,0001600012928,0001600012685,0001600012593,0001600012541,0001600012506,0001600012499,0001600012495,0001600012479,0001600012399,0001600012254,0001600012224,0001600012222,0001600012185,0001600012183,0001600012125,0001600011610,0001600010810,0001600010710,0001600010640,0001600010610,0001600010430,0001600010410,0001600010371,0001595898049,0001595370752,0001595370750,0001595370713,0001590023565,0001590000211,0001590000209,0001583909712}'::text[])) AND (store_id = '70500891'::text))
  Heap Fetches: 91
  Buffers: shared hit=314 read=184
  I/O Timings: read=129.218
Planning Time: 24797.421 ms
Execution Time: 142.131 ms
explain (analyze, verbose, buffers) SELECT products_inventory_delta.upc FROM products_inventory_delta WHERE products_inventory_delta.modality = 'pickup' AND products_inventory_delta.store_id = '70300008' ORDER BY upc DESC LIMIT 51 OFFSET 0;


 Limit  (cost=0.57..54.37 rows=51 width=14) (actual time=27.676..27.831 rows=51 loops=1)
   Output: upc
   Buffers: shared hit=93
   ->  Index Only Scan Backward using products_inventory_delta_pkey on public.products_inventory_delta  (cost=0.57..30625.26 rows=29030 width=14) (actual time=27.674..27.824 rows=51 loops=1)
         Output: upc
         Index Cond: ((products_inventory_delta.store_id = '70300008'::text) AND (products_inventory_delta.modality = 'pickup'::modality))
         Heap Fetches: 50
         Buffers: shared hit=93
 Planning Time: 6142.094 ms
 Execution Time: 27.884 ms


explain (analyze, buffers) SELECT products_inventory_delta.upc FROM products_inventory_delta WHERE products_inventory_delta.modality = 'pickup' AND products_inventory_delta.store_id = '70300008' ORDER BY upc DESC LIMIT 51 OFFSET 0;

 Limit  (cost=0.57..50.96 rows=51 width=14) (actual time=12.290..12.442 rows=51 loops=1)
   Buffers: shared hit=93
   ->  Index Only Scan Backward using products_inventory_delta_pkey on products_inventory_delta  (cost=0.57..28164.01 rows=28502 width=14) (actual time=12.285..12.433 rows=51 loops=1)
         Index Cond: ((store_id = '70300008'::text) AND (modality = 'pickup'::modality))
         Heap Fetches: 53
         Buffers: shared hit=93
 Planning Time: 1165.382 ms
 Execution Time: 12.522 ms

Schema:

CREATE TABLE public.products_inventory_delta (
    upc text NOT NULL,
    store_id text NOT NULL,
    modality public.modality NOT NULL,
    updated_at timestamp NOT NULL,
    CONSTRAINT products_inventory_delta_pkey PRIMARY KEY (store_id, modality, upc)
);
CREATE INDEX store_modality_updates ON public.products_inventory_delta USING btree (store_id, modality, updated_at);


CREATE TABLE public.store_seller_products (
    id int8 NOT NULL GENERATED ALWAYS AS IDENTITY (INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START 1 CACHE 1 NO CYCLE),
    upc text NULL,
    store_id text NULL,
    seller_id text NULL,
    delivery bool NULL,
    ship bool NULL,
    instore bool NULL,
    pickup bool NULL,
    modality_changed_at timestamp NULL,
    price_changed_at timestamp NULL,
    national_price_changed_at timestamp NULL,
    stock_level_changed_at timestamp NULL,
    created_at timestamp NULL,
    updated_at timestamp NULL,
    product_core_info_updated_at timestamp NULL,
    regional_price_changed_at timestamp NULL,
    CONSTRAINT store_seller_products_pkey PRIMARY KEY (id)
);
CREATE INDEX idx_store_seller_products_store_id_instore_upc_p_instore_tmp ON public.store_seller_products USING btree (upc, store_id) INCLUDE (seller_id, modality_changed_at, price_changed_at, stock_level_changed_at, updated_at, product_core_info_updated_at) WHERE (instore = true);
CREATE INDEX idx_store_seller_products_store_id_pickup_upc_partial_wo_pickup ON public.store_seller_products USING btree (upc, store_id) INCLUDE (seller_id, modality_changed_at, price_changed_at, stock_level_changed_at, updated_at, product_core_info_updated_at) WHERE (pickup = true);
CREATE INDEX idx_store_seller_products_store_id_ship_upc_p_wo_ship ON public.store_seller_products USING btree (upc, store_id) INCLUDE (seller_id, modality_changed_at, price_changed_at, stock_level_changed_at, updated_at, product_core_info_updated_at) WHERE (ship = true);
CREATE INDEX idx_store_seller_products_store_id_upc ON public.store_seller_products USING btree (store_id, upc);
CREATE INDEX idx_store_seller_products_upc ON public.store_seller_products USING btree (upc DESC NULLS LAST);
CREATE INDEX idx_store_seller_products_upc_store_id_delivery_p_tmp ON public.store_seller_products USING btree (upc, store_id) INCLUDE (seller_id, modality_changed_at, price_changed_at, stock_level_changed_at, updated_at, product_core_info_updated_at) WHERE (delivery = true);




Thanks,
Hassan

Re: Seeing high query planning time on Azure Postgres Single Server version 11.

From: Ron Johnson
Date:
On Sat, Mar 9, 2024 at 7:18 AM hassan rafi <haassaan.khann@gmail.com> wrote:
Hi team,

We are seeing unusually high query planning times on our Postgres server. I am attaching a few query plans.

PostgreSQL version number?
Rows in the tables?
System load?

Re: Seeing high query planning time on Azure Postgres Single Server version 11.

From: hassan rafi
Date:
Postgres version: PostgreSQL 11.18, compiled by Visual C++ build 1800, 64-bit
relname              |relpages|reltuples|relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|
---------------------+--------+---------+-------------+-------+--------+--------------+----------+-------------+
store_seller_products|16007942|843460096|       797033|r      |      16|false         |NULL      | 131980795904|


relname                 |relpages|reltuples|relallvisible|relkind|relnatts|relhassubclass|reloptions|pg_table_size|
------------------------+--------+---------+-------------+-------+--------+--------------+----------+-------------+
products_inventory_delta| 2847202|259351648|      1606201|r      |       4|false         |NULL      |  23330758656|
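
(For reference, these figures can be reproduced with a pg_class query along these lines:)

  SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,
         relhassubclass, reloptions, pg_table_size(oid) AS pg_table_size
  FROM   pg_class
  WHERE  relname IN ('store_seller_products', 'products_inventory_delta');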

Peak load (write): 3000 TPS (mostly updates).
Peak load (read): 800 TPS.



Re: Seeing high query planning time on Azure Postgres Single Server version 11.

From: Robert Treat
Date:
It'd be worth checking that your default_statistics_target isn't set
to anything wild, but beyond that, it'd be interesting to look at the
output of VACUUM VERBOSE on some of the system catalogs, as it seems
to me you might have catalog bloat.
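
A minimal sketch of those checks (assuming you have the privileges to
vacuum the catalogs; pg_statistic, pg_class and pg_attribute are the
usual suspects for bloat):

  SHOW default_statistics_target;            -- 100 is the default
  VACUUM VERBOSE pg_catalog.pg_statistic;    -- reports dead row versions and page counts
  VACUUM VERBOSE pg_catalog.pg_class;
  VACUUM VERBOSE pg_catalog.pg_attribute;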

I should also mention that you're running a no-longer-supported
version of Postgres (v11), and not even the latest release of that EOL
version. And if I am not mistaken, Azure Postgres Single Server is
also deprecated, so you should really focus on getting upgraded to
something more modern.

Robert Treat
https://xzilla.net



Re: Seeing high query planning time on Azure Postgres Single Server version 11.

From: hassan rafi
Date:
Sure, we will plan to upgrade to the latest version.

schemaname|relname                 |n_tup_ins|n_tup_upd |n_tup_del|n_live_tup|n_dead_tup|last_vacuum|last_autovacuum              |
----------+------------------------+---------+----------+---------+----------+----------+-----------+-----------------------------+
public    |store_seller_products   | 14865951|4584489857|472310200| 845718108|1982033749|           |2024-02-29 01:08:00.000 +0530|
public    |products_inventory_delta| 74889247|1214920129| 74703893| 253783107|  17060377|           |2024-03-09 12:53:00.000 +0530|
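
(These activity counters come from pg_stat_user_tables; a query along these lines reproduces them:)

  SELECT schemaname, relname, n_tup_ins, n_tup_upd, n_tup_del,
         n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
  FROM   pg_stat_user_tables
  WHERE  relname IN ('store_seller_products', 'products_inventory_delta');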

The value of default_statistics_target is set to 100.

The issue of high query planning time seems to intermittently resolve itself, only to reoccur after a few hours.

Re: Seeing high query planning time on Azure Postgres Single Server version 11.

From: Tom Lane
Date:
hassan rafi <haassaan.khann@gmail.com> writes:
> The issue of high query planning time seems to intermittently resolve
> itself, only to reoccur after a few hours.

I wonder if you are running into the lack of this fix:

Author: Tom Lane <tgl@sss.pgh.pa.us>
Branch: master Release: REL_16_BR [9c6ad5eaa] 2022-11-22 14:40:20 -0500
Branch: REL_15_STABLE Release: REL_15_2 [2debceed2] 2022-11-22 14:40:44 -0500
Branch: REL_14_STABLE Release: REL_14_7 [bd06fe4de] 2022-11-22 14:40:45 -0500
Branch: REL_13_STABLE Release: REL_13_10 [6e639267a] 2022-11-22 14:40:45 -0500
Branch: REL_12_STABLE Release: REL_12_14 [ec10b6139] 2022-11-22 14:40:45 -0500
Branch: REL_11_STABLE Release: REL_11_19 [b96a096db] 2022-11-22 14:40:46 -0500

    YA attempt at taming worst-case behavior of get_actual_variable_range.

    We've made multiple attempts at preventing get_actual_variable_range
    from taking an unreasonable amount of time (3ca930fc3, fccebe421).
    But there's still an issue for the very first planning attempt after
    deletion of a large number of extremal-valued tuples.  While that
    planning attempt will set "killed" bits on the tuples it visits and
    thereby reduce effort for next time, there's still a lot of work it
    has to do to visit the heap and then set those bits.  It's (usually?)
    not worth it to do that much work at plan time to have a slightly
    better estimate, especially in a context like this where the table
    contents are known to be mutating rapidly.

    Therefore, let's bound the amount of work to be done by giving up
    after we've visited 100 heap pages.  Giving up just means we'll
    fall back on the extremal value recorded in pg_statistic, so it
    shouldn't mean that planner estimates suddenly become worthless.

    Note that this means we'll still gradually whittle down the problem
    by setting a few more index "killed" bits in each planning attempt;
    so eventually we'll reach a good state (barring further deletions),
    even in the absence of VACUUM.

    Simon Riggs, per a complaint from Jakub Wartak (with cosmetic
    adjustments by me).  Back-patch to all supported branches.

    Discussion: https://postgr.es/m/CAKZiRmznOwi0oaV=4PHOCM4ygcH4MgSvt8=5cu_vNCfc8FSUug@mail.gmail.com

As noted, that did make it into the 11.x branch, but not till 11.19.

            regards, tom lane
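
A rough way to check whether this is what you're hitting (a hypothetical
scratch-table sketch, not your real schema): delete a large batch of
extremal-valued rows, then time a bare EXPLAIN of a range query on that
column. On an unpatched 11.18 the first planning attempt after the delete
is slow and later ones get faster as the index "killed" bits are set; on
11.19+ the probe is bounded.

  -- scratch table, for illustration only
  CREATE TABLE scratch_t (id bigint PRIMARY KEY);
  INSERT INTO scratch_t SELECT generate_series(1, 5000000);
  ANALYZE scratch_t;                                -- build the histogram the planner consults
  DELETE FROM scratch_t WHERE id > 4000000;         -- remove many extremal-valued tuples
  \timing on
  EXPLAIN SELECT * FROM scratch_t WHERE id > 4999999;   -- first plan probes the dead index entries
  EXPLAIN SELECT * FROM scratch_t WHERE id > 4999999;   -- repeat: faster once killed bits are set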



Re: Seeing high query planning time on Azure Postgres Single Server version 11.

From: hassan rafi
Date:
Thanks,

Would upgrading to the latest version of Postgres potentially solve the issue?


Re: Seeing high query planning time on Azure Postgres Single Server version 11.

From: Greg Sabino Mullane
Date:

On Sat, Mar 9, 2024 at 1:57 PM hassan rafi <haassaan.khann@gmail.com> wrote:
Would upgrading to the latest version of Postgres potentially solve the issue?

Potentially, yes, but the only one who can answer that for sure is you. Upgrade to 11.22 and re-run the query. Worst case scenario, it runs the same speed but you gained yourself a bunch of bugfixes and CVE resolutions. If the problem persists on 11.22, spin up a Postgres 16, load the data, and test it there.

Cheers,
Greg

Re: Seeing high query planning time on Azure Postgres Single Server version 11.

From: Ron Johnson
Date:
On Sun, Mar 10, 2024 at 1:34 PM Greg Sabino Mullane <htamfids@gmail.com> wrote:

On Sat, Mar 9, 2024 at 1:57 PM hassan rafi <haassaan.khann@gmail.com> wrote:
Would upgrading to the latest version of Postgres potentially solve the issue?

Potentially, yes, but the only one who can answer that for sure is you. Upgrade to 11.22 and re-run the query. Worst case scenario, it runs the same speed but you gained yourself a bunch of bugfixes and CVE resolutions. If the problem persists on 11.22, spin up a Postgres 16, load the data, and test it there.

We have a similar situation with 9.6.24.  14.10 is noticeably faster (between 10% and 80%, depending on the query).

Re: Seeing high query planning time on Azure Postgres Single Server version 11.

From: hassan rafi
Date:
Thanks all. Will try upgrading the Postgres version.


Re: Seeing high query planning time on Azure Postgres Single Server version 11.

From: hassan rafi
Date:
We have migrated to Postgres 16.1. Planning time is now much lower, but we are still seeing elevated response times, likely due to the very high update activity on our DB.

catalog-v2=> explain (analyze, verbose, settings, buffers) SELECT products_inventory_delta.upc FROM products_inventory_delta WHERE products_inventory_delta.modality = 'pickup' AND products_inventory_delta.store_id = '70600372' ORDER BY upc DESC LIMIT 51 OFFSET 30000;

                                                                                            QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1450.68..1450.73 rows=1 width=14) (actual time=5049.115..5049.116 rows=0 loops=1)
   Output: upc
   Buffers: shared hit=33359 read=6590 dirtied=9379
   ->  Index Only Scan Backward using products_inventory_delta_pkey on public.products_inventory_delta  (cost=0.57..1450.68 rows=28606 width=14) (actual time=1.056..5047.472 rows=28299 loops=1)
         Output: upc
         Index Cond: ((products_inventory_delta.store_id = '70600372'::text) AND (products_inventory_delta.modality = 'pickup'::modality))
         Heap Fetches: 16840
         Buffers: shared hit=33359 read=6590 dirtied=9379
 Settings: effective_cache_size = '192GB', jit = 'off', random_page_cost = '2', work_mem = '2097151kB'
 Query Identifier: 220372279818787780
 Planning Time: 0.062 ms
 Execution Time: 5049.131 ms

Thanks,
Hassan



Re: Seeing high query planning time on Azure Postgres Single Server version 11.

From: David Rowley
Date:
On Fri, 15 Mar 2024 at 08:01, hassan rafi <haassaan.khann@gmail.com> wrote:
> We have migrated to Postgres 16.1. Planning time is now much lower, but we are still seeing elevated response times, likely due to the very high update activity on our DB.
>
>    Buffers: shared hit=33359 read=6590 dirtied=9379
>
>  Execution Time: 5049.131 ms

You could try "SET track_io_timing = 1;" to see how much of that time
is spent in reads.
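
For example (a session-level setting; the extra "I/O Timings" lines in the plan are what to look for):

  SET track_io_timing = on;   -- may require superuser / appropriate SET privilege on a managed service
  EXPLAIN (ANALYZE, VERBOSE, BUFFERS)
  SELECT products_inventory_delta.upc
  FROM products_inventory_delta
  WHERE products_inventory_delta.modality = 'pickup'
    AND products_inventory_delta.store_id = '70600372'
  ORDER BY upc DESC LIMIT 51 OFFSET 30000;
  -- nodes that read pages should now also report a line such as
  --   I/O Timings: shared/local read=...   (milliseconds spent waiting on reads)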

David



Re: Seeing high query planning time on Azure Postgres Single Server version 11.

From: Laurenz Albe
Date:
On Fri, 2024-03-15 at 00:31 +0530, hassan rafi wrote:
> We have migrated to Postgres 16.1. Planning time is now much lower, but we are still seeing elevated response times, likely due to the very high update activity on our DB.
>
> catalog-v2=> explain (analyze, verbose, settings, buffers) SELECT products_inventory_delta.upc FROM products_inventory_delta WHERE products_inventory_delta.modality = 'pickup' AND products_inventory_delta.store_id = '70600372' ORDER BY upc DESC LIMIT 51 OFFSET 30000;
>
>                                                                                             QUERY PLAN
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>  Limit  (cost=1450.68..1450.73 rows=1 width=14) (actual time=5049.115..5049.116 rows=0 loops=1)
>    Output: upc
>    Buffers: shared hit=33359 read=6590 dirtied=9379
>    ->  Index Only Scan Backward using products_inventory_delta_pkey on public.products_inventory_delta  (cost=0.57..1450.68 rows=28606 width=14) (actual time=1.056..5047.472 rows=28299 loops=1)
>          Output: upc
>          Index Cond: ((products_inventory_delta.store_id = '70600372'::text) AND (products_inventory_delta.modality = 'pickup'::modality))
>          Heap Fetches: 16840
>          Buffers: shared hit=33359 read=6590 dirtied=9379
>  Settings: effective_cache_size = '192GB', jit = 'off', random_page_cost = '2', work_mem = '2097151kB'
>  Query Identifier: 220372279818787780
>  Planning Time: 0.062 ms
>  Execution Time: 5049.131 ms

Your problem is probably the "Heap Fetches: 16840".

If you VACUUM the table, the performance should improve.
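
For example, a one-off manual run should clear the current backlog (VERBOSE reports how much was cleaned up):

  VACUUM (VERBOSE, ANALYZE) public.products_inventory_delta;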

The best solution is to make sure that autovacuum processes that table more often:

  ALTER TABLE public.products_inventory_delta SET (autovacuum_vacuum_scale_factor = 0.01);

Yours,
Laurenz Albe