Discussion: Why doesn't Postgres apply limit on groups when retrieving N results per group using WHERE IN + ORDER BY

This is related to a question I asked on dba.stackexchange.com: https://dba.stackexchange.com/questions/335501/why-doesnt-postgres-apply-limit-on-groups-when-retrieving-n-results-per-group

But to reiterate - I have a query like this:

SELECT "orders".* 

FROM "orders" 

WHERE (user_id IN ?, ?, ?) 

ORDER BY "orders"."created_at" LIMIT 50


I have two indexes - `(user_id)` and `(user_id, created_at)`. Only the first index is ever used with this query.
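For reference, the two indexes were created roughly like this (the single-column index name is the one that shows up in the plan below; the composite index name here is only a guess):

CREATE INDEX index_orders_on_user_id ON orders (user_id);
-- hypothetical name; any btree index on (user_id, created_at) would do
CREATE INDEX index_orders_on_user_id_and_created_at ON orders (user_id, created_at);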

If I query for a specific user_id, as expected it uses the composite index. If I use an inner select and trick the query planner into doing a nested loop for each user, it no longer uses the composite index.

I can understand that the query planner would not know ahead of time which users would have those 50 newest orders.

I imagined that it would be clever enough to determine that only 50 results are needed, and that it could use the `(user_id, created_at)` index to get 50 orders for each user. Then sort and filter those few hundred results in memory.
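Written out by hand for the three user_ids from the plan below, the strategy I had in mind would look something like this sketch (not a query I actually run):

(SELECT * FROM orders WHERE user_id = 11   ORDER BY created_at LIMIT 50)
UNION ALL
(SELECT * FROM orders WHERE user_id = 1000 ORDER BY created_at LIMIT 50)
UNION ALL
(SELECT * FROM orders WHERE user_id = 3000 ORDER BY created_at LIMIT 50)
ORDER BY created_at
LIMIT 50;

Each branch could walk the (user_id, created_at) index and stop after 50 rows, so at most 150 rows would ever need sorting.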

Instead what I'm seeing is that it gets all orders for each user using the `user_id` index and then sorts/filters them all in memory.

Here is an example query plan:

Limit  (cost=45271.94..45272.06 rows=50 width=57) (actual time=13.221..13.234 rows=50 loops=1)
  Buffers: shared hit=12321
  ->  Sort  (cost=45271.94..45302.75 rows=12326 width=57) (actual time=13.220..13.226 rows=50 loops=1)
        Sort Key: orders.created_at
        Sort Method: top-N heapsort  Memory: 36kB
        Buffers: shared hit=12321
        ->  Bitmap Heap Scan on orders orders  (cost=180.85..44862.48 rows=12326 width=57) (actual time=3.268..11.485 rows=12300 loops=1)
              Recheck Cond: (orders.user_id = ANY ('{11,1000,3000}'::bigint[]))
              Heap Blocks: exact=12300
              Buffers: shared hit=12321
              ->  Bitmap Index Scan on index_orders_on_user_id  (cost=0.00..177.77 rows=12326 width=0) (actual time=1.257..1.258 rows=12300 loops=1)
                    Index Cond: (orders.user_id = ANY ('{11,1000,3000}'::bigint[]))
                    Buffers: shared hit=21
Planning:
  Buffers: shared hit=6
Execution Time: 13.263 ms

The table I'm querying has roughly 50,000,000 orders, with an even distribution of ~4000 orders per user.

I have found that I can speed this up significantly using CROSS JOIN LATERAL and it will use the composite index, but I'm struggling to understand WHY the CROSS JOIN LATERAL is needed here for it to use the index.

I've tried tweaking costs, disabling bitmap scans, etc., so it seems like this is a functional limitation rather than something to do with cost/statistics.

So my question is twofold: 
- why doesn't Postgres use the composite index, and then retrieve only the minimum necessary number of rows (50 per user) using the query I posted above?

- If it is a functional limitation, is it lack of implementation, or is there a deeper incompatibility with how the query planner works that would prevent it from being able to do this?

Thanks!
On Mon, Feb 5, 2024 at 7:23 AM Sean v <sean@vanmulligen.ca> wrote:
> This is related to a question I asked on dba.stackexchange.com: https://dba.stackexchange.com/questions/335501/why-doesnt-postgres-apply-limit-on-groups-when-retrieving-n-results-per-group
>
> But to reiterate - I have a query like this:
>
> SELECT "orders".*
> FROM "orders"
> WHERE user_id IN (?, ?, ?)
> ORDER BY "orders"."created_at" LIMIT 50
>
> [snip]
>
> So my question is twofold:
> - why doesn't Postgres use the composite index, and then retrieve only the minimum necessary number of rows (50 per user) using the query I posted above?


But your query does not list the first 50 rows per user.  It only returns the first 50 rows of:

SELECT "orders".* 

FROM "orders" 

WHERE (user_id IN ?, ?, ?) 

ORDER BY "orders"."created_at"


Who knows which users are going to be in that list???

On Mon, Feb 5, 2024 at 8:55 AM Ron Johnson <ronljohnsonjr@gmail.com> wrote:

> Who knows which users are going to be in that list???


It doesn't matter.  Worst case scenario, there is only one user in the result, and so all 50 rows are their earliest 50 rows.  The system will thus never need more than the earliest 50 rows per user to answer this question.

David J.

Exactly. I'm really just trying to understand if there's some functional limitation to it being able to do that with how it executes these types of queries, or if it's just an optimization that hasn't been built into the query planner yet.

I know I can get it to do precisely this if I use a CROSS JOIN LATERAL:

SELECT o.*
FROM company_users cu
CROSS JOIN LATERAL (
    SELECT *
    FROM orders
    WHERE orders.user_id = cu.user_id
    ORDER BY created_at DESC
    LIMIT 50
) o
WHERE cu.company_id = ?
ORDER BY o.created_at DESC
LIMIT 50

That makes sense to me: it forces a nested loop that executes the inner query once per user. But doing a nested select like the query below doesn't use the index or limit the results to 50 per user - even though it does a nested loop just like the lateral join does:

SELECT "orders".* 
FROM "orders" 
WHERE user_id IN (SELECT user_id FROM company_users WHERE company_id = ?)
ORDER BY "orders"."created_at" LIMIT 50



On Wed, Feb 7, 2024 at 8:07 AM, Sean v <sean@vanmulligen.ca> wrote:

> Exactly. I'm really just trying to understand if there's some functional limitation to it being able to do that with how it executes these types of queries, or if it's just an optimization that hasn't been built into the query planner yet.
>
> I know I can get it to do precisely this if I use a CROSS JOIN LATERAL:
>
> [snip]

Joins will generally scan the whole tables, leading to long run times. Have you tried preselecting the rows of interest with a "WITH ... SELECT ..." query to reduce the amount of data processed?
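For example, something along these lines (a sketch only - company_id = 42 stands in for the real parameter, and on PostgreSQL 12+ the MATERIALIZED keyword is needed to actually force the CTE to be evaluated up front, since single-reference CTEs are otherwise inlined):

WITH uids AS MATERIALIZED (
    SELECT user_id FROM company_users WHERE company_id = 42
)
SELECT orders.*
FROM orders
WHERE orders.user_id IN (SELECT user_id FROM uids)
ORDER BY orders.created_at
LIMIT 50;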



Cheers
Olivier

On Tue, 6 Feb 2024 at 01:23, Sean v <sean@vanmulligen.ca> wrote:
> SELECT "orders".*
> FROM "orders"
> WHERE user_id IN (?, ?, ?)
> ORDER BY "orders"."created_at" LIMIT 50
>
> I have two indexes - `(user_id)` and `(user_id, created_at)`. Only the first index is ever used with this query.

> I imagined that it would be clever enough to determine that only 50 results are needed, and that it could use the `(user_id, created_at)` index to get 50 orders for each user. Then sort and filter those few hundred results in memory.

It's as simple as the planner currently does not consider fetching 50
rows per user and doing a final sort before applying an overall LIMIT
50.

> I have found that I can speed this up significantly using CROSS JOIN LATERAL and it will use the composite index, but I'm struggling to understand WHY the CROSS JOIN LATERAL is needed here for it to use the index.

I'm afraid that's the best workaround until someone submits a patch to
have the planner consider doing this optimisation automatically.

> I've tried tweaking costs, disabling bitmap scans, etc., so it seems like this is a functional limitation rather than something to do with cost/statistics.

No amount of that will get you the plan you want without the LATERAL JOIN query.

> So my question is twofold:
> - why doesn't Postgres use the composite index, and then retrieve only the minimum necessary number of rows (50 per user) using the query I posted above?
>
> - If it is a functional limitation, is it lack of implementation, or is there a deeper incompatibility with how the query planner works that would prevent it from being able to do this?

Likely it wouldn't be too difficult to make the planner consider this
optimisation. However, for it to be valid, the ORDER BY clause would
have to contain only columns from the relation on the left side of
the IN clause.  I think likely this could be done by having the
planner consider performing a Nested Loop with an outer VALUES scan
and an inner parameterized index scan with a Limit node above it as a
path for scanning the base relation. The tricky part would be
adjusting the planner so it didn't needlessly leave the IN clause in
the WHERE clause when the chosen plan is the Nested Loop with the
VALUES scan.  The current planner data structures are not really
geared up for optional base quals right now.  Something would need to
be done to make that work, and off the top of my head I don't know
what that would be.

David
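The plan shape described above can already be written out by hand with the LATERAL workaround. A sketch, using the three user ids from the example plan, with a comment marking the node each part corresponds to:

SELECT o.*
FROM (VALUES (11), (1000), (3000)) AS v (user_id)  -- outer VALUES scan
CROSS JOIN LATERAL (
    SELECT *
    FROM orders
    WHERE orders.user_id = v.user_id    -- inner parameterized index scan
    ORDER BY created_at                 -- satisfied by (user_id, created_at)
    LIMIT 50                            -- per-user Limit node
) o
ORDER BY o.created_at                   -- final top-N sort across users
LIMIT 50;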