Discussion: How should we design our tables and indexes


How should we design our tables and indexes

From: veem v
Hello,
        We want a response time of <1 sec for our UI search queries. These will be pagination queries, and these reads will run against big transaction tables (~500+ attributes, row size approx ~1 KB) that receive a continuous stream of inserts from the source. These tables will be daily range partitioned on the processing_date column, and each partition will hold approx ~450 million rows once it serves all customer transactions at full capacity.

The customer will be given the capability to search over a max date range of 30 days of transaction data, i.e. ~30 range partitions, and is supposed to get the query response back in <1 sec, as these queries will be submitted from the UI.

1) Is there any way in Postgres to influence the optimizer toward a "first rows" style optimization, so that it does not evaluate all the rows for such UI search queries? Because these will be pagination queries, the user is interested in seeing the top 100 transactions on the first page as soon as possible.

2) For example, below is a sample query from a similar system, but on a different database. We want to understand what the appropriate indexes would be to make this search query run in the quickest possible time.

One index on table1(MID), one index on table1(CID), one index on table2(ACN_NBR)?
OR
Should we create composite indexes combining PR_ID, i.e. (PR_ID, MID), (PR_ID, CID), (PR_ID, ACN_NBR), as that is the most unique attribute here?
 
select count(*) over() as total_record, *
from
    (select .......
     from TABLE1
     left join schema1.TABLE2
            on TABLE2.PR_ID = TABLE1.PR_ID
           and TABLE2.MID = TABLE1.MID
           and TABLE2.processing_date = TABLE1.processing_date
     where TABLE2.processing_date between '2023-04-20' and '2023-05-21' -- partition pruning
       and TABLE2.ACN_NBR = 'XXXX'
       and (TABLE1.MID in (XXXXXX) OR TABLE1.CID in (XXXXXX))
     order by TABLE1.PR_TIME DESC
    ) q  -- Postgres requires an alias on a derived table
limit 100 offset 0;

The data pattern for the columns used in the predicates is as below (Table1 will be the driving table):

count(distinct ACN_NBR) - 25million
count(distinct MID) - 223k
count(distinct CID) - 59k
count(*) from table1 and table2 - ~350 million
PR_ID is a unique key.

3) One use case is that the customer should be able to search on certain attributes and see the transactions "desc by processing_date", i.e. the latest transactions on the first page of the UI. In such a scenario, if the search attribute is not very selective and the customer picks a date range of a month, i.e. over 30 partitions, the query may end up scanning and sorting billions of rows to get the top/most recent ~100 transactions, and it will most likely not respond in <1 sec even if it uses an index. So how should we design indexes to cater to such queries? For example, if the only filter is on the column "TABLE1.CID" in the above query, which has very low selectivity, how do we handle it?
 
4) And finally, the parameter settings we currently have for this system are listed below. I wanted to understand whether they are reasonable, or whether there is any oddity and we should change them to cater to such requirements.

For example, if we set "max_parallel_workers_per_gather" = 4 to speed up the queries, then we will only be able to serve 32/4 = 8 concurrent user requests at any point in time. If we are targeting ~100 concurrent users, is it advisable to change this, or should we test the system with the default, i.e. without setting this parallel parameter?

Settings:
effective_cache_size = '176237472kB'
maintenance_io_concurrency = '1'
max_parallel_workers = '32'
max_parallel_workers_per_gather = '4'
search_path = 'public, public, "$user"'
temp_buffers = '16MB'
work_mem = '2GB'
enable_partitionwise_join = 'on'

Regards
Veem

Re: How should we design our tables and indexes

From: Greg Sabino Mullane
There is a lot to unpack here. I'm going to take a quick pass, but you ought to consider getting some custom expert help.

On Sat, Feb 10, 2024 at 2:39 PM veem v <veema0000@gmail.com> wrote:
... These will be pagination queries, and these reads will run against big transaction tables (~500+ attributes, row size approx ~1 KB) that receive a continuous stream of inserts

Pagination is already a hard problem, and does not even make sense when combined with "a continuous stream of inserts". What should the user see when they click on page 2?

Also, 500 attributes?! What data types are those? If boolean, you may want to look at using bitfields.

1) Is there any way in Postgres to influence the optimizer toward a "first rows" style optimization, so that it does not evaluate all the rows for such UI search queries? Because these will be pagination queries, the user is interested in seeing the top 100 transactions on the first page as soon as possible.

Using LIMIT does allow for this, with certain caveats. The best answer for a lot of these questions is "try it and see" with a simplified version of your queries against some dummy data.
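A minimal sketch of that early-exit behavior, assuming a hypothetical table1 with a pr_time column: when an index matches the ORDER BY, the planner can walk the index and stop after 100 rows instead of scanning and sorting the whole table.

    CREATE INDEX t1_pr_time_idx ON table1 (pr_time DESC);

    -- The planner can satisfy the ORDER BY from the index
    -- and short-circuit after the first 100 rows it finds.
    EXPLAIN
    SELECT *
    FROM   table1
    ORDER  BY pr_time DESC
    LIMIT  100;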
 
One index on table1(MID), one index on table1(CID), one index on table2(ACN_NBR)?

This. One index for each important column. Especially if they vary between queries, as you alluded to later.
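In concrete terms, that would be something along these lines (index names are made up for illustration):

    CREATE INDEX table1_mid_idx     ON table1 (mid);
    CREATE INDEX table1_cid_idx     ON table1 (cid);
    CREATE INDEX table2_acn_nbr_idx ON table2 (acn_nbr);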

select  count(*) over() as total_record, * [large query omitted]

Queries starting with select count(*) are also a red flag, but it's hard to tell from this. Probably best to explain what you think the query is doing using regular words.

3) One use case is that the customer should be able to search on certain attributes and see the transactions "desc by processing_date", i.e. the latest transactions on the first page of the UI. In such a scenario, if the search attribute is not very selective and the customer picks a date range of a month, i.e. over 30 partitions, the query may end up scanning and sorting billions of rows to get the top/most recent ~100 transactions, and it will most likely not respond in <1 sec even if it uses an index. So how should we design indexes to cater to such queries? For example, if the only filter is on the column "TABLE1.CID" in the above query, which has very low selectivity, how do we handle it?

Well, there is only so much a database can do if you are pulling from 500 different attributes. But the whole purpose of an indexed column is to prevent having to scan *all* the rows. If it's a common value, then hopefully there is something else in the where clause more specific that is also indexed. As mentioned above, a LIMIT and ORDER BY, with the appropriate indexes, can return early.
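One pattern worth testing for that low-selectivity CID case (index name hypothetical): a composite index whose trailing column matches the ORDER BY lets Postgres read the rows for a given CID already in time order, so the LIMIT can short-circuit instead of sorting millions of matches.

    CREATE INDEX table1_cid_prtime_idx ON table1 (cid, pr_time DESC);

    -- Rows for one CID come back pre-sorted by pr_time;
    -- with LIMIT 100 the scan can stop early.
    EXPLAIN
    SELECT *
    FROM   table1
    WHERE  cid = XXXXXX
    ORDER  BY pr_time DESC
    LIMIT  100;

(On a table partitioned by processing_date this becomes a merge across the pruned partitions' indexes, so test it against the real partitioning scheme.)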

For example, if we set "max_parallel_workers_per_gather" = 4 to speed up the queries, then we will only be able to serve 32/4 = 8 concurrent user requests at any point in time. If we are targeting ~100 concurrent users, is it advisable to change this, or should we test the system with the default, i.e. without setting this parallel parameter?

Again, this is a TIAS (try it and see), but in general it's not the number of concurrent users, but the number of concurrent queries, which is a factor of how fast your queries are. Feed Postgres as many parallel workers as you can.

tl;dr Less than 1 second response time is *probably* reachable given your parameters, but parts of it are too vague to state for certain.

Cheers,
Greg

Re: How should we design our tables and indexes

From: veem v
Thank you so much, Greg.

Will try to test things as much as possible. I was basically trying to see if there are any obvious things we should take care of before designing a system to satisfy such requirements. As you pointed out a few things, I am trying to answer those below.

On Sun, 11 Feb 2024 at 10:43, Greg Sabino Mullane <htamfids@gmail.com> wrote:
There is a lot to unpack here. I'm going to take a quick pass, but you ought to consider getting some custom expert help.

On Sat, Feb 10, 2024 at 2:39 PM veem v <veema0000@gmail.com> wrote:

Pagination is already a hard problem, and does not even make sense when combined with "a continuous stream of inserts". What should the user see when they click on page 2?
 
When the user clicks to the second page, they will see the next set of rows, i.e. 100 to 200, then 200 to 300, and so on until the result set finishes.
 
Also, 500 attributes?! What data types are those? If boolean, you may want to look at using bitfields.
Using LIMIT does allow for this, with certain caveats. The best answer for a lot of these questions is "try it and see" with a simplified version of your queries against some dummy data.
 

All those attributes are mostly varchar and numeric in nature, so not sure if any options exist there for these?
Additionally, in other databases like Oracle we use hints like /*+ first_rows */ to make the optimizer favor index paths for these types of UI search queries, so I was wondering if anything like that exists over here.
 
One index on table1(MID), one index on table1(CID), one index on table2(ACN_NBR)?

This. One index for each important column. Especially if they vary between queries, as you alluded to later.


If PR_ID is a must in the join criteria between tables table1 and table2 in all the queries, then is it advisable to have composite indexes like (pr_id, mid), (pr_id, cid) etc. rather than indexes on the individual columns?
 
select  count(*) over() as total_record, * [large query omitted]

Queries starting with select count(*) are also a red flag, but it's hard to tell from this. Probably best to explain what you think the query is doing using regular words.


Actually the inner query is doing the main work, i.e. finding the search results based on the input search criteria. The outer query just fetches those results along with count(*) to pass to the API, so it can calculate and show the user how many pages there are in the full result set: basically count(*)/N records per page, and that figure will be displayed on the first page of the UI screen.
 
3) One use case is that the customer should be able to search on certain attributes and see the transactions "desc by processing_date", i.e. the latest transactions on the first page of the UI. In such a scenario, if the search attribute is not very selective and the customer picks a date range of a month, i.e. over 30 partitions, the query may end up scanning and sorting billions of rows to get the top/most recent ~100 transactions, and it will most likely not respond in <1 sec even if it uses an index. So how should we design indexes to cater to such queries? For example, if the only filter is on the column "TABLE1.CID" in the above query, which has very low selectivity, how do we handle it?

Well, there is only so much a database can do if you are pulling from 500 different attributes. But the whole purpose of an indexed column is to prevent having to scan *all* the rows. If it's a common value, then hopefully there is something else in the where clause more specific that is also indexed. As mentioned above, a LIMIT and ORDER BY, with the appropriate indexes, can return early.


So in this case, if there is just the filter on column CID plus the join condition on the PR_ID column between the two tables, no other filter exists to make the result set smaller. Let's say the query really does return millions of rows, because the CID column has very few distinct values, and the results are ordered on the PR_TIME column to show the latest transactions on the first page, so an index on CID alone won't be very effective. Is there any option to index a different set of columns to make this type of query run faster (not sure whether including the ORDER BY column pr_time in the index would make a difference)? Or any other way to store the results already sorted in the table, so we avoid a heavy sort to show the first 100 rows out of the millions satisfying the search criteria?

 
For example, if we set "max_parallel_workers_per_gather" = 4 to speed up the queries, then we will only be able to serve 32/4 = 8 concurrent user requests at any point in time. If we are targeting ~100 concurrent users, is it advisable to change this, or should we test the system with the default, i.e. without setting this parallel parameter?

Again, this is a TIAS (try it and see), but in general it's not the number of concurrent users, but the number of concurrent queries, which is a factor of how fast your queries are. Feed Postgres as many parallel workers as you can.


Sure, will test and see how it behaves when the number of simultaneous queries (here 32/4 = 8 concurrent queries) exceeds the max_parallel_workers limit. Though I expect the queries beyond that limit may get serialized.

Regards
Veem

Re: How should we design our tables and indexes

From: Michał Kłeczek


On 10 Feb 2024, at 20:38, veem v <veema0000@gmail.com> wrote:

Hello,
        We want a response time of <1 sec for our UI search queries. These will be pagination queries, and these reads will run against big transaction tables (~500+ attributes, row size approx ~1 KB) that receive a continuous stream of inserts from the source. These tables will be daily range partitioned on the processing_date column, and each partition will hold approx ~450 million rows once it serves all customer transactions at full capacity.
 

[snip]

You might want to take a look at https://www.postgresql.org/message-id/3FA1E0A9-8393-41F6-88BD-62EEEA1EC21F%40kleczek.org - it works quite well for us, and it may be something you want to test yourself.

Michal

Re: How should we design our tables and indexes

From: Karsten Hilbert
Am Sun, Feb 11, 2024 at 12:53:10PM +0530 schrieb veem v:

> > Pagination is already a hard problem, and does not even make sense
> > when combined with "a continuous stream of inserts". What should the
> > user see when they click on page 2?
>
> When the user clicks to the second page, they will see the next set of
> rows, i.e. 100 to 200, then 200 to 300, and so on until the result set
> finishes.

Given a continuous stream of inserts, "second page" or "next set of
rows" is undefined if you aim for live data, because interleaving
data may have been inserted while the user inspected the first batch
of results.

A "second page" is only defined in terms of "what the original
query returned on the first run".
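One common way to make "next page" well-defined under concurrent
inserts is keyset (a.k.a. seek) pagination: pin each page boundary to
the sort key of the last row already shown, rather than to an OFFSET.
A sketch, with hypothetical column names:

    -- Page 1:
    SELECT * FROM table1
    ORDER BY pr_time DESC, pr_id DESC
    LIMIT 100;

    -- Page 2: resume strictly after the last (pr_time, pr_id) seen on
    -- page 1, so rows inserted in the meantime cannot shift the boundary.
    SELECT * FROM table1
    WHERE (pr_time, pr_id) < (:last_pr_time, :last_pr_id)
    ORDER BY pr_time DESC, pr_id DESC
    LIMIT 100;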

Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B



Re: How should we design our tables and indexes

From: Greg Sabino Mullane
When the user clicks to the second page, they will see the next set of rows, i.e. 100 to 200, then 200 to 300, and so on until the result set finishes.

As others have pointed out, that still makes no sense. You will either fail to show certain rows completely, or have a stale view with additional tracking overhead.
 
All those attributes are mostly varchar and numeric in nature, so not sure if any options exist there for these?

Nothing built in, but there are application-level tricks to combine fields, depending on what they are. Whether that would work for your use case or be an overall performance benefit is impossible to say without hard data. Normally, I'd say don't worry about that but - not to belabor the point - 500 columns is a lot, to the point where normal advice may not apply.

If PR_ID is a must in the join criteria between tables table1 and table2 in all the queries, then is it advisable to have composite indexes like (pr_id, mid), (pr_id, cid) etc. rather than indexes on the individual columns?

No - individual indexes are better, and Postgres has no problem combining them when needed.
 
Actually the inner query is doing the main work, i.e. finding the search results based on the input search criteria. The outer query just fetches those results along with count(*) to pass to the API, so it can calculate and show the user how many pages there are in the full result set: basically count(*)/N records per page, and that figure will be displayed on the first page of the UI screen.

Okay, understood. But back to the pagination discussion, this number is relatively meaningless on a rapidly changing table.
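If an approximate page count is acceptable, a common trick is to take the planner's row estimate for the search query instead of running count(*) over the full result set; it is only an estimate, and only as good as the table's statistics.

    -- The top-level "Plan Rows" field in the JSON output is the
    -- planner's guess at how many rows the search would return.
    EXPLAIN (FORMAT JSON)
    SELECT ....... FROM table1 WHERE cid = XXXXXX;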

Sure, will test and see how it behaves when the number of simultaneous queries (here 32/4 = 8 concurrent queries) exceeds the max_parallel_workers limit. Though I expect the queries beyond that limit may get serialized.

Yes - if there are not enough workers available, it will run with a reduced number of workers, possibly even zero. You can see that when you run an EXPLAIN ANALYZE: it shows the number of workers it wanted and the number it actually got.
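For illustration, the relevant lines of EXPLAIN ANALYZE output look like this (numbers invented):

    Gather  (cost=... rows=... width=...) (actual time=... rows=... loops=1)
      Workers Planned: 4
      Workers Launched: 2
      ...

Here the planner wanted four workers, but only two were free when the query started.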

Cheers,
Greg

Re: How should we design our tables and indexes

From: veem v
Thank You.

On Mon, 12 Feb 2024 at 22:17, Greg Sabino Mullane <htamfids@gmail.com> wrote:
Sure, will test and see how it behaves when the number of simultaneous queries (here 32/4 = 8 concurrent queries) exceeds the max_parallel_workers limit. Though I expect the queries beyond that limit may get serialized.

Yes - if there are not enough workers available, it will run with a reduced number of workers, possibly even zero. You can see that when you run an EXPLAIN ANALYZE: it shows the number of workers it wanted and the number it actually got.


Thank you, got the point.

If these quick queries ran fine within a certain time frame (say for a duration of ~1 hr) but a few executions ran longer - which might be because fewer parallel workers were available once the max limit was exhausted by concurrent executions, OR because the execution plan changed for certain executions of the queries - is there any way to track those historical executions and find the exact root cause of the slow ones confidently?

Regards
Veem

Re: How should we design our tables and indexes

From: Greg Sabino Mullane
> Is there any way to track those historical executions and find the exact root cause of the slow ones confidently?

https://www.postgresql.org/docs/current/auto-explain.html

auto_explain.log_min_duration = '5s' ## or low enough to capture your quickest problem query

Do NOT enable auto_explain.log_analyze.
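A minimal configuration sketch (postgresql.conf; the module must be loaded for its settings to take effect):

    # Load auto_explain at server start (or use session_preload_libraries)
    shared_preload_libraries = 'auto_explain'
    # Log the plan of any statement slower than 5s
    auto_explain.log_min_duration = '5s'
    # auto_explain.log_analyze is deliberately left off, per the advice
    # above: it times every query, adding overhead even to fast ones.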

Cheers,
Greg


Re: How should we design our tables and indexes

From: "Peter J. Holzer"
On 2024-02-12 11:46:35 -0500, Greg Sabino Mullane wrote:
>     If PR_ID is a must in the join criteria between tables table1 and
>     table2 in all the queries, then is it advisable to have composite
>     indexes like (pr_id, mid), (pr_id, cid) etc. rather than indexes
>     on the individual columns?
>
>
> No - individual indexes are better, and Postgres has no problem combining them
> when needed.

I'm a bit unsure if I should mention this as veem probably benefits more
from hard and simple rules than more nuanced answers, but that really
depends on the type of query.

For some kinds of queries a composite index can be dramatically faster.
While Postgres can combine indexes that means scanning both indexes and
combining the result, which may need a lot more disk I/O than scanning a
composite index. Indeed, in the cases where a composite index would be
useful but doesn't exist, PostgreSQL usually just chooses the best of
the single column indexes and ignores the rest.

That said, my rule of thumb is to create just single column indexes at
first and only create composite indexes if they are necessary.
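To make the tradeoff concrete, a sketch with hypothetical names: for a query filtering on both columns, a single composite-index scan can replace the combination of two separate index scans.

    -- Two single-column indexes: the planner may BitmapAnd both scans,
    -- or just pick the more selective one and filter the rest.
    CREATE INDEX t_a_idx ON t (a);
    CREATE INDEX t_b_idx ON t (b);

    -- One composite index: a single scan satisfies both predicates,
    -- often with much less I/O when the combination is selective.
    CREATE INDEX t_a_b_idx ON t (a, b);

    EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2;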

        hp


--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"


Re: How should we design our tables and indexes

From: veem v

On Tue, 13 Feb 2024 at 20:59, Peter J. Holzer <hjp-pgsql@hjp.at> wrote:
For some kinds of queries a composite index can be dramatically faster.
While Postgres can combine indexes that means scanning both indexes and
combining the result, which may need a lot more disk I/O than scanning a
composite index. Indeed, in the cases where a composite index would be
useful but doesn't exist, PostgreSQL usually just chooses the best of
the single column indexes and ignores the rest.

That said, my rule of thumb is to create just single column indexes at
first and only create composite indexes if they are necessary.


Thank you so much. As I understand it, the optimizer uses indexed columns as "access criteria" and the rest of the predicate as "filter criteria" while evaluating the query predicates. And if the majority of rows are eliminated in the filter step, then adding that filter column to the index could give us better performance.

So I was trying to understand, say in the below query with TABLE1 as the driving table (ignoring column selectivity for a moment):

Can the optimizer scan TABLE1 using only the access criteria "TABLE1.MID in (XXXX)" or "TABLE1.CID in (XXXX)", catered by two different indexes, i.e. one index on column "MID" and another on column "CID"?
OR
Can it utilize other columns as access criteria, those used in the join conditions like MID and PR_ID, in which case composite indexes on (CID, PR_ID) and (MID, PR_ID) would provide better selectivity and faster access?

Similarly for TABLE2, a composite index on (ACN_NBR, PR_ID, MID) or just an index on (ACN_NBR)?

select .......
from TABLE1
left join schema1.TABLE2
       on TABLE2.PR_ID = TABLE1.PR_ID
      and TABLE2.MID = TABLE1.MID
      and TABLE2.processing_date = TABLE1.processing_date
where TABLE1.processing_date between '2023-04-20' and '2023-05-21' -- considering processing_date as the partition key here
  and TABLE2.ACN_NBR = 'XXXX'
  and (TABLE1.MID in (XXXX) OR TABLE1.CID in (XXXX))
order by TABLE1.PR_TIME DESC

  

Re: How should we design our tables and indexes

From: Greg Sabino Mullane
On Tue, Feb 13, 2024 at 2:26 PM veem v <veema0000@gmail.com> wrote:
Can the optimizer scan TABLE1 using only the access criteria "TABLE1.MID in (XXXX)" or "TABLE1.CID in (XXXX)", catered by two different indexes, i.e. one index on column "MID" and another on column "CID"?

Yes:

greg=# create table t1(pr_id int generated always as identity primary key, mid int, cid int);
CREATE TABLE
greg=# insert into t1(mid,cid) select random()*12345, random()*12345 from generate_series(1,123456);
INSERT 0 123456
greg=# create index t1_mid on t1(mid);
CREATE INDEX
greg=# create index t1_cid on t1(cid);
CREATE INDEX
greg=# analyze t1;
ANALYZE
greg=#  explain select * from t1 where mid in (1,2,3,4) and cid IN (5,6,7,8);
                                           QUERY PLAN
-------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on t1  (cost=50.03..109.55 rows=49 width=12)
   Recheck Cond: ((cid = ANY ('{5,6,7,8}'::integer[])) AND (mid = ANY ('{1,2,3,4}'::integer[])))
   ->  BitmapAnd  (cost=50.03..50.03 rows=49 width=0)
         ->  Bitmap Index Scan on t1_cid  (cost=0.00..24.88 rows=2469 width=0)
               Index Cond: (cid = ANY ('{5,6,7,8}'::integer[]))
         ->  Bitmap Index Scan on t1_mid  (cost=0.00..24.88 rows=2469 width=0)
               Index Cond: (mid = ANY ('{1,2,3,4}'::integer[]))

Can it utilize other columns as access criteria, those used in the join conditions like MID and PR_ID, in which case composite indexes on (CID, PR_ID) and (MID, PR_ID) would provide better selectivity and faster access?

If you query on the primary key, it's going to use the associated PK index, not a composite one in which the PK is buried. But try creating the sample table t1 above yourself and play around with the various indexes and query combinations.
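For example, continuing with the t1 session above (plan line abbreviated; actual costs will vary):

greg=# create index t1_mid_prid on t1 (mid, pr_id);
CREATE INDEX
greg=# explain select * from t1 where pr_id = 42;
 Index Scan using t1_pkey on t1 ...
   Index Cond: (pr_id = 42)

The lookup on pr_id alone still goes straight to the primary key index, not to t1_mid_prid, where pr_id is buried as the second column.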

Cheers,
Greg