Discussion: Best design for performance


Best design for performance

From:
"Riaan Stander"
Date:

Good day

 

At my company we’re busy converting a product from using SQL Server to Postgres. One part of the old design involves filtering data for the rights a user has.

 

The SQL Server table looked like this:

CREATE TABLE [dbo].[usrUserRights]  (
     [UserId]   [dbo].[dm_Id] NOT NULL,
     [SiteId]   [dbo].[dm_Id] NOT NULL,
     [RightId]  [dbo].[dm_Id] NOT NULL,
     CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY CLUSTERED([UserId],[RightId],[SiteId])
);

All data in other tables would have a SiteId. Users would be assigned rights for certain Sites. We would then be able to filter data with a join.

Example:

SELECT Id, Code FROM SomeTable st
JOIN usrUserRights ur ON st.SiteId = ur.SiteId AND ur.UserId = @UserId AND ur.RightId = @TheRightRequired

 

The one design flaw with this is that the table gets extremely large. At our largest client this table contains over 700mil records, and a single user with lots of rights can account for around 7mil of those records.

 

In Postgres I was thinking of going with a design like this

CREATE TABLE security.user_right_site
(
    user_id bigint NOT NULL,
    right_id bigint NOT NULL,
    sites bigint[]
);
create index on security.user_right_site(user_id, right_id);
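
For illustration, the flat one-row-per-site layout can be rolled up into this table with array_agg; the staging table name security.user_right_site_flat below is just a placeholder for wherever the per-row data lives.

-- Roll the flat (user_id, right_id, site_id) rows up into one array per (user, right).
-- security.user_right_site_flat is a placeholder name, not part of the actual schema.
INSERT INTO security.user_right_site (user_id, right_id, sites)
SELECT user_id,
       right_id,
       array_agg(site_id ORDER BY site_id) AS sites
FROM security.user_right_site_flat
GROUP BY user_id, right_id;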

 

This drastically cut down on the number of records in the table. It also makes a massive difference to the storage requirements.

The old design requires 61GB vs 2.6GB for the array version.

 

My one concern is regarding the limitations of the array type in Postgres. Is there a point at which one should not use it? Currently our largest client has 6000+ sites, meaning that the array would contain that many items. What would the upper feasible limit be in Postgres?

 

Regarding queries to filter data against this table in Postgres: any advice on the best method? I've done some testing myself, but just want to know if there are other alternatives.

 

Attempt 1, using ANY (250ms)

select a.id, a.code, a.description from ara.asset a
    join security.user_right_site urs on urs.user_id = 1783 and urs.right_id = 10000 and a.site_id = any(urs.sites) 
where a.is_historical = true;

                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=1000.42..22712.71 rows=4 width=47)
   Join Filter: (a.site_id = ANY (urs.sites))
   ->  Gather  (cost=1000.00..22599.49 rows=4191 width=55)
         Workers Planned: 3
         ->  Parallel Seq Scan on asset a  (cost=0.00..21180.39 rows=1352 width=55)
               Filter: is_historical
   ->  Materialize  (cost=0.42..8.45 rows=1 width=530)
         ->  Index Scan using user_right_site_user_id_right_id_idx on user_right_site urs  (cost=0.42..8.45 rows=1 width=530)
               Index Cond: ((user_id = 1783) AND (right_id = 10000))
(9 rows)

 

Attempt 2, using CTE (65ms)

with sites as
(
    select unnest(sites) AS site_id from security.user_right_site where user_id = 1783 and right_id = 10000
)
select a.id, a.code, a.description from ara.asset a
join sites s on a.site_id = s.site_id
where a.is_historical = true;


                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=1012.19..22628.68 rows=41 width=47)
   Hash Cond: (a.site_id = s.site_id)
   CTE sites
     ->  Index Scan using user_right_site_user_id_right_id_idx on user_right_site  (cost=0.42..8.94 rows=100 width=8)
           Index Cond: ((user_id = 1783) AND (right_id = 10000))
   ->  Gather  (cost=1000.00..22599.49 rows=4191 width=55)
         Workers Planned: 3
         ->  Parallel Seq Scan on asset a  (cost=0.00..21180.39 rows=1352 width=55)
               Filter: is_historical
   ->  Hash  (cost=2.00..2.00 rows=100 width=8)
         ->  CTE Scan on sites s  (cost=0.00..2.00 rows=100 width=8)
(11 rows)

 

Attempt 3, using sub select (65ms)

select a.id, a.code, a.description from
(select unnest(sites) AS site_id from security.user_right_site where user_id = 1783 and right_id = 10000) sites
join ara.asset a on sites.site_id = a.site_id
where a.is_historical = true;

 

                                                           QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1011.19..22209.86 rows=128 width=47)
   Workers Planned: 3
   ->  Hash Join  (cost=11.19..21197.06 rows=41 width=47)
         Hash Cond: (a.site_id = (unnest(user_right_site.sites)))
         ->  Parallel Seq Scan on asset a  (cost=0.00..21180.39 rows=1352 width=55)
               Filter: is_historical
         ->  Hash  (cost=9.94..9.94 rows=100 width=8)
               ->  Index Scan using user_right_site_user_id_right_id_idx on user_right_site  (cost=0.42..8.94 rows=100 width=8)
                     Index Cond: ((user_id = 1783) AND (right_id = 10000))
(9 rows)

 

 

Regards

Riaan Stander

Re: Best design for performance

From:
Claudio Freire
Date:
On Mon, Mar 27, 2017 at 8:43 PM, Riaan Stander <rstander@exa.co.za> wrote:
> In Postgres I was thinking of going with a design like this
>
> CREATE TABLE security.user_right_site
> (
>     user_id bigint NOT NULL,
>     right_id bigint NOT NULL,
>     sites bigint[]
> );
> create index on security.user_right_site(user_id, right_id);
>
>
>
> This drastically cut down on the number of records in the table. It also
> seems to make a massive change to the storage requirements.
>
> The old design requires 61GB vs 2.6GB.

How did you query the table's size? You're probably failing to account
for TOAST tables.

I'd suggest using pg_total_relation_size.
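
For example, to see how much of the total is TOAST and indexes (table name taken from your mail):

-- heap only vs. heap + indexes + TOAST
SELECT pg_size_pretty(pg_relation_size('security.user_right_site'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('security.user_right_site')) AS total_incl_toast;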

> My one concern is regarding the limitations of the array type in Postgres.
> Is there a point at which one should not use it? Currently our largest
> client has 6000+ sites, meaning that the array would contain that many
> items. What would the upper feasible limit be in Postgres?

In that design, rows with a large number of sites would probably end
up TOASTing the sites array.

That will make access to that array a tad slower, but it would
probably be OK for your use case, since you'll only ever read one such
row per query. You'll have to test to be sure.

The limit on that design is about 128M items on sites, IIRC (ie: the
maximum size of values is 1GB, so an array of 128M bigints is above
that limit). You'll probably have issues much earlier than that. For
instance, a filter of the form "site_id = ANY(sites)" with that many
entries would probably be unusably slow.

Personally, I would go for fetching the sites array on the application
side, and using site_id = ANY(ARRAY[...]) if small enough, and a
subselect if the array is too big. That would let the planner be
smarter, since it'll have the literal array list at planning time and
will be able to fetch accurate stats, and choose an optimal plan based
on data skew.
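
Roughly along these lines (the literal site ids are of course placeholders; table and column names are taken from your queries):

-- Small rights set: inline the literal array so the planner sees the actual values
SELECT a.id, a.code, a.description
FROM ara.asset a
WHERE a.is_historical
  AND a.site_id = ANY (ARRAY[101, 102, 103]::bigint[]);

-- Large rights set: fall back to a subselect against the rights table
SELECT a.id, a.code, a.description
FROM ara.asset a
WHERE a.is_historical
  AND a.site_id IN (SELECT unnest(urs.sites)
                    FROM security.user_right_site urs
                    WHERE urs.user_id = 1783 AND urs.right_id = 10000);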


Re: Best design for performance

From:
"Riaan Stander"
Date:
I'm using the first query from here.
https://wiki.postgresql.org/wiki/Disk_Usage
It does seem to include toast data.

The plan is to do the rights checking in the application. The join solution gets used for reports to filter data & client ad-hoc queries.

-----Original Message-----
From: Claudio Freire [mailto:klaussfreire@gmail.com]
Sent: 28 March 2017 02:42 AM
To: Riaan Stander <rstander@exa.co.za>
Cc: postgres performance list <pgsql-performance@postgresql.org>
Subject: Re: [PERFORM] Best design for performance


Re: Best design for performance

From:
Claudio Freire
Date:
> From: Claudio Freire [mailto:klaussfreire@gmail.com]
>
> How did you query the table's size? You're probably failing to account for TOAST tables.
>
> I'd suggest using pg_total_relation_size.
...
On Mon, Mar 27, 2017 at 10:17 PM, Riaan Stander <rstander@exa.co.za> wrote:
> I'm using the first query from here.
> https://wiki.postgresql.org/wiki/Disk_Usage

Please don't top post.

It's a surprisingly big difference. TOAST could be compressing the
array, but I wouldn't expect it to be that compressible. Do you have
any stats about the length of the site array per row?
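
Something along these lines would do, assuming the schema from your earlier mail:

-- distribution of per-row site array lengths
SELECT min(cardinality(sites))           AS min_sites,
       round(avg(cardinality(sites)), 1) AS avg_sites,
       max(cardinality(sites))           AS max_sites
FROM security.user_right_site;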

> The plan is to do the rights checking in the application. The join solution gets used for reports to filter data & client ad-hoc queries.

Especially for reporting queries, you want the planner's stats to be
as accurate as possible, and placing a literal sites array in the
query is, in my experience, the best way to achieve that. But that is
indeed limited to reasonably small arrays, hence the need to have
both variants and adapt the query to each case.

If you can't afford to make that change at the application level, I
would expect the original schema without the array to be superior.
The array hides useful information from the planner, and that
*should* hurt you.

You'll have to test with a reasonably large data set, resembling a
production data set as much as possible.
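
If you need to fabricate such a data set, a minimal sketch with generate_series (row counts and array lengths are made up, adjust to taste):

-- Synthetic (user, right, sites[]) rows: 10000 users x 100 rights,
-- with array lengths cycling up to 6500 site ids.
INSERT INTO security.user_right_site (user_id, right_id, sites)
SELECT u, r,
       ARRAY(SELECT s::bigint FROM generate_series(1, (u % 6500) + 1) AS s)
FROM generate_series(1, 10000) AS u,
     generate_series(1, 100)   AS r;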


Re: Best design for performance

From:
Riaan Stander
Date:
I did some more testing on this. My primary concern was that not all the data was there in the array version, but after some extensive testing it all seems to be there.

I've done some comparisons vs the SQL Server version too.
SQL Server table with over 700mil records:

CREATE TABLE [dbo].[usrUserRights]  (
     [UserId]   [dbo].[dm_Id] NOT NULL,
     [SiteId]   [dbo].[dm_Id] NOT NULL,
     [RightId]  [dbo].[dm_Id] NOT NULL,
     CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY CLUSTERED([UserId],[RightId],[SiteId])
);

Takes 23GB for data and 200MB for indexes.

Postgres table with over 700mil records:

CREATE TABLE security.user_right_site2
(
    user_id bigint NOT NULL,
    right_id bigint NOT NULL,
    site_id bigint NOT NULL
);
create index on security.user_right_site2(user_id, right_id);

Takes 35GB data and 26GB index, for a total of 61GB.

That is quite a large increase over SQL Server storage. Am I missing something? Makes me worry about the rest of the database we still have to convert.

Postgres Array version ends up with only 600k records, due to aggregation:
CREATE TABLE security.user_right_site
(
    user_id bigint NOT NULL,
    right_id bigint NOT NULL,
    sites bigint[]
);
create index on security.user_right_site(user_id, right_id);

Takes 339MB data, 25MB index and 2240MB TOAST.
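
For what it's worth, the same breakdown can be pulled directly with the built-in size functions (TOAST being roughly the table size minus the heap size):

-- heap only, heap + TOAST (and maps), and index sizes for the array table
SELECT pg_size_pretty(pg_relation_size('security.user_right_site')) AS heap_only,
       pg_size_pretty(pg_table_size('security.user_right_site'))    AS heap_plus_toast,
       pg_size_pretty(pg_indexes_size('security.user_right_site'))  AS indexes;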

Regarding the array length for each of these: they currently contain at most 6500 site ids.

Regards
Riaan

Re: Best design for performance

From:
Claudio Freire
Date:
On Tue, Mar 28, 2017 at 9:41 AM, Riaan Stander <rstander@exa.co.za> wrote:
> CREATE TABLE [dbo].[usrUserRights]  (
>      [UserId]   [dbo].[dm_Id] NOT NULL,
>      [SiteId]   [dbo].[dm_Id] NOT NULL,
>      [RightId]  [dbo].[dm_Id] NOT NULL,
>      CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY
> CLUSTERED([UserId],[RightId],[SiteId])
> );
>
> Takes 23GB for data and 200MB for indexes.
>
> Postgres table with over 700mil records:
>
> CREATE TABLE security.user_right_site2
> (
>     user_id bigint NOT NULL,
>     right_id bigint NOT NULL,
>     site_id bigint NOT NULL
> );
> create index on security.user_right_site2(user_id, right_id);
>
> Takes 35GB data and 26GB index, for a total of 61GB.
>
> That is quite a large increase over SQL Server storage. Am I missing
> something? Makes me worry about the rest of the database we still have to
> convert.

Indexes are quite fat in postgres, especially if you index all
columns. To make the difference even bigger, it seems like there is
very hardcore compression going on in SQL Server, for that index to be
only 200MB. Are you sure you measured it correctly?

In any case, yes, indexes will be fatter in postgres. Their
performance shouldn't suffer considerably, though, given enough RAM.


Re: Best design for performance

From:
Riaan Stander
Date:
On 2017-03-28 07:15 PM, Claudio Freire wrote:
> On Tue, Mar 28, 2017 at 9:41 AM, Riaan Stander <rstander@exa.co.za> wrote:
>> CREATE TABLE [dbo].[usrUserRights]  (
>>       [UserId]   [dbo].[dm_Id] NOT NULL,
>>       [SiteId]   [dbo].[dm_Id] NOT NULL,
>>       [RightId]  [dbo].[dm_Id] NOT NULL,
>>       CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY
>> CLUSTERED([UserId],[RightId],[SiteId])
>> );
>>
>> Takes 23GB for data and 200MB for indexes.
>>
>> Postgres table with over 700mil records:
>>
>> CREATE TABLE security.user_right_site2
>> (
>>      user_id bigint NOT NULL,
>>      right_id bigint NOT NULL,
>>      site_id bigint NOT NULL
>> );
>> create index on security.user_right_site2(user_id, right_id);
>>
>> Takes 35GB data and 26GB index, for a total of 61GB.
>>
>> That is quite a large increase over SQL Server storage. Am I missing
>> something? Makes me worry about the rest of the database we still have to
>> convert.
> Indexes are quite fat in postgres, especially if you index all
> columns. To make the difference even bigger, it seems like there is
> very hardcore compression going on in SQL Server, for that index to be
> only 200MB. Are you sure you measured it correctly?
>
> In any case, yes, indexes will be fatter in postgres. Their
> performance shouldn't suffer considerably, though, given enough RAM.
>
>
That 200MB is for another index on that table. Because the table is
clustered on those 3 columns, SQL Server counts the clustered index as
the table storage itself.