Re: Planner cost adjustments

From: Daniel Begin
Subject: Re: Planner cost adjustments
Date:
Msg-id: COL129-DS88CA2F8B760E6E3D8AFF394BC0@phx.gbl
Whole thread / Raw message
In response to: Planner cost adjustments  (Daniel Begin <jfd553@hotmail.com>)
Responses: Re: Planner cost adjustments  (Francisco Olarte <folarte@peoplecall.com>)
List: pgsql-general
Hi Bill, you are right about not changing two variables at the same time.

I first increased the statistics target for the problematic columns. It
helped, but did not provide a significant improvement. I then lowered
random_page_cost, which really improved the planner's choices between
Seq and Index scans.
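For reference, the two changes described above can be applied with statements like these (a sketch; the column shown is just one of the columns discussed in this thread, and ALTER SYSTEM assumes PostgreSQL 9.4+):

```sql
-- Raise the per-column statistics target (default 100), then refresh stats
ALTER TABLE changesets ALTER COLUMN user_id SET STATISTICS 200;
ANALYZE changesets;

-- Lower random_page_cost cluster-wide; takes effect after a config reload
ALTER SYSTEM SET random_page_cost = 2;
SELECT pg_reload_conf();
```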

Since I now get the expected Seq/Index scan selection on all but the
larger tables, I am not sure that lowering random_page_cost further is a
good idea. I wonder if it might trigger index scans on smaller tables
where a Seq scan would have been less expensive.
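If that worry turns out to be justified, one alternative is to scope the cost change instead of changing it globally; random_page_cost can be set per tablespace (the tablespace name below is hypothetical, and this assumes the large tables live in their own tablespace):

```sql
-- Apply the lower cost only to the tablespace holding the large tables,
-- leaving the server-wide setting in place for the small ones
ALTER TABLESPACE big_tables_ts SET (random_page_cost = 1.5);
```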

The remaining problem seems related to the statistics of some large
tables. On one hand, I might increase the statistics target for these
tables to 500, or even 1000, and look at the results (though I doubt it
will help). On the other hand, I could simply force enable_seqscan to
OFF for queries dealing with them.
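A sketch of both options (table and column names are taken from the plans quoted below; the n_distinct value is a placeholder that would need to be measured first):

```sql
-- Option 1: raise the statistics target on the join column and re-analyze
ALTER TABLE nodes ALTER COLUMN changeset_id SET STATISTICS 500;
ANALYZE nodes;

-- If row estimates stay far off, the per-column n_distinct estimate can
-- be overridden directly (measure the real value with count(DISTINCT ...))
ALTER TABLE nodes ALTER COLUMN changeset_id SET (n_distinct = 500000);
ANALYZE nodes;

-- Option 2: disable seq scans only for the offending query, not globally
BEGIN;
SET LOCAL enable_seqscan = off;
-- ... run the query against the large table here ...
COMMIT;
```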

Still not sure about the best solution but the problem is definitely
narrower :-)
Daniel


-----Original Message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org] On Behalf Of Bill Moran
Sent: June-11-15 06:51
To: Daniel Begin
Cc: pgsql-general@postgresql.org; 'Tomas Vondra'; 'Melvin Davidson'
Subject: Re: [GENERAL] Planner cost adjustments

On Wed, 10 Jun 2015 17:20:00 -0400
Daniel Begin <jfd553@hotmail.com> wrote:

> Here is a follow-up on adjusting the planner costs calculation
>
> -Statistics target of problematic columns was increased from 100 to 200.
> -Analyze was run on all concerned tables (actually on the whole DB)
> -Random_page_cost was decreased from 4 to 2.

Since you changed two things, there's no way to be sure which change led to
the improvement. You seem to be assuming that changing the statistics
target was what helped. While that _may_ be right, it might also have been
the change to random_page_cost.
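One way to tell the two apart without touching the server config is a session-local revert (a sketch using the query quoted later in this thread):

```sql
-- Put random_page_cost back to its old value for this session only and
-- re-check the plan; if it regresses, that setting was the decisive change
SET random_page_cost = 4;
EXPLAIN ANALYZE SELECT user_id, id AS changeset_id, closed
FROM changesets
WHERE changesets.user_id IN (SELECT id FROM sample.users);
RESET random_page_cost;
```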

> As a result, about 80% of queries are now using what I would consider
> an optimal plan. However, the planner keeps using full table scans for
> large tables...
>
> For instance, I ran a query that should have used an index scan on two
> similar test tables. The planner had selected an index scan for the
> smaller one and a Seq Scan for larger one. Except for their sizes and
> for one field not used in the exercise, the test tables were identical
> and indexed on the same field. The smaller test table had 3.26E+10
> records and the larger one
> 3.78E+11 records.
>
> The query looked like...
> SELECT cs.user_id, cs.changeset_id, nd.id, nd.version FROM
> changesets_selection cs, a_test_table nd WHERE
> nd.changeset_id=cs.changeset_id;
>
> In order to understand why the planner selected the Seq Scan instead
> of an Index Scan on the large table (nodes), I ran an EXPLAIN ANALYSE
> (on warm cache) using enable_seqscan set to OFF/ON.
>
> -- QUERY PLAN on larger table (nodes) with SET enable_seqscan=OFF
> ------------------------------
>  Nested Loop  (cost=10000000000.58..10210480648.03 rows=194204592
> width=40) (actual time=74.088..278008.579 rows=140112 loops=1)
>    ->  Seq Scan on changesets_selection cs
> (cost=10000000000.00..10000000110.44 rows=6644 width=24) (actual
> time=0.015..4.904 rows=6903 loops=1)
>    ->  Index Scan using nodes_changesetidndx on nodes nd
> (cost=0.58..31387.49 rows=29230 width=24) (actual time=12.655..40.259
> rows=20 loops=6903)
>          Index Cond: (changeset_id = cs.changeset_id)
>  Total runtime: 278026.196 ms
> (5 rows)
>
> -Completed after less than 5 minutes of processing
> -I assume that cost=10000000000.00..10000000110.44 for the Seq Scan of
> changesets_selection is an artefact of setting enable_seqscan=OFF.
> -From what I see, the evaluation of row numbers is still way off
> (1400X) for the large table, even though the statistics target was
> doubled to 200.
>
> -- QUERY PLAN on larger table (nodes) with SET enable_seqscan=ON
> ------------------------------
>  Hash Join  (cost=156171782.28..185673195.13 rows=194204592 width=40)
>    Hash Cond: (cs.changeset_id = n.changeset_id)
>    ->  Seq Scan on changesets_selection cs  (cost=0.00..110.44
> rows=6644
> width=24)
>    ->  Hash  (cost=84959952.68..84959952.68 rows=3878771968 width=24)
>          ->  Seq Scan on nodes nd  (cost=0.00..84959952.68
> rows=3878771968
> width=24)
> (5 rows)
>
> -Still running after 2:30 hours of processing! That is why I did not
> provide the actual time and rows (however, actual rows are provided
> on the first query plan)
> -Not surprisingly, the evaluation of rows number is way off again for
> the large table - same stats, same results...
>
> It seems there is a problem with my large table statistics, even
> after increasing the target to 200. Should I increase the statistics
> target to 500, or even to 1000?
> Is there something else I can trigger to get the appropriate plan?
>
> Comments/explanations would be appreciated.
> Daniel
>
>
> -----Original Message-----
> From: pgsql-general-owner@postgresql.org
> [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Daniel Begin
> Sent: June-03-15 06:32
> To: 'Bill Moran'
> Cc: pgsql-general@postgresql.org; 'Tomas Vondra'; 'Melvin Davidson'
> Subject: Re: [GENERAL] Planner cost adjustments
>
> Thanks Bill,
>
> About disk performance, all drives are identical and connected using
> USB3 and, yes, I can tweak values and restart Postgres without any
> hardship!-)
> About seq_page_cost and random_page_cost, I am about to test
> different lower values as you and Tomas propose.
> Raising the statistics target is a good idea. Since most of the data
> follow a power-law distribution, it might affect the statistics.
>
> I will do as suggested and get back to the community for further
> comments. I wished to follow the procedure proposed by PT, just in
> case I eventually had to move on to step 4 (contact PostgreSQL
> developers so they can improve the planner).
>
> And I am still open to other proposals
> Daniel
>
> -----Original Message-----
> From: pgsql-general-owner@postgresql.org
> [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Bill Moran
> Sent: June-02-15 23:31
> To: Daniel Begin
> Cc: pgsql-general@postgresql.org; 'Tomas Vondra'; 'Melvin Davidson'
> Subject: Re: [GENERAL] Planner cost adjustments
>
> On Tue, 2 Jun 2015 14:01:35 -0400
> Daniel Begin <jfd553@hotmail.com> wrote:
>
> > Here is a follow-up on the step-by-step procedure proposed by PT
> >
> > #1 - setup postgresql planner's cost estimate settings for my hardware.
> > ----------------------------------------------------------------------
> > Current parameter values described in section 18.7.2 haven't been
> > changed except for effective_cache_size:
> > seq_page_cost = 1
> > random_page_cost = 4
> > cpu_tuple_cost = 0.01
> > cpu_index_tuple_cost = 0.005
> > cpu_operator_cost = 0.0025
> > effective_cache_size = 10GB
> >
> > I did a bunch of tests on frequently used queries to see how well
> > they perform - using SET enable_seqscan = ON/OFF.
> > As described earlier in this thread, the planner uses a Seq Scan on
> > tables even though an Index Scan is, in this case, 5 times faster!
> > Here are the logs of EXPLAIN ANALYSE on a query...
>
> In an earlier message you mention that the drives are external to the
> computer. I don't remember details, but external drives can be quite
> dicey as far as performance goes, depending on the specific
> technology (USB vs. external SATA vs. NAS, for example) as well as
> some pretty wild variances between different brands of the same
> technology.
> See:
> http://www.databasesoup.com/2012/05/random-page-cost-revisited.html
>
> As a result, I'm suspicious that the default values you're using for
> random_page_cost and seq_page_cost are throwing things off because
> your disks aren't performing like internally connected disks.
>
> Correct me if I'm wrong on any of the assumptions I'm making here, but
> I got the impression that you can tweak values and restart Postgres
> without any hardship. If that's the case, I'm guessing that raising
> seq_page_cost (possibly to 2) will cause Postgres to make better
> decisions about what are good plans. My suggestion is to try some
> different values for those two settings, doing several tests after
> each change, and see if you can find a set of values that starts
> getting you good plans. It appears that the planner thinks that it can
> get better performance by reading from the disk in sequence than by
> picking random pages, and that makes me think that the difference
> between seq_page_cost and random_page_cost is bigger than the actual
> behavior of the drives.
>
> More comments below.
>
> > osmdump=# SET enable_seqscan = ON;
> > osmdump=# EXPLAIN ANALYSE SELECT user_id, id AS changeset_id,closed
> > FROM changesets WHERE changesets.user_id IN(SELECT id FROM
> > sample.users);
> > ----------------------------------------------------------------------
> >  Hash Semi Join  (cost=21.50..819505.27 rows=726722 width=24)
> > (actual
> > time=1574.914..7444.938 rows=338568 loops=1)
> >    Hash Cond: (changesets.user_id = users.id)
> >    ->  Seq Scan on changesets  (cost=0.00..745407.22 rows=25139722
> > width=24) (actual time=0.002..4724.578 rows=25133929 loops=1)
> >    ->  Hash  (cost=14.00..14.00 rows=600 width=8) (actual
> > time=0.165..0.165
> > rows=600 loops=1)
> >          Buckets: 1024  Batches: 1  Memory Usage: 24kB
> >          ->  Seq Scan on users  (cost=0.00..14.00 rows=600 width=8)
> > (actual
> > time=0.003..0.073 rows=600 loops=1)
> >  Total runtime: 7658.715 ms
> > (7 rows)
> >
> > osmdump=# SET enable_seqscan = OFF;
> > osmdump=# EXPLAIN ANALYSE SELECT user_id, id AS changeset_id,closed
> > FROM changesets WHERE changesets.user_id IN(SELECT id FROM
> > sample.users);
> > ----------------------------------------------------------------------
> >  Nested Loop  (cost=10000000015.94..10001072613.45 rows=726722
> > width=24) (actual time=0.268..1490.515 rows=338568 loops=1)
> >    ->  HashAggregate  (cost=10000000015.50..10000000021.50 rows=600
> > width=8) (actual time=0.207..0.531 rows=600 loops=1)
> >          ->  Seq Scan on users  (cost=10000000000.00..10000000014.00
> > rows=600 width=8) (actual time=0.003..0.037 rows=600 loops=1)
> >    ->  Index Scan using changesets_useridndx on changesets
> > (cost=0.44..1775.54 rows=1211 width=24) (actual time=0.038..2.357
> > rows=564
> > loops=600)
> >          Index Cond: (user_id = users.id)
> >  Total runtime: 1715.517 ms
> > (6 rows)
> >
> >
> > #2 - Run ANALYZE DATABASE and look at performance/planning improvement.
> > ----------------------------------------------------------------------
> > I ran ANALYZE DATABASE then rerun the query. It did not produce any
> > significant improvement according to the EXPLAIN ANALYSE below...
> >
> > osmdump=# SET enable_seqscan = ON;
> > osmdump=# EXPLAIN ANALYSE SELECT user_id, id AS changeset_id,closed
> > FROM changesets WHERE changesets.user_id IN(SELECT id FROM
> > sample.users);
> > ----------------------------------------------------------------------
> >  Hash Semi Join  (cost=21.50..819511.42 rows=729133 width=24)
> > (actual
> > time=1538.100..7307.743 rows=338568 loops=1)
> >    Hash Cond: (changesets.user_id = users.id)
> >    ->  Seq Scan on changesets  (cost=0.00..745390.84 rows=25138084
> > width=24) (actual time=0.027..4620.691 rows=25133929 loops=1)
> >    ->  Hash  (cost=14.00..14.00 rows=600 width=8) (actual
> > time=0.300..0.300
> > rows=600 loops=1)
> >          Buckets: 1024  Batches: 1  Memory Usage: 24kB
> >          ->  Seq Scan on users  (cost=0.00..14.00 rows=600 width=8)
> > (actual
> > time=0.022..0.187 rows=600 loops=1)
> >  Total runtime: 7519.254 ms
> > (7 rows)
> >
> > osmdump=# SET enable_seqscan = OFF;
> > osmdump=# EXPLAIN ANALYSE SELECT user_id, id AS changeset_id,closed
> > FROM changesets WHERE changesets.user_id IN(SELECT id FROM
> > sample.users);
> > ----------------------------------------------------------------------
> >  Nested Loop  (cost=10000000015.94..10001090810.49 rows=729133
> > width=24) (actual time=0.268..1466.248 rows=338568 loops=1)
> >    ->  HashAggregate  (cost=10000000015.50..10000000021.50 rows=600
> > width=8) (actual time=0.205..0.530 rows=600 loops=1)
> >          ->  Seq Scan on users  (cost=10000000000.00..10000000014.00
> > rows=600 width=8) (actual time=0.003..0.035 rows=600 loops=1)
> >    ->  Index Scan using changesets_useridndx on changesets
> > (cost=0.44..1805.83 rows=1215 width=24) (actual time=0.036..2.314
> > rows=564
> > loops=600)
> >          Index Cond: (user_id = users.id)
> >  Total runtime: 1677.447 ms
> > (6 rows)
> >
> > #3 - Run EXPLAIN ANALYZE and look for discrepancies between the
> > estimated and actual times
> > ----------------------------------------------------------------------
> > Looking at above results, there are obvious discrepancies between
> > expected/actual rows and time!
> > I dug a bit by exploring/trying to understand the different concepts
> > explained in...
> >
> > http://www.postgresql.org/docs/9.4/static/planner-stats.html
> > http://www.postgresql.org/docs/8.1/static/planner-stats-details.html
> > http://www.postgresql.org/docs/9.2/static/view-pg-stats.html
> >
> > Concerning discrepancies between the actual number of rows and
> > predicted value, I looked at what pg_stats was saying about user_id
> > in table changesets.
> > Here are the values provided to the planner...
> > Average_width=8
> > histogram_bounds: the size of the bins varies between 50 and 150000,
> > which makes sense because if I had divided the column's values into
> > groups of approximately equal population, I would have produced bins
> > between 1 and 100000 (if sorted by frequency)
> > n_distinct = 20686 (there are actually 464858 distinct values for
> > user_id in the table)
> > most_common_vals: values make sense (I checked the frequency counts
> > of a couple of the most common user_id values)
> > correlation = 0.617782 (?)
> > most_common_elems, most_common_elem_freqs and elem_count_histogram
> > were empty
> >
> > At this point, I wonder if the assumptions behind the planner's
> > statistics may produce such problems since the distribution of my
> > data is not uniform but follows a power law (some user_id would
> > return millions of records while others only one).
> > This is the farthest I can go at this point. Maybe someone can
> > provide me with more explanations regarding planner's behavior and
> > ways to go further to make it work properly?
>
> You may also benefit from increasing the statistics targets and
> running ANALYZE again. It certainly looks like some of those stats are
> pretty far off. Raising the statistics target will cause ANALYZE to
> sample more rows (which takes longer but might produce more accurate
> results).
>
> I suggest experimenting with the cost settings first, though.
>
> --
> Bill Moran <wmoran@potentialtech.com>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To
> make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
>


--
Bill Moran <wmoran@potentialtech.com>




