Discussion: Substantial different index use between 9.5 and 9.6

Substantial different index use between 9.5 and 9.6

From:
Bill Measday
Date:
Substantial different index use between 9.5 and 9.6
Postgres versions 9.5 and 9.6 running on Windows Server 2012, installed
using EnterpriseDB. Both instances are on the same server, and
postgresql.conf for both is the same except max_locks_per_transaction =
200 on 9.6 (it caused insertion errors otherwise). On 9.6, PostGIS is
2.3.0; I think it's the same on 9.5, but I'm not sure how to tell.
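
(For what it's worth, PostGIS exposes its version through SQL functions, so
one way to check on each instance is:)

     SELECT PostGIS_Full_Version();   -- full version report for PostGIS and its dependencies
     SELECT PostGIS_Lib_Version();    -- just the PostGIS library version string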

The databases on the two instances are the same (as far as I can tell).

I have 2 relevant tables (created using the same script in both instances).
One contains a geometry column (geom geometry(1107464) - a polygon)
with a gist index. This table has around 10 billion records. The disks
these databases are on aren't particularly fast, and indexing took about a week.
The second table has latitude (numeric(10, 8)) and longitude (numeric(11, 8))
columns and about 10 million records.
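
For reference, a rough sketch of the schema as described (the exact column
types and the geometry type modifier are assumptions on my part; the index
name is the one that appears in the plans below):

     -- Sketch only: types marked "assumed" are not confirmed above.
     CREATE TABLE m_elevations (
         elevation double precision,            -- assumed type
         geom      geometry(Polygon, 4326)      -- polygon column, assumed SRID 4326
     );
     CREATE INDEX m_elevations_geom_idx ON m_elevations USING gist (geom);

     CREATE TABLE address_default_geocode (
         address_default_geocode_pid bigint,    -- assumed type
         latitude  numeric(10, 8),
         longitude numeric(11, 8)
     );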

The query I'm running is (part of an insertion into a new table I was
trying to run):
     SELECT address_default_geocode_pid,
            (SELECT elevation FROM m_elevations e
              WHERE ST_Contains(e.geom, ST_SetSRID(ST_MakePoint(longitude, latitude), 4326))),
            ST_SetSRID(ST_MakePoint(latitude, longitude), 4283)
       FROM address_default_geocode;
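
The full statement was shaped roughly like this (the target table and its
column list here are hypothetical; only the SELECT above is verbatim):

     INSERT INTO address_geocode_elevation    -- hypothetical target table
            (address_default_geocode_pid, elevation, geom)
     SELECT address_default_geocode_pid,
            (SELECT elevation FROM m_elevations e
              WHERE ST_Contains(e.geom, ST_SetSRID(ST_MakePoint(longitude, latitude), 4326))),
            ST_SetSRID(ST_MakePoint(latitude, longitude), 4283)
       FROM address_default_geocode;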

Under 9.5 the insertion takes about 11 hours. I gave up on 9.6.

I thought I'd try just one record, so:

SELECT address_default_geocode_pid,
       (SELECT elevation FROM m_elevations e
         WHERE ST_Contains(e.geom, ST_SetSRID(ST_MakePoint(longitude, latitude), 4326))),
       ST_SetSRID(ST_MakePoint(latitude, longitude), 4283)
  FROM address_default_geocode
 WHERE latitude = -33.87718472 AND longitude = 151.27544336;

This returns 3 rows (which is more than the average I'd expect, BTW). On
9.5 it takes a few seconds (3-5); again, I gave up on 9.6.

Looking just at the query shown above, I noticed a difference in the
EXPLAIN output. Here is the plan from 9.5:

                                                                QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on address_default_geocode  (cost=0.00..37760293.94 rows=1 width=25)
   Filter: ((latitude = '-33.87718472'::numeric) AND (longitude = 151.27544336))
   SubPlan 1
     ->  Bitmap Heap Scan on m_elevations e  (cost=282802.21..37401439.43 rows=3512160 width=8)
           Recheck Cond: (geom ~ st_setsrid(st_makepoint((address_default_geocode.longitude)::double precision, (address_default_geocode.latitude)::double precision), 4326))
           Filter: _st_contains(geom, st_setsrid(st_makepoint((address_default_geocode.longitude)::double precision, (address_default_geocode.latitude)::double precision), 4326))
           ->  Bitmap Index Scan on m_elevations_geom_idx  (cost=0.00..281924.17 rows=10536480 width=0)
                 Index Cond: (geom ~ st_setsrid(st_makepoint((address_default_geocode.longitude)::double precision, (address_default_geocode.latitude)::double precision), 4326))
(8 rows)

And here is the plan from 9.6:

                                                                QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on address_default_geocode  (cost=10000000000.00..23297309357.08 rows=1 width=49)
   Filter: ((latitude = '-33.87718472'::numeric) AND (longitude = 151.27544336))
   SubPlan 1
     ->  Seq Scan on m_elevations e  (cost=10000000000.00..13296950520.12 rows=3512159563 width=8)
           Filter: st_contains(geom, st_setsrid(st_makepoint((address_default_geocode.longitude)::double precision, (address_default_geocode.latitude)::double precision), 4326))
(5 rows)

Interestingly (the only change is hard-coding the coordinates in the ST_Contains subquery):

explain SELECT address_default_geocode_pid,
       (SELECT elevation FROM m_elevations e
         WHERE ST_Contains(e.geom, ST_SetSRID(ST_MakePoint(151.27544336, -33.87718472), 4326))),
       ST_SetSRID(ST_MakePoint(latitude, longitude), 4283)
  FROM address_default_geocode
 WHERE latitude = -33.87718472 AND longitude = 151.27544336;

gives the following (in 9.6):

                                                QUERY PLAN
--------------------------------------------------------------------------------------------------------
 Seq Scan on address_default_geocode  (cost=10037428497.36..10037787334.33 rows=1 width=49)
   Filter: ((latitude = '-33.87718472'::numeric) AND (longitude = 151.27544336))
   InitPlan 1 (returns $0)
     ->  Bitmap Heap Scan on m_elevations e  (cost=272194.20..37428497.36 rows=3512160 width=8)
           Recheck Cond: (geom ~ '0101000020E610000036E3976ED0E86240B879C29647F040C0'::geometry)
           Filter: _st_contains(geom, '0101000020E610000036E3976ED0E86240B879C29647F040C0'::geometry)
           ->  Bitmap Index Scan on m_elevations_geom_idx  (cost=0.00..271316.16 rows=10536480 width=0)
                 Index Cond: (geom ~ '0101000020E610000036E3976ED0E86240B879C29647F040C0'::geometry)
(8 rows)

Which looks better.

So for some reason, the 9.6 planner decides not to use the index when the
point comes from the small number of records returned from address_default_geocode.
I have VACUUM ANALYZEd both tables.
Clearly a sequential scan over 10 billion records is pretty slow (to say
the least).

Has anyone seen anything like this/got any thoughts?

I tried "set enable_seqscan=false" but it didn't seem to have any effect.
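
(What I tried was roughly the following, in the same session as the
EXPLAIN; sketch only.)

     SET enable_seqscan = off;    -- discourage sequential scans for this session
     EXPLAIN
     SELECT address_default_geocode_pid,
            (SELECT elevation FROM m_elevations e
              WHERE ST_Contains(e.geom, ST_SetSRID(ST_MakePoint(longitude, latitude), 4326))),
            ST_SetSRID(ST_MakePoint(latitude, longitude), 4283)
       FROM address_default_geocode
      WHERE latitude = -33.87718472 AND longitude = 151.27544336;
     RESET enable_seqscan;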

Regards

Bill



Re: Substantial different index use between 9.5 and 9.6

From:
Tom Lane
Date:
Bill Measday <bill@measday.com> writes:
> Substantial different index use between 9.5 and 9.6

Maybe you missed an ANALYZE after migrating?  The plan difference
seems to be due to a vast difference in rowcount estimate for the
m_elevations condition:

>       ->  Bitmap Heap Scan on m_elevations e  (cost=282802.21..37401439.43 rows=3512160 width=8)

>       ->  Seq Scan on m_elevations e  (cost=10000000000.00..13296950520.12 rows=3512159563 width=8)

If you don't know where that factor-of-1000 came from, maybe take
it up with the postgis folk.  It'd mostly be coming out of their
selectivity estimation routines.
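
For instance, explaining just the inner condition on each server (a sketch,
reusing the coordinates from your single-row test) would show where the
estimates diverge:

    EXPLAIN
    SELECT elevation
      FROM m_elevations e
     WHERE ST_Contains(e.geom, ST_SetSRID(ST_MakePoint(151.27544336, -33.87718472), 4326));
    -- compare the estimated rows for m_elevations between the 9.5 and 9.6 instances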

            regards, tom lane


Re: Substantial different index use between 9.5 and 9.6

From:
Bill Measday
Date:
Thanks Tom.

First, this wasn't a migration but a new db loaded from scratch (if that
matters).

As per the end of the original post "I have vacuum analysed both
tables".  I assume this is what you meant?

My gut feel is that it isn't a PostGIS issue, since the third example I
gave uses the index, but I will take it up with them too.

Rgds


Bill

On 2/12/2016 10:48 AM, Tom Lane wrote:
> Bill Measday <bill@measday.com> writes:
>> Substantial different index use between 9.5 and 9.6
> Maybe you missed an ANALYZE after migrating?  The plan difference
> seems to be due to a vast difference in rowcount estimate for the
> m_elevations condition:
>
>>        ->  Bitmap Heap Scan on m_elevations e  (cost=282802.21..37401439.43 rows=3512160 width=8)
>>        ->  Seq Scan on m_elevations e  (cost=10000000000.00..13296950520.12 rows=3512159563 width=8)
> If you don't know where that factor-of-1000 came from, maybe take
> it up with the postgis folk.  It'd mostly be coming out of their
> selectivity estimation routines.
>
>             regards, tom lane



Re: Substantial different index use between 9.5 and 9.6

From:
Daniel Blanch Bataller
Date:
ANALYZE takes samples at random, so statistics might differ even with the same PostgreSQL version:


For large tables, ANALYZE takes a random sample of the table contents, rather than examining every row. This allows even very large tables to be analyzed in a small amount of time. Note, however, that the statistics are only approximate, and will change slightly each time ANALYZE is run, even if the actual table contents did not change. This might result in small changes in the planner's estimated costs shown by EXPLAIN. In rare situations, this non-determinism will cause the planner's choices of query plans to change after ANALYZE is run. To avoid this, raise the amount of statistics collected by ANALYZE, as described below.

Though, given that round (x 1000) difference, my bet is that you have a different statistics target, whether at the database, table, or column level; see:

The extent of analysis can be controlled by adjusting the default_statistics_target configuration variable, or on a column-by-column basis by setting the per-column statistics target with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS (see ALTER TABLE). The target value sets the maximum number of entries in the most-common-value list and the maximum number of bins in the histogram. The default target value is 100, but this can be adjusted up or down to trade off accuracy of planner estimates against the time taken for ANALYZE and the amount of space occupied in pg_statistic. In particular, setting the statistics target to zero disables collection of statistics for that column. It might be useful to do that for columns that are never used as part of the WHERE, GROUP BY, or ORDER BY clauses of queries, since the planner will have no use for statistics on such columns.
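
For example, something along these lines would compare and, if needed, raise
the target for the geometry column (a sketch; 100 is just the default
mentioned above):

    SHOW default_statistics_target;                                  -- server-wide default
    ALTER TABLE m_elevations ALTER COLUMN geom SET STATISTICS 100;   -- per-column target
    ANALYZE m_elevations;                                            -- re-gather statistics with the new target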

Here is some help on how to see statistics per column:


Check if this is the case.
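
As a sketch, the standard catalogs show both the per-column target and what
ANALYZE has actually collected:

    -- Per-column statistics target (-1 means "use default_statistics_target")
    SELECT attname, attstattarget
      FROM pg_attribute
     WHERE attrelid = 'm_elevations'::regclass AND attnum > 0 AND NOT attisdropped;

    -- Statistics ANALYZE has gathered
    SELECT attname, null_frac, n_distinct, correlation
      FROM pg_stats
     WHERE tablename = 'm_elevations';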






Re: Substantial different index use between 9.5 and 9.6

From:
Bill Measday
Date:

This seems to be a reproducible issue in PostGIS. A ticket has been raised at their end, so I'll wait for a resolution of the root cause.


Thanks for your help/thoughts.


Rgds


Bill
