Re: [GENERAL] Improve PostGIS performance with 62 million rows?

From: Kevin Grittner
Subject: Re: [GENERAL] Improve PostGIS performance with 62 million rows?
Date:
Msg-id: CACjxUsNOmjoHrMjJNmMR+Hso2oHRCr1qosSa6xDmdMB9q-V6VA@mail.gmail.com
In reply to: Re: [GENERAL] Improve PostGIS performance with 62 million rows?  (Israel Brewster <israel@ravnalaska.net>)
Responses: Re: [GENERAL] Improve PostGIS performance with 62 million rows?  (Israel Brewster <israel@ravnalaska.net>)
List: pgsql-general
On Mon, Jan 9, 2017 at 11:49 AM, Israel Brewster <israel@ravnalaska.net> wrote:

> [load of new data]

>  Limit  (cost=354643835.82..354643835.83 rows=1 width=9) (actual
> time=225998.319..225998.320 rows=1 loops=1)

> [...] I ran the query again [...]

>  Limit  (cost=354643835.82..354643835.83 rows=1 width=9) (actual
> time=9636.165..9636.166 rows=1 loops=1)

> So from four minutes on the first run to around 9 1/2 seconds on the second.
> Presumably this difference is due to caching?

It is likely to be, at least in part.  Did you run VACUUM on the
data before the first run?  If not, hint bits may be another part
of it.  The first access to each page after the bulk load would
require some extra work for visibility checking and would cause a
page rewrite for the hint bits.
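A minimal sketch of the suggestion above: running VACUUM right after the bulk load sets the hint bits up front (and, combined with ANALYZE, refreshes planner statistics), so the first queries do not pay the cost of visibility checks and page rewrites. The table name `points` is a hypothetical stand-in for the freshly loaded table.

```sql
-- Run once after the bulk load completes, before the first queries.
-- VACUUM visits every page and sets hint bits; ANALYZE updates the
-- statistics the planner uses; VERBOSE reports progress per table.
-- "points" is a hypothetical name for the table that received the load.
VACUUM (ANALYZE, VERBOSE) points;
```

On very large tables this pass takes time itself, but it moves the one-time page-rewrite cost out of the first user-facing queries.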

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

