Re: strange query plan with LIMIT

From: tv@fuzzy.cz
Subject: Re: strange query plan with LIMIT
Date:
Msg-id: 30c6caeda38eb66a15665d147a5db451.squirrel@sq.gransy.com
In reply to: strange query plan with LIMIT  (anthony.shipman@symstream.com)
Responses: Re: strange query plan with LIMIT  (anthony.shipman@symstream.com)
List: pgsql-performance
> Version: PostgreSQL 8.3.5 (mammoth replicator)
>
> Schema:
>
> CREATE TABLE tdiag (
>     diag_id             integer DEFAULT nextval('diag_id_seq'::text),
>     create_time         timestamp with time zone default now(),  /* time this record was created */
>     diag_time           timestamp with time zone not null,
>     device_id           integer,                /* optional */
>     fleet_id            integer,                /* optional */
>     customer_id         integer,                /* optional */
>     module              character varying,
>     node_kind           smallint,
>     diag_level          smallint,
>     tag                 character varying not null default '',
>     message             character varying not null default '',
>     options             text,
>
>     PRIMARY KEY (diag_id)
> );
>
> create index tdiag_create_time   ON tdiag(create_time);
>
> The number of rows is over 33 million with time stamps over the past two
> weeks.
>
> The create_time order is almost identical to the id order.  What I want
> to find is the first or last entry by id in a given time range. The
> query I am having a problem with is:

Hi,

Why are you reposting this? Pavel Stehule already recommended that you run
ANALYZE on the tdiag table - have you done that? What was the effect?
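
Just in case - it's a single statement, run against the table from your schema
above (a minimal sketch, nothing else assumed):

    -- refresh the planner statistics for the table
    ANALYZE tdiag;
    -- or ANALYZE VERBOSE tdiag; to see how many rows were sampled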

The stats are off - e.g. the bitmap scan says

   ->  Bitmap Heap Scan on tdiag  (cost=25763.48..638085.13 rows=1141019
width=114) (actual time=43.232..322.441 rows=86530 loops=1)

so it expects to get 1141019 rows but actually gets 86530, i.e. about 7.5% of
the expected number. A mismatch that large may well be enough to cause a bad
plan choice and thus the performance issues.

And yet another recommendation - the sort is performed on disk, so give it
more work_mem and it should be much faster (the sort method should change
from an on-disk "external merge" to an in-memory "quicksort"). Try something
like work_mem=20MB and see if it does the trick.
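
You can test it per session before touching postgresql.conf, e.g. (a minimal
sketch):

    -- affects only the current session
    SET work_mem = '20MB';
    -- then re-run EXPLAIN ANALYZE on your query and check whether the
    -- "Sort Method" line switches from "external merge  Disk: ..."
    -- to "quicksort  Memory: ..."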

regards
Tomas

