Re: How to track down inconsistent performance?

From Ron Snyder
Subject Re: How to track down inconsistent performance?
Date
Msg-id F888C30C3021D411B9DA00B0D0209BE8026E303C@cvo-exchange.cvo.roguewave.com
In reply to How to track down inconsistent performance?  (Ron Snyder <snyder@roguewave.com>)
List pgsql-general

> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]

> >> I would definitely think that the planner would think so --- are you
> >> forcing enable_seqscan off?
>
> > Forgot to answer this-- no, we're not forcing it off.
> > Maybe we need to
> > force it to use a seqscan for this query?
>
> It would be interesting to try that just for comparison purposes.
> I'd like to know the difference in the planner's cost estimate,
> as well as the actual runtime.

Well, assuming that I did this correctly (just turning off indexscan), the
results are below. I ran it twice to see what benefit having some of it in
memory might have, but the results were almost identical:

bash-2.05$ time psql quickview -c "set enable_indexscan=off; explain analyze
select distinct os,compiler,stdlibtype,threadlib from builds where
product='math' and visible=true order by 1 asc;"
NOTICE:  QUERY PLAN:

Unique  (cost=273438.18..273522.81 rows=846 width=50) (actual time=140567.87..140593.49 rows=202 loops=1)
  ->  Sort  (cost=273438.18..273438.18 rows=8463 width=50) (actual time=140567.86..140576.85 rows=5934 loops=1)
        ->  Seq Scan on builds  (cost=0.00..272727.58 rows=8463 width=50) (actual time=56.38..140347.16 rows=5934 loops=1)
Total runtime: 140620.56 msec

EXPLAIN

real    2m20.672s
user    0m0.010s
sys     0m0.000s
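For comparison, the matching run against the index path would just be the same
command with the seqscan disabled instead of the indexscan (only a sketch of
the invocation -- the plan, cost estimate, and runtime would of course have to
come from running it against the live table):

bash-2.05$ time psql quickview -c "set enable_seqscan=off; explain analyze
select distinct os,compiler,stdlibtype,threadlib from builds where
product='math' and visible=true order by 1 asc;"

The gap between the two estimated costs should show how far apart the planner
thinks the index path and the seqscan path are.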

>
> Your results do make it look like the difference between "fast" and
> "slow" cases is just whether the table and index data are in disk buffer
> cache or not.  What shared_buffers setting are you using for Postgres?
> Have you tried experimenting with other values?  What's the total RAM
> in the box?  The ultimate answer might just be "you need to
> buy more RAM" ...

shared_buffers is set to 8096 and the machine has 1 GB of RAM. I don't know
if it matters, but there are about 170 connections open at any one time,
although most of them are idle at any given moment.  About 10 of them are
typically idle inside a transaction (if that matters).  Does the following
information help?:

bash-2.05$ psql quickview -c "select relid,indexrelname from
pg_statio_user_indexes where indexrelname='builds_visible_product';"
  relid   |      indexrelname
----------+------------------------
 25192249 | builds_visible_product
(1 row)

bash-2.05$ find . -type f -name 25192249 -exec ls -ald {} \;
-rw-------    1 pgqv     postgres 1073741824 Apr 28 15:43 ./base/23527426/25192249
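If it's useful, the same sizes can be read from inside Postgres rather than
from the filesystem; something along these lines should work (relpages is only
as current as the last VACUUM/ANALYZE, and the MB figure assumes the default
8 kB block size):

bash-2.05$ psql quickview -c "select relname, relpages, relpages*8/1024 as approx_mb
from pg_class where relname in ('builds', 'builds_visible_product');"

At 8 kB per buffer, shared_buffers=8096 works out to roughly 63 MB of cache,
which is a small fraction of a builds table whose first file segment (if I'm
reading the ls output right) is already at the 1 GB limit.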

We do have more memory on order, but I'd like a more definite method of
predicting what the settings and memory need to be. (Based on the size of
the builds table in the filesystem above, I'm guessing I've got my answer.)

If we can segment the data so that the builds table isn't so big, could that
lead to a more efficient use of memory? What about deleting some indices
that exist but are not being used?
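
As for spotting indices that aren't being used, something like the following
should show which ones never get scanned (the idx_scan counters are only
collected while the stats collector is enabled, and only since it was last
reset):

bash-2.05$ psql quickview -c "select indexrelname, idx_scan
from pg_stat_user_indexes where relname='builds' order by idx_scan;"

Anything that stays at idx_scan = 0 over a representative stretch of time
would be a candidate to drop.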

-ron

>
>             regards, tom lane
>
