reading EXPLAIN output

From: David Rysdam
Subject: reading EXPLAIN output
Date:
Msg-id: 43AAE662.5020504@ll.mit.edu
Responses: Re: reading EXPLAIN output  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
merge join (cost=0.00..348650.65 rows=901849 width=12)
  merge cond {blah}
  join filter {blah}
     index scan using {blah index on blah} (cost=0.00..289740.65 rows=11259514 width=8)
     index scan using {blah index on blah} (cost=0.00..17229.93 rows=902085 width=8)

This query takes about 3 minutes to run and I'm trying to figure out
why.  From a tutorial and the docs, I gather that the "..largenum" part
is the number of page reads required, so I understand where 289740 and
17229 come from.  But what about the 348650 page reads for the "merge
join"?  My conjecture is that the joined keys are being stored on disk
(if that's how the internals of PostgreSQL work) and have to be re-read
for the rest of the query.  Is that right?  Does that mean I could speed
this up by giving it more RAM to store them in?
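One detail worth checking when reading these numbers: in PostgreSQL plans, each node's cost includes the cost of its children, so the merge join's 348650 is not 348650 reads on top of the scans. A quick sketch of the arithmetic (the cumulative-cost interpretation is standard EXPLAIN behavior; the "cost attributed to the join itself" figure is just the difference, in abstract cost units rather than literal page reads):

```python
# Plan costs are cumulative: a node's total cost includes its children's.
# Subtracting the two index scans from the merge join's total gives the
# portion attributable to the join step itself.
merge_join_total = 348650.65
index_scan_a = 289740.65
index_scan_b = 17229.93

join_own_cost = merge_join_total - (index_scan_a + index_scan_b)
print(round(join_own_cost, 2))  # 41680.07
```

So only about 41680 of the 348650 cost units belong to the join step; the rest is the two scans feeding it.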

When I do EXPLAIN ANALYZE, the actual values come out like this:

merge join: (actual time=170029.404..170029.404)
index scan: (actual time=27.653..84373.805)
index scan: (actual time=45.681..7026.928)

This seems to confirm that it's the final "merge join" that takes
forever.  Is that because it is writing to and reading from disk?
