Re: BUG #2225: Backend crash -- BIG table

From: Tom Lane
Subject: Re: BUG #2225: Backend crash -- BIG table
Date:
Msg-id: 27525.1138988576@sss.pgh.pa.us
In reply to: Re: BUG #2225: Backend crash -- BIG table  (Stephan Szabo <sszabo@megazone.bigpanda.com>)
Responses: Re: BUG #2225: Backend crash -- BIG table  (Patrick Rotsaert <patrick.rotsaert@arrowup.be>)
List: pgsql-bugs
Stephan Szabo <sszabo@megazone.bigpanda.com> writes:
>> pointspp=# explain select trid, count(*) from pptran group by trid
>> having count(*) > 1;
>> QUERY PLAN
>> --------------------------------------------------------------------------
>> HashAggregate  (cost=1311899.28..1311902.78 rows=200 width=18)
>> Filter: (count(*) > 1)
>> ->  Seq Scan on pptran  (cost=0.00..1039731.02 rows=36289102 width=18)
>> (3 rows)

>>> Failing that, how many rows should the above return?

>> That is exactly what I am trying to find out. I can only guess, but it
>> should not be more than a few tens of thousands of rows.

The problem is that the HashAgg will have to maintain a counter for
every distinct value of trid, not just those that occur more than
once.  So if there are a huge number of one-time-only values you could
still blow out memory (and HashAgg doesn't currently know how to spill
to disk).
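
If that is what is happening here, one way to sidestep it (a sketch, using
the standard enable_hashagg planner setting rather than anything specific
to this report) is to push the planner toward a sort-based GroupAggregate,
which can spill to disk:

    SET enable_hashagg = off;   -- disable hash aggregation for this session

    EXPLAIN SELECT trid, count(*)
      FROM pptran
     GROUP BY trid
    HAVING count(*) > 1;

    RESET enable_hashagg;       -- restore the default afterwards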

That "rows=200" estimate looks suspiciously like a default.  Has this
table been ANALYZEd recently?  I'd expect the planner not to choose
HashAgg if it has a more realistic estimate of the number of groups.
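
A quick way to check that, assuming the same table and query as above:

    ANALYZE pptran;             -- refresh planner statistics for the table

    EXPLAIN SELECT trid, count(*)
      FROM pptran
     GROUP BY trid
    HAVING count(*) > 1;
    -- if the aggregate node still shows rows=200, the group-count estimate
    -- is the hard-wired default rather than a real statistic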

            regards, tom lane
