Re: Increasing GROUP BY CHAR columns speed

From: Greg Stark
Subject: Re: Increasing GROUP BY CHAR columns speed
Date:
Msg-id: 4136ffa0811300502i2d8741fcn86a3a3efb281dd5c@mail.gmail.com
In reply to: Re: Increasing GROUP BY CHAR columns speed ("Andrus" <kobruleht2@hot.ee>)
List: pgsql-performance
On Sat, Nov 29, 2008 at 6:43 PM, Andrus <kobruleht2@hot.ee> wrote:
>> I'm still not sure why the planner chose to sort rather than hash with
>> oversized work_mem (is there an implied order in the query results I
>> missed?).
>
> Group by contains the decimal column exchrate. Maybe pg is not capable of
> using hash with the numeric datatype.

It is capable of hashing numeric in 8.3. I think sorting was improved dramatically since 8.1 as well.

> I fixed this by adding a cast to ::float
>
> bilkaib.exchrate::float
>
> In this case the query is much faster.
> Hopefully this will not affect the result since numeric(13,8) can be cast to
> float without data loss.

That's not true. Even pretty simple values like 1.1 cannot be
represented precisely in a float. It would display properly, though,
which might be all you're concerned with here. I'm not sure whether
that's true for all values in numeric(13,8), though.
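For instance, a quick psql check (just an illustration, not from your data)
shows the binary rounding:

    -- float8 arithmetic rounds decimal fractions to binary
    SELECT 0.1::float8 + 0.2::float8 = 0.3::float8;    -- false
    -- numeric arithmetic stays exact
    SELECT 0.1::numeric + 0.2::numeric = 0.3::numeric;  -- true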

Do you really need to be grouping on so many columns? If they're
normally all the same, perhaps you can do two queries: one which
fetches the common values without any group by, just a simple
aggregate, and a second which groups by all these columns but only for
the few exceptional records. A rough sketch is below.
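Something along these lines (bilkaib and exchrate are from your query;
the condition and the other column names are just placeholders for
whatever distinguishes the common rows in your data):

    -- 1) the common case: a plain aggregate, no GROUP BY at all
    SELECT sum(summa) AS total
      FROM bilkaib
     WHERE exchrate = 1;

    -- 2) the exceptions: GROUP BY only over the few remaining rows
    SELECT cr, db, exchrate, sum(summa) AS total
      FROM bilkaib
     WHERE exchrate <> 1
     GROUP BY cr, db, exchrate;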

You could avoid collation-aware comparisons on the char() columns by
casting them to bytea first. That might be a bit of a headache, though.
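If a direct ::bytea cast isn't available for char() in your version,
convert_to() (new in 8.3) is one way to get a byte-wise grouping key;
the column name and encoding here are only examples:

    -- group on the raw bytes to skip locale-aware comparison
    SELECT convert_to(doccode, 'UTF8') AS doccode_bytes, sum(summa) AS total
      FROM bilkaib
     GROUP BY convert_to(doccode, 'UTF8');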

--
greg
