Re: Dealing with big tables

From: Mindaugas
Subject: Re: Dealing with big tables
Date:
Msg-id: E1Iyn92-00056Z-So@fenris.runbox.com
In response to: Re: Dealing with big tables (Sami Dalouche <skoobi@free.fr>)
List: pgsql-performance
> my answer may be off topic since you might be looking for a
> postgres-only solution... But just in case...

  I'd like to stay with SQL.

> What are you trying to achieve exactly ? Is there any way you could
> re-work your algorithms to avoid selects and use a sequential scan
> (consider your postgres data as one big file) to retrieve each of the
> rows, analyze / compute them (possibly in a distributed manner), and
> join the results at the end ?

  I'm trying to improve performance, that is, to get the answer from the
query I mentioned faster.

  And since the cardinality is high (100,000+ distinct values), I doubt that
sequential scans could reach the speed of an indexed select with any
reasonable number of nodes.
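
  To illustrate the kind of setup I mean, here is a minimal sketch; the
table and column names (big_table, item_id) are made up, since the actual
schema was given in the earlier messages and is not repeated here:

    -- Hypothetical table; the real schema is not shown in this message.
    CREATE TABLE big_table (
        id       bigint,
        item_id  integer,   -- high-cardinality column, 100000+ distinct values
        payload  text
    );

    -- A btree index on the high-cardinality column is what makes the
    -- indexed select fast compared to scanning the whole table.
    CREATE INDEX big_table_item_id_idx ON big_table (item_id);

    -- The query in question is of this shape:
    EXPLAIN SELECT * FROM big_table WHERE item_id = 12345;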

  Mindaugas
