Re: select count() out of memory

From: Paul Boddie
Subject: Re: select count() out of memory
Date:
Msg-id: 1193443238.580060.312510@19g2000hsx.googlegroups.com
In reply to: Re: select count() out of memory  (tfinneid@student.matnat.uio.no)
List: pgsql-general
On 25 Oct, 17:36, tfinn...@student.matnat.uio.no wrote:
>
> The design is based on access patterns, i.e. one partition represents a
> group of data along a discrete axis, so the partitions are perfect for
> modeling that. Only the last partition will be used in normal cases. The
> previous partitions only need to exist until the operator deletes them,
> which will be sometime between 1 and 6 weeks.

This has been interesting reading because I'm working on a system
which involves a more batch-oriented approach to loading the data,
where I've found partitions to be useful both from a performance
perspective (otherwise my indexes would be inconveniently big for the
total volume of data) and from an administrative perspective (it's
convenient to control the constraints for discrete subsets of my
data). However, if all but the most recent data remains relatively
stable, why not maintain your own statistics for each partition or, as
someone else suggested, use the pg_class statistics?
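For what it's worth, the pg_class approach could look something like
the sketch below: reltuples holds the planner's approximate row count
for each relation, refreshed by VACUUM and ANALYZE rather than by a
full scan. The partition naming pattern here is purely illustrative.

```sql
-- Approximate per-partition row counts from the catalog, assuming
-- partitions follow a naming convention like readings_part_NNN.
-- reltuples is an estimate maintained by VACUUM/ANALYZE, not an
-- exact count, but for stable historical partitions it stays accurate.
SELECT relname, reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname LIKE 'readings_part_%'
  AND relkind = 'r'
ORDER BY relname;
```

For the older, unchanging partitions this avoids the sequential scan
that SELECT count(*) would otherwise trigger.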

I'd just be interested to hear what the best practices are when tables
get big and where the access patterns favour the most recently loaded
data and/or reliably identifiable subsets of the data, as they seem to
both in this case and in my own. The various tuning guides out there
have been very useful, but isn't there a point at which partitioning
becomes inevitable?

Paul

