Strategies/Best Practises Handling Large Tables

From: Chitra Creta
Subject: Strategies/Best Practises Handling Large Tables
Date:
Msg-id: CABkVLeOVhMS6oubnyr4HrcHYtgF2WNScw9Vjv8wqWaOANFu7nQ@mail.gmail.com
Responses: Re: Strategies/Best Practises Handling Large Tables  (Lonni J Friedman <netllama@gmail.com>)
Re: Strategies/Best Practises Handling Large Tables  (Ondrej Ivanič <ondrej.ivanic@gmail.com>)
Re: Strategies/Best Practises Handling Large Tables  (Ryan Kelly <rpkelly22@gmail.com>)
Re: Strategies/Best Practises Handling Large Tables  (John R Pierce <pierce@hogranch.com>)
Re: Strategies/Best Practises Handling Large Tables  (Shaun Thomas <sthomas@optionshouse.com>)
Re: Strategies/Best Practises Handling Large Tables  (Chris Travers <chris.travers@gmail.com>)
List: pgsql-general

Hi,

I currently have a table that is growing very quickly, i.e. 7 million records in 5 days. This table acts as a placeholder for statistics, and hence records are only ever inserted, never updated or deleted.

Many queries are run against this table for trend analysis. However, these queries now take a very long time (hours) to execute due to the size of the table.

I have put indexes on this table, with no significant benefit. Some other strategies I have considered:
1. Purge old data
2. Reindex
3. Partition the table (sketched below)
4. Create daily, monthly, and yearly summary tables that contain aggregated data specific to the statistics required (also sketched below)
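
To make option 3 concrete, here is a minimal sketch using declarative range partitioning (PostgreSQL 10 and later; earlier releases would achieve the same with table inheritance and insert triggers). The table and column names (stats, recorded_at, metric, value) are hypothetical stand-ins for the real schema:

    -- Hypothetical statistics table, range-partitioned by month on the
    -- insert timestamp (declarative partitioning, PostgreSQL 10+).
    CREATE TABLE stats (
        id          bigserial,
        recorded_at timestamptz NOT NULL,
        metric      text        NOT NULL,
        value       numeric     NOT NULL
    ) PARTITION BY RANGE (recorded_at);

    CREATE TABLE stats_2012_10 PARTITION OF stats
        FOR VALUES FROM ('2012-10-01') TO ('2012-11-01');
    CREATE TABLE stats_2012_11 PARTITION OF stats
        FOR VALUES FROM ('2012-11-01') TO ('2012-12-01');

    -- Option 1 (purging old data) then becomes a cheap metadata
    -- operation instead of a slow bulk DELETE:
    DROP TABLE stats_2012_10;

And a minimal sketch of option 4, again with hypothetical names: a daily rollup table that trend queries can scan instead of the raw data, refreshed once per day by a scheduled job:

    -- Hypothetical daily summary table; one row per day and metric.
    CREATE TABLE stats_daily (
        day    date    NOT NULL,
        metric text    NOT NULL,
        n      bigint  NOT NULL,
        total  numeric NOT NULL,
        PRIMARY KEY (day, metric)
    );

    -- Aggregate one day's raw rows into the rollup (run nightly,
    -- e.g. from cron); the date literals here are placeholders.
    INSERT INTO stats_daily (day, metric, n, total)
    SELECT recorded_at::date, metric, count(*), sum(value)
    FROM   stats
    WHERE  recorded_at >= date '2012-10-01'
    AND    recorded_at <  date '2012-10-02'
    GROUP  BY 1, 2;

Trend queries that currently take hours over the raw table would then read a handful of rows from stats_daily instead.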

Does anyone know the best practice for handling this situation?

I would appreciate any insight into the pros and cons of the above, or any other strategies that I could put in place.

Thank you in advance.
