Re: Data Warehousing

From	Rob Kirkbride
Subject	Re: Data Warehousing
Date
Msg-id	e0b3cb2b0709030148g2a458acft9a5eea3075fc4ee4@mail.gmail.com
In reply to	Re: Data Warehousing  ("Scott Marlowe" <scott.marlowe@gmail.com>)
Responses	Re: Data Warehousing  ("Andrej Ricnik-Bay" <andrej.groups@gmail.com>)
List	pgsql-general
On 03/09/07, Scott Marlowe <scott.marlowe@gmail.com> wrote:
On 9/3/07, Rob Kirkbride <rob.kirkbride@gmail.com> wrote:
> Hi,
>
> Hi,
>
> I've got a postgres database collecting logged data. This data I have to keep
> for at least 3 years. The data in the first instance is being recorded in a
> postgres cluster. It then needs to be moved to a reports database server for
> analysis. Therefore I'd like a job to dump data from the cluster, say every
> hour, and record it in the reports database. The clustered database
> could then be purged of data, say, more than a week old.
>
> So basically I need a dump/restore that only appends new data to the reports
> server database.
>
> I've googled but can't find anything, can anyone help?

You might find an answer in partitioning your data.  There's a section
in the docs on it.  If you're partitioning by week, you can just dump
the newest couple of partitions, and remove anything older with a
simple delete where date < now() - interval '1 week' or something
like that.


We're using Hibernate to write to the database, so partitioning looks like it would be too much of a re-architecture. In reply to Andrej: we do have a logged_time column in the required tables. Given that, how does that help me with the tools provided?

Might I have to write a custom JDBC application to do the data migration?
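If it does come to a custom JDBC job, one common shape is a watermark-based copy: remember the newest logged_time already transferred, select only rows newer than it from the cluster, and batch-insert them into the reports server. A minimal sketch is below; the table name `log_entry`, its columns, and the connection handling are all assumptions for illustration, not anything from the thread.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

public class IncrementalCopy {

    // Build the incremental query: only rows newer than the watermark.
    // Ordering by the time column lets the caller advance the watermark
    // to the last row copied.
    static String selectNewRowsSql(String table, String timeColumn) {
        return "SELECT * FROM " + table
             + " WHERE " + timeColumn + " > ? ORDER BY " + timeColumn;
    }

    // Sketch of the hourly job body (needs live connections to both
    // databases, so it is not exercised here). Column names are assumed.
    static Timestamp copyNewRows(Connection source, Connection reports,
                                 Timestamp watermark) throws SQLException {
        Timestamp newest = watermark;
        try (PreparedStatement sel = source.prepareStatement(
                 selectNewRowsSql("log_entry", "logged_time"));
             PreparedStatement ins = reports.prepareStatement(
                 "INSERT INTO log_entry (id, logged_time, message) "
               + "VALUES (?, ?, ?)")) {
            sel.setTimestamp(1, watermark);
            try (ResultSet rs = sel.executeQuery()) {
                while (rs.next()) {
                    ins.setLong(1, rs.getLong("id"));
                    ins.setTimestamp(2, rs.getTimestamp("logged_time"));
                    ins.setString(3, rs.getString("message"));
                    ins.addBatch();
                    newest = rs.getTimestamp("logged_time");
                }
            }
            ins.executeBatch();  // one round trip for the whole batch
        }
        return newest;  // persist this as the next run's watermark
    }
}
```

Since Hibernate is already writing the rows, the job can stay entirely outside Hibernate; it only needs read access on the cluster and insert access on the reports server. The same watermark idea also works in plain SQL via contrib/dblink if you'd rather avoid Java altogether.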

Rob


In the pgsql-general list, by send date:

Previous
From: "Albe Laurenz"
Date:
Message: Re: invalid byte sequence for encoding "UTF8": 0xff
Next
From: "Andrej Ricnik-Bay"
Date:
Message: Re: Data Warehousing