Clustering and backup with large objects

From: "Marco Bizzarri"
Date:
Hi all.

I'm working on a document management application (PAFlow). The
application is Zope based and uses PostgreSQL as its (main) storage
system.

PostgreSQL must contain both profile data for documents and the
documents themselves. Documents are stored as large objects in
PostgreSQL.

Up to now we've done backups using pg_dump, and that was fine. However,
a number of installations now have databases whose backups are
increasingly large, so a complete backup (and a restore) is more and
more time-consuming.

At the moment we are on PostgreSQL 7.4.x. We will move to a newer
version, but I don't think we will be able to migrate all customers to
8.1.x soon.

I've read the documentation chapter on backups, including the material
on handling large databases. Is there any strategy for doing large
backups beyond those mentioned in the documentation?
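For context, a sketch of the kind of pg_dump invocation involved (the
database name "paflow" is a placeholder; -Fc selects the custom archive
format, which is required in 7.4 for -b to include large objects):

```shell
# Dump in custom format (-Fc), including large objects (-b).
# "paflow" is a hypothetical database name.
pg_dump -Fc -b paflow -f paflow.dump

# Restore into a freshly created database:
#   createdb paflow_restored
#   pg_restore -d paflow_restored paflow.dump
```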

I would also like to ask about possible clustering solutions for
PostgreSQL. My use case scenario is the following:

1) the application performs comparatively few writes with respect to reads (roughly 1 to 10);
2) the application is multithreaded, and any thread may both read and write;
3) the database contains large objects (as mentioned above);
4) clustering is aimed at improving performance rather than availability.

Thanks for your attention.

Regards
Marco
--
Marco Bizzarri
http://notenotturne.blogspot.com/