a back up question

From: Martin Mueller
Subject: a back up question
Date:
Msg-id: 001039C7-15DF-4A44-B0B9-3E100C9D68D3@northwestern.edu
Replies: Re: a back up question  ("David G. Johnston" <david.g.johnston@gmail.com>)
Re: a back up question  (Carl Karsten <carl@personnelware.com>)
Re: a back up question  (Karsten Hilbert <Karsten.Hilbert@gmx.net>)
List: pgsql-general

Are there rules of thumb for deciding when you can dump a whole database and when you’d be better off dumping groups of tables? I have a database with around 100 tables, some of them quite large, and right now the data directory is well over 100GB. My hunch is that I should divide and conquer, but I don’t have a clear sense of what counts as “too big” these days. Nor do I have a clear sense of whether the constraints have to do with overall size, the number of tables, or machine memory (my machine has 32GB of memory).

Is 10GB a good practical limit to keep in mind?
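
For concreteness, here is a rough sketch of what the two approaches look like with pg_dump; the database name (mydb) and the table names are placeholders, not anything from a real setup:

  # whole database as a single custom-format archive (compressed, restorable with pg_restore)
  pg_dump -Fc -f mydb.dump mydb

  # whole database in directory format, dumping several tables in parallel
  pg_dump -Fd -j 4 -f mydb_dump_dir mydb

  # divide and conquer: the biggest tables separately, then everything else
  pg_dump -Fc -t big_table_1 -t big_table_2 -f big_tables.dump mydb
  pg_dump -Fc -T big_table_1 -T big_table_2 -f the_rest.dump mydb

The directory format with -j is worth noting because pg_dump (and later pg_restore) can then work on several tables at once, which tends to matter more than raw size once a database reaches this range.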

Browse pgsql-general by date:

Previous
From: John R Pierce
Date:
Message: Re: Feature idea: Dynamic Data Making
Next
From: "David G. Johnston"
Date:
Message: Re: a back up question