Re: pg_dump fundenental question

From: Francisco Olarte
Subject: Re: pg_dump fundenental question
Date:
Msg-id: CA+bJJbxxLSVpXkdOhQ3oNPt8rvVSzXPjw_hU=LzaFB0rsu56Ow@mail.gmail.com
In reply to: Re: pg_dump fundenental question  ("J. Cassidy" <sean@jdcassidy.eu>)
List: pgsql-general
On Tue, Jul 5, 2016 at 7:39 PM, J. Cassidy <sean@jdcassidy.eu> wrote:
> My input (source) DB is  1TB in size, using the options as stated in my
> original email (i.e. no compression it would seem) the output file size is
> "only" 324GB.
> I presume all of the formatting/indices have been omitted. As I said before,
> I can browse the backup file with less/head/cat/tail etc.

As others have said, you are nearly right. It's normal for a backup to
be about a third of the database size, even less on busy or heavily
indexed databases. Several effects come into play:

- Indices in the backup are, approximately, a 'create index' line; the
index data itself is rebuilt on restore.
- Data in the real database is stored in pages, which carry some
overhead and some free space.
- Data in the backup is normally stored in 'copy' format, which is
usually more compact than the binary format used in the database
pages ( but slower and less flexible ).
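
To illustrate the first and third points, in a plain-format dump a
table's rows appear as a single COPY block and each index as one
statement ( an illustrative excerpt, not from the original poster's
dump; the table and index names are made up ):

```
COPY public.measurements (id, reading) FROM stdin;
1	42.5
2	17.0
\.

CREATE INDEX measurements_reading_idx ON public.measurements (reading);
```

However large the index was on disk, the dump carries only that one
CREATE INDEX line.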

Also, all the backup formats carry more or less the same information
as the 'plain' format, and occupy more or less the same space WHEN
UNCOMPRESSED. The main advantage of the custom format ( and, to some
extent, the tar format ) is that it stores every piece of information
separately ( and potentially compressed; it's a lot like a zip file ),
so it can perform selective restores ( you can choose what to restore
and, playing with the -l / -L options, even in what order, which gives
you a lot of flexibility ).
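
A minimal sketch of that -l / -L workflow ( the database name, file
names, and TOC contents below are made up; the dump/restore commands
are shown as comments since they need a live server ):

```shell
# Take the dump in custom format, then write its table of contents:
#   pg_dump -Fc -f mydb.dump mydb
#   pg_restore -l mydb.dump > toc.list
#
# toc.list has one line per object; a leading ';' comments an entry
# out, and reordering lines changes the restore order. Simulate a
# small TOC to show the filtering step:
printf '%s\n' \
  '200; 1259 16402 TABLE public invoices fran' \
  '3001; 0 16402 TABLE DATA public invoices fran' \
  '2201; 1259 16408 INDEX public invoices_date_idx fran' > toc.list

# Keep only the data entries ( skipping the index rebuild, say ):
grep 'TABLE DATA' toc.list > data_only.list
cat data_only.list

# Then restore just the selected items, in the listed order:
#   pg_restore -L data_only.list -d mydb mydb.dump
```

The same list file can be hand-edited instead of filtered, which is
where the "lot of play" comes from.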

Francisco Olarte.

