Re: Purpose of pg_dump tar archive format?

From: Gavin Roy
Subject: Re: Purpose of pg_dump tar archive format?
Date:
Msg-id: CAFVAjJGcpV1V9tD8r-9mNH64KcYaRQE2D1pTM0Lq8n-GDAWx0g@mail.gmail.com
In reply to: Re: Purpose of pg_dump tar archive format?  (Ron Johnson <ronljohnsonjr@gmail.com>)
List: pgsql-general

On Tue, Jun 4, 2024 at 7:36 PM Ron Johnson <ronljohnsonjr@gmail.com> wrote:
On Tue, Jun 4, 2024 at 3:47 PM Gavin Roy <gavinr@aweber.com> wrote:

On Tue, Jun 4, 2024 at 3:15 PM Ron Johnson <ronljohnsonjr@gmail.com> wrote:

But why tar instead of custom? That was part of my original question.

I've found it pretty useful for programmatically accessing data in a dump for large databases outside of the normal pg_dump/pg_restore workflow. You don't have to seek through one large binary file just to get at the data for a single table.

Interesting.  Please explain, though, since a big tarball _is_ "one large binary file" that you have to sequentially scan.  (I don't know the internal structure of custom format files, and whether they have file pointers to each table.)

Not if you untar it first.
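
For example, something along these lines (a rough Python sketch; the archive name backup.tar and the member name 2309.dat are made up for illustration -- pg_restore -l against the archive shows which dump ID, and therefore which NNNN.dat member, holds which table's data):

import tarfile

with tarfile.open("backup.tar") as tar:
    # Besides the per-table data files, the archive holds toc.dat (the
    # table of contents) and restore.sql.
    print(tar.getnames())
    # Pull one table's COPY data straight out of the archive:
    data = tar.extractfile("2309.dat")  # hypothetical dump ID; check pg_restore -l
    for line in data:
        ...  # each line is one row in COPY text format; the final line is "\."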
 
Is it because you need individual .dat "COPY" files for something other than loading into PG tables (since pg_restore --table=xxxx does that, too), and directory format archives can be inconvenient?

In the past I've used it for data analysis outside of Postgres.
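
To give a concrete idea (a minimal sketch, assuming a table's data has been extracted to 2309.dat as above; COPY text format is tab-separated, uses \N for NULL, and ends with a "\." line -- this ignores COPY's backslash escaping, which is fine for simple data):

with open("2309.dat") as f:
    for line in f:
        line = line.rstrip("\n")
        if line == "\\.":  # COPY end-of-data marker
            break
        cols = [None if v == "\\N" else v for v in line.split("\t")]
        # ...hand cols to whatever analysis tooling you like, no pg_restore needed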
--
Gavin M. Roy
CTO
AWeber
