Re: Troubles dumping a very large table.

From: Tom Lane
Subject: Re: Troubles dumping a very large table.
Date:
Msg-id: 14875.1230322730@sss.pgh.pa.us
In response to: Re: Troubles dumping a very large table.  ("Merlin Moncure" <mmoncure@gmail.com>)
Responses: Re: Troubles dumping a very large table.  (Dimitri Fontaine <dfontaine@hi-media.com>)
List: pgsql-performance
"Merlin Moncure" <mmoncure@gmail.com> writes:
> On Fri, Dec 26, 2008 at 12:38 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Yeah, the average expansion of bytea data in COPY format is about 3X :-(
>> So you need to get the max row length down to around 300mb.  I'm curious
>> how you got the data in to start with --- were the values assembled on
>> the server side?

> Wouldn't binary style COPY be more forgiving in this regard?  (if so,
> the OP might have better luck running COPY BINARY)...

Yeah, if he's willing to use COPY BINARY directly.  AFAIR there is not
an option to get pg_dump to use it.  But maybe "pg_dump -s" together
with a manual dump of the table data is the right answer.  It probably
beats shoving some of the rows aside as he's doing now...
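
Something along these lines ought to work (untested sketch; mydb and
bigtable are placeholder names, and it assumes a pg_dump new enough to
have the -a/-T switches):

    # schema for the whole database, no data
    pg_dump -s mydb > schema.sql

    # data for every table except the oversized one
    pg_dump -a -T bigtable mydb > data.sql

    # the big table's data in binary COPY format, via psql's \copy
    psql -d mydb -c "\copy bigtable to 'bigtable.bin' with binary"

To restore, load schema.sql and data.sql first, then

    psql -d mydb -c "\copy bigtable from 'bigtable.bin' with binary"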

            regards, tom lane
