Re: 7.4.6 pg_dump failed
| From | Tom Lane |
|---|---|
| Subject | Re: 7.4.6 pg_dump failed |
| Date | |
| Msg-id | 15489.1106756375@sss.pgh.pa.us |
| In reply to | 7.4.6 pg_dump failed (Marty Scholes <marty@outputservices.com>) |
| List | pgsql-admin |
Marty Scholes <marty@outputservices.com> writes:
> A pg_dump of one table ran for 28:53:29.50 and produced a 30 GB dump
> before it aborted with:
> pg_dump: dumpClasses(): SQL command failed
> pg_dump: Error message from server: out of memory for query result
> pg_dump: The command was: FETCH 100 FROM _pg_dump_cursor
Even though it says "from server", this is actually an out-of-memory
problem inside pg_dump, or more specifically inside libpq.
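For illustration, here is a minimal sketch of the cursor/FETCH pattern behind that error (this is not pg_dump's actual code; the connection string, cursor, and table names are placeholders): PQexec does not return until the entire batch of rows has been received and buffered in client memory, so the failing allocation lives in the client even though the message mentions the server.

```c
/* Minimal sketch of the cursor/FETCH pattern; names are illustrative. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=mydb");   /* placeholder DSN */
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM big_table"));

    /* PQexec buffers all 100 rows in a single PGresult before returning.
     * With rows holding multi-hundred-MB text values, this client-side
     * allocation is where "out of memory for query result" originates. */
    PGresult *res = PQexec(conn, "FETCH 100 FROM c");
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```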
> The table contains a text field that could contain several hundred MB of
> data, although always less than 2GB.
"Could contain"? What's the actual maximum field width, and how often
do very wide values occur? I don't recall the exact space allocation
algorithms inside libpq, but I'm wondering if it could choke on such a
wide row.
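For scale: FETCH 100 means libpq must hold a hundred rows in one result buffer, so if even a fraction of those rows carry values in the several-hundred-MB range, the buffer can run to many gigabytes (100 rows averaging 200 MB would need roughly 20 GB at once), which is more than a 32-bit pg_dump process could allocate in any case.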
You might have better luck if you didn't use -d.
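(In 7.x pg_dump, -d dumps table contents as INSERT commands, and that is the code path that declares _pg_dump_cursor and fetches 100 rows at a time. The default COPY-based dump streams the data row by row instead, so client memory stays bounded; e.g., something like `pg_dump -t big_table mydb > big_table.dump`, where the table and database names are purely illustrative.)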
regards, tom lane