Discussion: BUG #7761: Out of memory when running pg_dump


BUG #7761: Out of memory when running pg_dump

From:
m.vanklink@vision-development.nl
Date:
The following bug has been logged on the website:

Bug reference:      7761
Logged by:          Michel van Klink
Email address:      m.vanklink@vision-development.nl
PostgreSQL version: 8.4.3
Operating system:   Windows
Description:


When running pg_dump it crashes with the following output:

pg_dump: dumping contents of table crm_department
pg_dump: dumping contents of table crm_department_article
pg_dump: dumping contents of table crm_department_extra_info
pg_dump: dumping contents of table crm_department_output
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  out of memory
DETAIL:  Failed on request of size 536870912.
pg_dump: The command was: COPY public.crm_department_output
(crm_department_output_id, document_type, document_format_type,
crm_department_id, document_date, origin_type, origin_id, owner_id,
og_creation_user, log_creation_dt, log_modification_user,
log_modification_dt, document, checkout_dt, checkout_user, checkin_dt, name,
document_sequence_integer, amount_excl_vat, amount_incl_vat, lb_currency_id,
currency_amount_excl_vat, currency_amount_incl_vat, document_reference,
crm_contactperson_id, mailed_dt, faxed_dt, cost_price, printed_dt,
start_date, end_date) TO stdout;
pg_dump: *** aborted because of error

We don't think this is data corruption, because running the same COPY
command from pgAdmin also crashes with the same error. But when adding
WITH BINARY to the command it runs okay. This table contains extremely wide
column values, around 130 megabytes.
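
For reference, a sketch of the two variants we compared (column list omitted
here for brevity; the COPY issued by pg_dump lists all columns explicitly):

    -- Text-format COPY, as issued by pg_dump and from pgAdmin:
    -- fails with "out of memory"
    COPY public.crm_department_output TO stdout;

    -- The same command with WITH BINARY added runs to completion
    COPY public.crm_department_output TO stdout WITH BINARY;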

Kind Regards,
Michel van Klink
Vision Development

Re: BUG #7761: Out of memory when running pg_dump

From:
Tom Lane
Date:
m.vanklink@vision-development.nl writes:
> When running pg_dump it crashes with the following output:

> pg_dump: dumping contents of table crm_department
> pg_dump: dumping contents of table crm_department_article
> pg_dump: dumping contents of table crm_department_extra_info
> pg_dump: dumping contents of table crm_department_output
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR:  out of memory
> DETAIL:  Failed on request of size 536870912.

That doesn't look like a crash; it looks like an out-of-memory error.

> We don't think this is data corruption, because running the same COPY
> command from pgAdmin also crashes with the same error. But when adding
> WITH BINARY to the command it runs okay. This table contains extremely wide
> column values, around 130 megabytes.

There is a limit to the size of column value you can manipulate without
running out of memory, and it is context-dependent, so this behavior is
neither surprising nor a bug.  The reason plain COPY fails while COPY
BINARY doesn't is that the text format requires multiple transient copies
of the data during I/O conversion, output line formatting, etc.  At a couple
of hundred megabytes apiece, that adds up pretty quickly.
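
If you want to confirm which values are that large, a query along these
lines should show the biggest ones (I'm assuming, from the column list in
your COPY command, that the big values are in the "document" column and
that it is a text or bytea column):

    -- Largest values in the document column: pg_column_size() reports the
    -- stored (possibly compressed) size, octet_length() the uncompressed
    -- size that text-format COPY has to build in memory.
    SELECT crm_department_output_id,
           pg_column_size(document) AS stored_bytes,
           octet_length(document)   AS raw_bytes
    FROM public.crm_department_output
    ORDER BY octet_length(document) DESC
    LIMIT 10;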

If you don't want to reconsider your data storage layout, a possible
solution is to move to a 64-bit build of Postgres, so that you're not up
against a 4GB total address space limit.  (I'm not sure that 8.4.x
supports 64-bit Windows, but you could definitely get such a build in
more recent PG versions.)  Pending such a fix, you might be able to make
more space available for COPY workspace by reducing shared_buffers and
other settings that control consumption of shared memory space.
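
For example, something along these lines in postgresql.conf (the numbers are
placeholders, not recommendations; SHOW shared_buffers tells you what the
server is using now, and on 8.4 changing it requires a restart):

    # postgresql.conf -- placeholder values, adjust for your installation
    shared_buffers  = 128MB     # smaller shared memory leaves more 32-bit
                                # address space for COPY's transient copies
    max_connections = 50        # connection slots also consume shared memory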

            regards, tom lane