============================================================================
POSTGRESQL BUG REPORT TEMPLATE
============================================================================
Your name : Brian Hirt
Your email address : bhirt@berkhirt.com
Category : unknown
Severity : non-critical
Summary: pg_dump gets huge memory footprint (200MB and more)
System Configuration
--------------------
Operating System : Linux 2.0.36
PostgreSQL version : 6.4.2
Compiler used : gcc
Hardware:
---------
PII 400Mhz 128MB Mem, 5GB HD
Versions of other tools:
------------------------
--------------------------------------------------------------------------
Problem Description:
--------------------
Not really sure if this is a bug....
pg_dump's memory footprint grows to a huge size and it eventually crashes when you dump a large table (1,000,000+ rows,
250MB) from a remote machine. This does not happen if the dump is performed on the local machine that pg_dump is
installed on.
I have tracked down the source of the problem. pg_dump does a SELECT * FROM large_table, and it seems that the entire
result set is created in memory before pg_dump does anything with it. In fact this seems to be a side effect of the way
postgres works, since the same thing happens when I do a SELECT * FROM large_table in psql.
--------------------------------------------------------------------------
Test Case:
----------
create a table with 1,000,000 rows and pg_dump it from a remote host
--------------------------------------------------------------------------
Solution:
---------
Hmmm.... I have some ideas, but I don't know postgres well enough.
Are there cursors that would allow the data to be retrieved bit by bit instead of in one huge chunk?
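For what it's worth, something along these lines might work in psql -- a sketch, assuming cursors can be used this way inside a transaction block (the cursor name bigcur and the batch size of 1000 are just placeholders):

```sql
-- Cursors only exist inside a transaction block
BEGIN;

-- Declare a cursor over the query instead of running
-- SELECT * directly, so rows need not all be buffered at once
DECLARE bigcur CURSOR FOR SELECT * FROM large_table;

-- Pull the rows in batches; repeat the FETCH until it
-- returns no more rows
FETCH 1000 IN bigcur;
FETCH 1000 IN bigcur;

CLOSE bigcur;
COMMIT;
```

If pg_dump issued its SELECT through a cursor like this, it could write each batch out before fetching the next, keeping memory use roughly constant regardless of table size.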
--------------------------------------------------------------------------