Discussion: out of memory error with large insert
Hi
On a large transaction involving an insert of 8 million rows, Postgres eventually fails with an out-of-memory error:

Failed on request of size 32

That is the only message I get.
shmmax is set to 1 GB.
shared_buffers is set to 50000 (about 400 MB at the default 8 kB block size).
Max memory on the box is 4 GB. Postgres is the only major application running, other than a JVM with a minimal heap.
Other than breaking this transaction into smaller bits (which may not be possible, as this scenario is not encountered often), am I missing something?
Sriram
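
A minimal sketch of the batching approach the poster mentions, using JDBC batches with a commit every few thousand rows so the server never has to track the whole 8-million-row transaction at once. The table, columns, connection details, and batch size are hypothetical placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchedInsert {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "pass")) {
                conn.setAutoCommit(false);  // commit manually, once per batch
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO my_table (col1, col2) VALUES (?, ?)")) {
                    final int BATCH_SIZE = 10000;
                    for (int i = 0; i < 8000000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "row " + i);
                        ps.addBatch();
                        if ((i + 1) % BATCH_SIZE == 0) {
                            ps.executeBatch();
                            conn.commit();  // partial commit keeps server-side state bounded
                        }
                    }
                    ps.executeBatch();  // flush any final partial batch
                    conn.commit();
                }
            }
        }
    }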
"Sriram Dandapani" <sdandapani@counterpane.com> writes:
> On a large transaction involving an insert of 8 million rows, after a
> while Postgres complains of an out of memory error.
If there are foreign-key checks involved, try dropping those constraints
and re-creating them afterwards. Probably faster than retail checks
anyway ...
regards, tom lane
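
If foreign-key checks are the culprit (each inserted row queues an after-row trigger event for the check, and millions of pending events can exhaust memory), the drop-and-recreate suggestion might look roughly like this over JDBC; the constraint, table, and column names are invented for illustration:

    import java.sql.Connection;
    import java.sql.Statement;

    public class FkAroundBulkLoad {
        // Drop the FK before the bulk load and re-create it afterwards,
        // so the check runs once over the whole table instead of per row.
        static void bulkLoadWithoutRowByRowFkChecks(Connection conn) throws Exception {
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement()) {
                st.execute("ALTER TABLE my_table DROP CONSTRAINT my_table_parent_fk");
                // ... run the 8-million-row insert here ...
                st.execute("ALTER TABLE my_table ADD CONSTRAINT my_table_parent_fk "
                         + "FOREIGN KEY (parent_id) REFERENCES parent (id)");
            }
            conn.commit();
        }
    }

Re-adding the constraint validates the whole table in one pass, which is typically much cheaper than 8 million individual per-row checks.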
Some more interesting information:

The insert statement is issued via a JDBC callback to the Postgres database (because the application requires partial commits, the equivalent of autonomous transactions).

What I noticed is that with the JDBC insert, the writer process is very active and consumes a lot of memory. When I attempted the same insert manually within pgAdmin, the writer process did not even show up in top's list of processes.

I wonder if the JDBC callback causes Postgres to allocate memory differently.
"Sriram Dandapani" <sdandapani@counterpane.com> writes:
> On a large transaction involving an insert of 8 million rows, after a
> while Postgres complains of an out of memory error.
If there are foreign-key checks involved, try dropping those constraints
and re-creating them afterwards. Probably faster than retail checks
anyway ...
regards, tom lane