Re: out of memory error with large insert
| From | Sriram Dandapani |
|---|---|
| Subject | Re: out of memory error with large insert |
| Date | |
| Msg-id | 6992E470F12A444BB787B5C937B9D4DF03C48C1A@ca-mail1.cis.local |
| In response to | out of memory error with large insert ("Sriram Dandapani" <sdandapani@counterpane.com>) |
| List | pgsql-admin |
Some more interesting information.
The insert statement is issued via a JDBC callback to the Postgres database (because the application requires partial commits, the equivalent of autonomous transactions).
What I noticed was that when using the JDBC insert, the writer process was very active and consumed a lot of memory.
When I attempted the same insert manually within pgAdmin, the writer process did not appear in top's list of processes.
I wonder if the JDBC callback causes Postgres to allocate memory differently.
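For reference, a minimal sketch of the kind of JDBC insert loop described above, committing every N rows so the pending transaction stays small; the connection settings, table, and column names here are hypothetical, not the application's actual code:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsert {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection settings; adjust host/db/user as needed.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "postgres", "secret")) {
            conn.setAutoCommit(false);  // manage commits ourselves

            // Hypothetical target table.
            String sql = "INSERT INTO events (id, payload) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                final int batchSize = 10_000;  // commit every 10k rows instead of one huge transaction
                for (int i = 0; i < 8_000_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row " + i);
                    ps.addBatch();
                    if ((i + 1) % batchSize == 0) {
                        ps.executeBatch();
                        conn.commit();  // partial commit, roughly like an autonomous transaction
                    }
                }
                ps.executeBatch();
                conn.commit();  // flush the final partial batch
            }
        }
    }
}
```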
-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Tuesday, March 21, 2006 2:38 PM
To: Sriram Dandapani
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] out of memory error with large insert
"Sriram Dandapani" <sdandapani@counterpane.com> writes:
> On a large transaction involving an insert of 8 million rows, after a
> while Postgres complains of an out of memory error.
If there are foreign-key checks involved, try dropping those constraints
and re-creating them afterwards. Probably faster than retail checks
anyway ...
regards, tom lane
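For anyone finding this in the archives, a hedged sketch of what dropping and re-creating a foreign key around the bulk load could look like from JDBC, per the suggestion above; the constraint, table, and column names are invented for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BulkLoadWithoutFk {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection settings and object names.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "postgres", "secret");
             Statement st = conn.createStatement()) {

            // Drop the FK so each inserted row is not checked individually.
            st.execute("ALTER TABLE events DROP CONSTRAINT events_account_fk");

            // ... run the large insert here (see the batched-insert sketch above) ...

            // Re-add the FK afterwards; PostgreSQL validates it in a single pass,
            // which is usually faster than millions of per-row checks.
            st.execute("ALTER TABLE events ADD CONSTRAINT events_account_fk "
                     + "FOREIGN KEY (account_id) REFERENCES accounts(id)");
        }
    }
}
```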