Hi...
Gigantic table woes again... I get
sc=> vacuum test_detail;
FATAL 1: palloc failure: memory exhausted
This is a very simple table too:
| Field      | Type | Length |
| word_id    | int4 |      4 |
| url_id     | int4 |      4 |
| word_count | int2 |      2 |
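If it helps, the DDL is essentially this (a sketch from the \d output
above; I've left out any indexes):

CREATE TABLE test_detail (
    word_id    int4,  -- 4 bytes
    url_id     int4,  -- 4 bytes
    word_count int2   -- 2 bytes
);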
The table itself is rather big, though:
sc=> select count(*) from test_detail;
Field| Value
-- RECORD 0 --
count| 78444613
(1 row)
There is lots of free space on that drive:
Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
/dev/sd1s1e   8854584  6547824  1598400      80%  /scdb
The test_detail table is split across a few files too...
-rw------- 1 postgres postgres 2147483648 May 9 23:28 test_detail
-rw------- 1 postgres postgres 2147483648 May 9 23:23 test_detail.1
-rw------- 1 postgres postgres 949608448 May 9 23:28 test_detail.2
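For what it's worth (assuming those are all the segments), that adds up to:

    2147483648 + 2147483648 + 949608448 = 5244575744 bytes (~4.9 gigs)
    5244575744 bytes / 78444613 rows    ≈ 67 bytes per row

which seems about right for three small columns plus per-tuple overhead.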
I am not running out of swap space either...
Under top, the backend just keeps growing:
492 postgres 85 0 16980K 19076K RUN 1:43 91.67% 91.48% postgres
When it hits about 20 megs, it craps out. Swap space is 0% used, and I am
not convinced this is even using all 128 megs of RAM. Could
something like memory fragmentation be an issue?
Does anyone have any ideas other than buying a gig of RAM?