Discussion: Maybe a Vacuum bug in 6.3.2
Hi...

Gigantic table woes again... I get

    sc=> vacuum test_detail;
    FATAL 1:  palloc failure: memory exhausted

This is a very simple table too:

    | word_id    | int4 | 4 |
    | url_id     | int4 | 4 |
    | word_count | int2 | 2 |

while vacuuming a rather big table:

    sc=> select count(*) from test_detail;
    Field| Value
    -- RECORD 0 --
    count| 78444613
    (1 row)

There is lots of free space on that drive:

    /dev/sd1s1e  8854584  6547824  1598400    80%    /scdb

The test_detail table is in a few files too...

    -rw-------  1 postgres  postgres  2147483648 May  9 23:28 test_detail
    -rw-------  1 postgres  postgres  2147483648 May  9 23:23 test_detail.1
    -rw-------  1 postgres  postgres   949608448 May  9 23:28 test_detail.2

I am not running out of swap space either... under top the backend just keeps growing:

    492 postgres  85   0 16980K 19076K RUN    1:43 91.67% 91.48% postgres

When it hits about 20 megs, it craps out. Swap space is 0% used, and I am not even convinced this is using all 128 megs of RAM either. Could something like memory fragmentation be an issue?

Does anyone have any ideas other than buying a gig of RAM?
Michael Richards <miker@scifair.acadiau.ca> writes:
> I am not running out of swap space either...
> under top the backend just keeps growing.
>     492 postgres  85   0 16980K 19076K RUN    1:43 91.67% 91.48% postgres
> when it hit about 20 megs, it craps out.

Sounds to me like you are hitting a kernel-imposed limit on process memory size. This should be reconfigurable; check your kernel parameter settings. You'll probably find it's set to 20Mb ... or possibly 16Mb for data space, or some such. Set it to some more realistic fraction of your available swap space.

In the longer term, however, it's disturbing that vacuum evidently needs space proportional to the table size. Can anything be done about that? Someday I might want to have huge tables under Postgres...

			regards, tom lane
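A quick sketch of how one might check the limit Tom describes. The exact mechanism is OS-specific (kernel config, login.conf, or shell resource limits), but on most Unix-like systems the per-process data-segment limit that `palloc`'s underlying `malloc` runs into is visible through the shell's `ulimit` builtin. The 256 MB value below is purely illustrative, not a figure from this thread:

```shell
# Show all per-process resource limits for this shell; look for
# "data seg size" (reported in kilobytes, or "unlimited").
ulimit -a

# Show just the soft data-segment limit, which caps heap growth
# via malloc()/sbrk() -- a ~16-20 MB value here would explain the
# backend dying at about 20 megs.
ulimit -d

# Set the soft data-segment limit for this shell (and any process
# started from it, e.g. a postmaster) to an illustrative 256 MB.
# Units are kilobytes; this cannot exceed the hard limit.
ulimit -S -d 262144
ulimit -d
```

Note that a limit set this way only affects processes launched from the current shell; a system-wide change would go through the kernel configuration or login class settings instead, which is presumably what Tom has in mind.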