Re: VACUUM FULL out of memory

From: Andrew Sullivan
Subject: Re: VACUUM FULL out of memory
Date:
Msg-id: 20080107155753.GG18581@crankycanuck.ca
In reply to: Re: VACUUM FULL out of memory  (Michael Akinde <michael.akinde@met.no>)
List: pgsql-hackers
On Mon, Jan 07, 2008 at 10:40:23AM +0100, Michael Akinde wrote:
> As suggested, I tested a VACUUM FULL ANALYZE with 128MB shared_buffers 
> and 512 MB reserved for maintenance_work_mem (on a 32 bit machine with 4 
> GB RAM). That ought to leave more than enough space for other processes 
> in the system. Again, the system fails on the VACUUM with the following 
> error (identical to the error we had when maintenance_work_mem was very 
> low).
> 
> INFO:  vacuuming "pg_catalog.pg_largeobject"
> ERROR:  out of memory
> DETAIL:  Failed on request of size 536870912

Something else is using up the memory on the machine, or (I'd bet this is more
likely) the user running the postmaster (postgres?) has a ulimit restricting
how much memory it can allocate.
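One quick way to test the ulimit theory (a sketch; the exact shell and user depend on how the postmaster is started on your system):

```shell
# Inspect the limits in the shell that starts the postmaster
# (run as the postgres user, or via su/sudo to that account).
ulimit -v   # max address space, in kB; "unlimited" is typical
ulimit -d   # max data segment size, in kB
# The failed request above is 536870912 bytes = 512 MB, i.e. exactly
# maintenance_work_mem, so a cap near 524288 kB would explain the error.
```

If either value is a hard cap near or below 512 MB, raising it (or lowering maintenance_work_mem) should make the allocation succeed.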

> It strikes me as somewhat worrying that VACUUM FULL ANALYZE has so much 
> trouble with a large table. Granted - 730 million rows is a good deal - 

No, it's not really that big.  I've never seen a problem like this.  If it
were the 8.3 beta, I'd be worried; but I'm inclined to suggest you look at
the OS settings first, given your setup.

Note that you should almost never use VACUUM FULL unless you've really
messed things up.  I understand from the thread that you're just testing
things out right now.  But VACUUM FULL is not something you should _ever_
need in production, if you've set things up correctly.
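In practice that advice amounts to relying on plain VACUUM (or autovacuum) for routine maintenance; a sketch, with the database name as a placeholder:

```sql
-- Plain VACUUM marks dead rows reusable without rewriting the table
-- or taking the exclusive lock that VACUUM FULL requires.
VACUUM ANALYZE pg_catalog.pg_largeobject;
```

Run regularly (or left to autovacuum), this keeps bloat bounded so the full-table rewrite is never needed.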

A



