Re: VACUUM FULL out of memory

From: Andrew Sullivan <ajs(at)crankycanuck(dot)ca>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: VACUUM FULL out of memory
Date: 2008-01-07 15:57:53
Lists: pgsql-hackers

On Mon, Jan 07, 2008 at 10:40:23AM +0100, Michael Akinde wrote:
> As suggested, I tested a VACUUM FULL ANALYZE with 128MB shared_buffers
> and 512 MB reserved for maintenance_work_mem (on a 32 bit machine with 4
> GB RAM). That ought to leave more than enough space for other processes
> in the system. Again, the system fails on the VACUUM with the following
> error (identical to the error we had when maintenance_work_mem was very
> low.)
> INFO: vacuuming "pg_catalog.pg_largeobject"
> ERROR: out of memory
> DETAIL: Failed on request of size 536870912
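(Worth noting: the failed request size above is exactly the 512 MB configured for maintenance_work_mem, so it is that single large maintenance allocation that is being refused. A quick check of the arithmetic:)

```shell
# 536870912 bytes is exactly 512 MB -- the maintenance_work_mem value
# from the report above, so the big maintenance allocation itself fails.
echo $((536870912 / 1024 / 1024))   # prints 512
```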

Something else is using up the memory on the machine, or (I'll bet this is more
likely) the user running the postmaster (postgres?) has a ulimit restricting
how much memory it can allocate.
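One way to check is to inspect the limits in a shell started as whatever user runs the postmaster (a sketch; "postgres" as that user is an assumption):

```shell
# Show the per-process limits for the current user. "-v" is the virtual
# memory limit in kB; if it is below ~524288 plus overhead, a single
# 512 MB maintenance_work_mem allocation will fail with "out of memory".
ulimit -v
ulimit -a   # full list of limits for this shell's user
```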

> It strikes me as somewhat worrying that VACUUM FULL ANALYZE has so much
> trouble with a large table. Granted - 730 million rows is a good deal -

No, it's not really that big. I've never seen a problem like this. If it
were the 8.3 beta, I'd be worried; but I'm inclined to suggest you look at
the OS settings first, given your setup.

Note that you should almost never use VACUUM FULL unless you've really
messed things up. I understand from the thread that you're just testing
things out right now. But VACUUM FULL is not something you should _ever_
need in production, if you've set things up correctly.
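With routine (plain) VACUUM running, dead space is reclaimed for reuse without the full table rewrite and exclusive lock that VACUUM FULL takes. A sketch of the relevant postgresql.conf settings (values illustrative, not tuned for any particular workload):

```
autovacuum = on                  # let the autovacuum daemon issue plain VACUUMs
maintenance_work_mem = 128MB     # memory available to each maintenance operation
```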

