Re: VACUUM FULL out of memory

From: Andrew Sullivan <ajs(at)crankycanuck(dot)ca>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: VACUUM FULL out of memory
Date: 2008-01-08 14:48:05
Message-ID: 20080108144805.GB23841@crankycanuck.ca
Lists: pgsql-hackers

On Tue, Jan 08, 2008 at 09:50:07AM +0100, Michael Akinde wrote:
> stack size (kbytes, -s) 8192

Perhaps this is the issue? (I don't know.) Also, this _is_ for the
postgres user, right? That's the relevant one: the one that's actually
running the back end process.
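
A quick way to check, assuming the server runs as the "postgres" user and
you have sudo handy (the exact incantation depends on your setup), is to
ask for that account's limits directly and compare the -s, -d and -v
numbers against what you posted:

    sudo -u postgres sh -c 'ulimit -a'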

Also, are you sure there's nothing else in the way? I don't remember what
OS you're using. On AIX, for instance, there's some _other_ dopey setting
that allows you to control user resource consumption as well, and it means
that ulimit's answers are not the full story. (I learned this through
painful experience, and confess it's one of the many reasons I think AIX
should be pronounced as one word, rather than three letters.)
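
If it does turn out to be AIX, I believe the file to look at is
/etc/security/limits; going from memory, the per-user stanzas look roughly
like this (several of the values are counted in 512-byte blocks, and -1
means unlimited), but check the docs rather than trusting my recollection:

    postgres:
        data = -1
        stack = -1
        rss = -1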

> Andrew Sullivan wrote:
> > Something is using up the memory on the machine, or (I'll bet this is more
> > likely) your user (postgres? Whatever's running the postmaster) has a
> > ulimit on its ability to allocate memory on the machine.
>
> If one looks at the system resources while the VACUUM FULL is going on,
> it's pretty obvious that it's a postgres process going on a memory
> allocation rampage that eats up all the resources.

Of course VACUUM FULL is eating up as much memory as it can: it's moving a
lot of data around. But is it in fact exhausting memory on the machine?
There are only two possibilities: either there's something else that is
preventing that allocation, or else you've run into a case so unusual that
nobody else has ever seen it. The data you're talking about isn't that big:
I've run similar-sized databases on my laptop without pain.
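
One way to settle it is to watch the backend's memory use while the VACUUM
FULL runs. Something along these lines (the PID below is made up; on the
8.x releases the pg_stat_activity columns are procpid and current_query):

    -- in psql: which backend is doing the vacuum?
    SELECT procpid, current_query FROM pg_stat_activity;

    # from a shell: watch that process's memory
    ps -o pid,vsz,rss,cmd -p 12345

If the process tops out well below the physical memory on the box and
still dies, a ulimit (or some other OS-level cap) is almost certainly what
is biting you. A "SHOW maintenance_work_mem;" is also worth a look, since
that's how much the vacuum thinks it is allowed to ask for.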

> Or in this case: if VACUUM FULL is never required (except in very
> special circumstances), it might be a good idea not to have VACUUM
> recommend running it (cf. the VACUUM I ran before New Year on a similar
> size table).

The suggestion you see there, though, is in fact one of the cases where you
might want to run it. That is,

> WARNING: relation "pg_catalog.pg_largeobject" contains more than
> "max_fsm_pages" pages with useful free space
> HINT: Consider using VACUUM FULL on this relation or increasing the
> configuration parameter "max_fsm_pages".

what it is saying is that the free space map can no longer track all the
reusable space in that table, so a regular vacuum can't make all of it
available for reuse. If you want that space back and returned to the
operating system, you have to run VACUUM FULL (or CLUSTER, or else dump
and reload the table; one of those, at any rate). Note that I said that,
if you have things configured _correctly_, you shouldn't have to run
VACUUM FULL except in unusual circumstances. That doesn't mean "never".
The problem here is a historical one: you have a "hangover" from
previously missed maintenance or sub-optimal vacuum scheduling. In those
cases, you may want to perform VACUUM FULL, provided you understand the
potential side effects (such as slower inserts at first, and some possible
index bloat).
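
Concretely, the two remedies the hint is pointing at look something like
this (the max_fsm_pages number is only an example; a database-wide VACUUM
VERBOSE will tell you at the end how many page slots the FSM actually
needs):

    -- one-off: reclaim the free space in the bloated relation
    VACUUM FULL pg_catalog.pg_largeobject;

    # ongoing: in postgresql.conf, let the free space map track it all
    # (this one needs a postmaster restart to take effect)
    max_fsm_pages = 2000000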

A
