From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
To: Michael Akinde <michael(dot)akinde(at)met(dot)no>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: VACUUM ANALYZE out of memory
Date: 2007-12-11 11:49:05
Message-ID: 475E7931.5090602@kaltenbrunner.cc
Lists: pgsql-hackers
Michael Akinde wrote:
> Thanks for the rapid responses.
>
> Stefan Kaltenbrunner wrote:
>> this seems simply a problem of setting maintenance_work_mem too high
>> (ie higher than what your OS can support - maybe an
>> ulimit/processlimit is in effect?) . Try reducing maintenance_work_mem
>> to say 128MB and retry.
>> If you promise postgresql that it can get 1GB it will happily try to
>> use it ...
> I set up the system together with one of our Linux sysOps, so I think
> the settings should be OK. Kernel.shmmax is set to 1.2 GB, but I'll get
> him to recheck if there could be any other limits he has forgotten to
> increase.
>
> The way the process was running, it seems to have basically just
> continually allocated memory until (presumably) it broke through the
> slightly less than 1.2 GB shared memory allocation we had provided for
> PostgreSQL (at least the postgres process was still running by the time
> resident size had reached 1.1 GB).
>
> Incidentally, in the first error of the two I posted, the shared memory
> setting was significantly lower (24 MB, I believe). I'll try with 128 MB
> before I leave in the evening, though (assuming the other tests I'm
> running complete by then).
This is most likely not related to your shared memory settings at all, but
to your setting of maintenance_work_mem, which is the amount of memory a
single backend(!) can use for maintenance operations (VACUUM, for example).
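As a minimal sketch of the suggestion above (the table name is hypothetical, and 128MB is just the retry value mentioned earlier in the thread), you can lower the setting for a single session before running the vacuum, without touching postgresql.conf:

```sql
-- Lower the per-backend maintenance memory for this session only,
-- so VACUUM does not try to allocate the full global setting.
SET maintenance_work_mem = '128MB';
VACUUM ANALYZE fact_table;  -- "fact_table" is a placeholder name
```

A session-level SET like this reverts when the connection closes, so it is a safe way to test whether the large global value is what triggers the allocation failure.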
Notice that your first error refers to an allocation of about 500MB, which
your ulimit/kernel process limit simply might not allow for a single
process.
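To check for such a limit, a quick sketch (run as the same user the postgres backend runs as; the PID below is hypothetical):

```shell
# Show all per-process resource limits for the current shell/user;
# look for "max memory size" or "virtual memory" caps below ~500MB.
ulimit -a

# For a specific running backend, its effective limits can be read
# from /proc (the PID 12345 is a placeholder):
# cat /proc/12345/limits
```

If the virtual memory limit is lower than maintenance_work_mem, the backend's malloc will fail exactly the way the reported error shows.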
And for very large tables VACUUM FULL is generally not a good idea at
all - either look into regular normal vacuum scheduling, or if you need
to recover from a bloated database, use a command that forces a rewrite
of the table (like CLUSTER), which will be heaps faster but also requires
about twice the amount of disk space.
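The CLUSTER alternative might look like the following sketch (table and index names are hypothetical; CLUSTER needs an existing index to order the rewrite by, and roughly the table's size again in free disk space while it runs):

```sql
-- Rewrite the bloated table by clustering it on an index, reclaiming
-- dead space in one pass instead of VACUUM FULL's tuple shuffling.
CLUSTER fact_table USING fact_table_pkey;

-- The rewrite invalidates planner statistics, so re-analyze afterwards.
ANALYZE fact_table;
```

Note that CLUSTER takes an exclusive lock on the table for the duration of the rewrite, so it needs a maintenance window just as VACUUM FULL does.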
Stefan