VACUUM ANALYZE out of memory

From: Michael Akinde <michael(dot)akinde(at)met(dot)no>
To: pgsql-hackers(at)postgresql(dot)org
Subject: VACUUM ANALYZE out of memory
Date: 2007-12-11 09:59:59
Message-ID: 475E5F9F.6050205@met.no
Lists: pgsql-hackers

Hi,

I am encountering problems when trying to run VACUUM FULL ANALYZE on a
particular table in my database: the process fails with the following
error:

INFO: vacuuming "pg_catalog.pg_largeobject"
ERROR: out of memory
DETAIL: Failed on request of size 536870912.

INFO: vacuuming "pg_catalog.pg_largeobject"
ERROR: out of memory
DETAIL: Failed on request of size 32.
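
(For reference, 536870912 bytes is exactly 512 MB, i.e. half of the 1 GB
of maintenance memory mentioned below, so my guess is that the failing
request is tied to the vacuum memory settings rather than to the table
data itself. A quick way to double-check which settings the backend
actually sees is something along these lines:

  SHOW shared_buffers;
  SHOW maintenance_work_mem;
  SHOW work_mem;

All three report the values currently in effect for the session.)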

Granted, our largeobject table is a bit large:

INFO: analyzing "pg_catalog.pg_largeobject"
INFO: "pg_largeobject": scanned 3000 of 116049431 pages, containing
18883 live rows and 409 dead rows; 3000 rows in sample, 730453802
estimated total rows
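
(For the record, 116049431 pages at the default 8 kB block size works out
to roughly 885 GB for pg_largeobject alone. The size can be checked
directly with something like:

  -- ~116049431 * 8 kB, i.e. roughly 885 GB
  SELECT pg_size_pretty(pg_relation_size('pg_catalog.pg_largeobject'));
)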

...but I trust that VACUUM ANALYZE doesn't try to read the entire table
into memory at once. :-) The machine was set up with 1.2 GB shared
memory and 1 GB maintenance memory, so I would have expected this to be
sufficient for the task (we will eventually set this up on a 64-bit
machine with 16 GB memory, but at the moment we are restricted to 32-bit).
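
One thing I am considering, in case the problem is simply the size of a
single allocation that a 32-bit process cannot satisfy, is lowering
maintenance_work_mem for the vacuuming session only, along the lines of
the following (untested) sketch:

  -- untested sketch: keep the per-vacuum allocation well below the failing 512 MB request
  SET maintenance_work_mem = '256MB';
  VACUUM ANALYZE pg_catalog.pg_largeobject;

I have no idea yet whether that actually avoids the failure, but it seems
a cheap thing to try.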

This is currently running on PostgreSQL 8.3beta2, but since I haven't
seen this problem reported before, I assume it will also be a problem
in earlier versions. Have we run into a bug/limitation of the Postgres
VACUUM, or is this something we might be able to solve by reconfiguring
the server/database or downgrading the DBMS version?

I shall be trying to run a simple VACUUM later this evening, in order to
see whether that manages to complete. Unfortunately, due to the time it
takes to load data, it is not really practicable to shift servers at the
moment.
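
The plan is simply to run something like:

  VACUUM VERBOSE pg_catalog.pg_largeobject;
  -- and, if that completes, a separate:
  ANALYZE VERBOSE pg_catalog.pg_largeobject;

with VERBOSE mainly so that there is some progress output before any
failure.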

A little background on the application:
We are building a raster database to be used for storing weather and
water data. The raster data (2D matrices of floating-point values) are
stored as large objects and indexed via a values table (with multiple
dimensions: time, parameter, altitude, etc.). This is a technique I've
worked with successfully in the past, though in that case using an
Informix DBMS. My current employer is a strong proponent of Open Source
software, which has led to our implementation of the current system on a
PostgreSQL DBMS (we will also be releasing our system under the GPL in
the near future).
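
To give a rough idea of the layout (a much simplified, hypothetical
sketch; the names and types here are not the real schema):

  CREATE TABLE raster_value (
      valuetime  timestamptz NOT NULL,  -- time dimension
      parameter  integer     NOT NULL,  -- e.g. temperature, precipitation
      altitude   real        NOT NULL,  -- vertical level
      raster_oid oid         NOT NULL   -- large object holding the 2D float matrix
  );

Each row points to one large object containing a single 2D matrix of
floats, and the values table is what we index and query on.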

The test instance we are working on now is about 1 TB; we expect to
increase that by a factor of at least 5 within the first year of
operation, so we'd really like to ensure that we can get VACUUM working
(although the data is mostly going to be static on this installation, we
will have others that won't be).

Anyone with some insights on VACUUM FULL ANALYZE who can weigh in on
what is going wrong?

Regards,

Michael Akinde
----
Database Architect,
met.no

