Re: memory leak while using vacuum

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Achim Krümmel <akruemmel(at)dohle(dot)com>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: memory leak while using vacuum
Date: 2001-08-24 00:46:14
Message-ID: 415.998613974@sss.pgh.pa.us
Lists: pgsql-bugs

Achim Krümmel <akruemmel(at)dohle(dot)com> writes:
> when using "vacuum analyze <tablename>" on very large tables (I have one
> with about 30GB), the memory usage increases continuously until no memory
> is left and the kernel kills the process.

I don't have 30Gb to spare, but I set up a table of the same schema with
a couple hundred meg of toy data and vacuumed it. I didn't see any
significant memory usage (about 8 meg max).

If there is a lot of free space in your 30Gb table then it's possible
that the problem is simply vacuum's data structures that keep track
of free space. What exactly are you using as the process memory limit,
and can you increase it?
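
For reference, here is a rough C sketch (illustrative only, not from this
thread or the PostgreSQL sources) of how to inspect the per-process limits
the kernel enforces, via getrlimit(); "ulimit -d" or "ulimit -v" in a shell
reports the same numbers.

    #include <stdio.h>
    #include <sys/resource.h>

    /* Illustrative only: print the soft/hard values of one per-process limit. */
    static void show_limit(const char *name, int resource)
    {
        struct rlimit rl;

        if (getrlimit(resource, &rl) != 0)
            return;
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("%s: soft=unlimited", name);
        else
            printf("%s: soft=%lu", name, (unsigned long) rl.rlim_cur);
        if (rl.rlim_max == RLIM_INFINITY)
            printf(" hard=unlimited\n");
        else
            printf(" hard=%lu\n", (unsigned long) rl.rlim_max);
    }

    int main(void)
    {
        show_limit("RLIMIT_DATA", RLIMIT_DATA);   /* max data segment size */
    #ifdef RLIMIT_AS
        show_limit("RLIMIT_AS", RLIMIT_AS);       /* max total address space */
    #endif
        return 0;
    }

If the soft limit is well below what vacuum needs for a 30GB table with a
lot of dead space, raising it (or having the postmaster started with a
higher limit) may be enough to get the vacuum to finish.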

FWIW, the default vacuum method for 7.2 is designed to use a fixed
amount of memory no matter how large the table. That won't help you
much today, however.
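
For the curious, the fixed-memory approach amounts to remembering dead
tuples in an array of bounded size and doing a cleanup pass whenever the
array fills up. The sketch below is illustrative C only; the names and
structure are made up and are not the actual 7.2 source.

    #include <stddef.h>
    #include <stdio.h>

    #define MAX_DEAD_TUPLES (1024 * 1024)    /* fixed cap, independent of table size */

    typedef struct
    {
        unsigned int   blkno;                /* heap block number */
        unsigned short offnum;               /* line pointer within the block */
    } TupleId;

    static TupleId dead[MAX_DEAD_TUPLES];
    static size_t  ndead = 0;

    /* Placeholder for the real work (index cleanup, page compaction). */
    static void flush_dead_tuples(void)
    {
        printf("cleaning up %lu dead tuples\n", (unsigned long) ndead);
        ndead = 0;                           /* reuse the same fixed-size buffer */
    }

    /* Called once per dead tuple found while scanning the heap; memory use
     * stays bounded because the array is flushed whenever it fills up. */
    static void remember_dead_tuple(unsigned int blkno, unsigned short offnum)
    {
        if (ndead == MAX_DEAD_TUPLES)
            flush_dead_tuples();
        dead[ndead].blkno = blkno;
        dead[ndead].offnum = offnum;
        ndead++;
    }

    int main(void)
    {
        /* Pretend we scanned a huge table and found many dead tuples. */
        for (unsigned int blk = 0; blk < 4 * 1024 * 1024; blk++)
            remember_dead_tuple(blk, 1);
        flush_dead_tuples();                 /* final pass for the remainder */
        return 0;
    }

The tradeoff is that every flush costs another cleanup pass, so a larger
(but still fixed) buffer means fewer passes over the indexes.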

regards, tom lane
