Re: Have vacuum emit a warning when it runs out of maintenance_work_mem

From: "Jim C(dot) Nasby" <decibel(at)decibel(dot)org>
To: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
Cc: "Jim C(dot) Nasby" <jim(at)nasby(dot)net>, Robert Treat <xzilla(at)users(dot)sourceforge(dot)net>, pgsql-patches(at)postgresql(dot)org, Guillaume Smet <guillaume(dot)smet(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: Have vacuum emit a warning when it runs out of maintenance_work_mem
Date: 2007-05-13 22:14:01
Message-ID: 20070513221401.GA69517@nasby.net
Lists: pgsql-patches

On Sun, May 13, 2007 at 11:19:07AM +0100, Heikki Linnakangas wrote:
> Jim C. Nasby wrote:
> >On Sat, May 12, 2007 at 07:57:44PM +0100, Heikki Linnakangas wrote:
> >>Or we could switch to a more compact representation of the dead tuples,
> >>and not need such a big maintenance_work_mem in the first place.
> >
> >Sure, but even with a more compact representation you can still run out
> >of maintenance_work_mem... unless we allow this to spill to disk. At
> >first guess that sounds insane, but if you've got a large enough set of
> >indexes it *might* actually be faster.
>
> It would only make sense if the table is clustered on an index, so that
> you'd in practice only need to keep part of the array in memory at a
> time. It's pretty narrow use case, not worth spending time on I think.

There might be ways to get around that. For example, instead of testing
every index entry one at a time, you could read in several pages of
index entries, sort the entries based on ctid, and then use that to do
the lookups. Might be worth looking at one of these days...
--
Jim Nasby decibel(at)decibel(dot)org
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
