On Sun, May 13, 2007 at 11:19:07AM +0100, Heikki Linnakangas wrote:
> Jim C. Nasby wrote:
> >On Sat, May 12, 2007 at 07:57:44PM +0100, Heikki Linnakangas wrote:
> >>Or we could switch to a more compact representation of the dead tuples,
> >>and not need such a big maintenance_work_mem in the first place.
> >Sure, but even with a more compact representation you can still run out
> >of maintenance_work_mem... unless we allow this to spill to disk. At
> >first guess that sounds insane, but if you've got a large enough set of
> >indexes it *might* actually be faster.
> It would only make sense if the table is clustered on an index, so that
> you'd in practice only need to keep part of the array in memory at a
> time. It's pretty narrow use case, not worth spending time on I think.
There might be ways to get around that. For example, instead of probing
for each index entry one at a time, you could read in several pages of
index entries, sort those entries by ctid, and then use the sorted batch
to do the lookups against the dead-tuple list. Might be worth looking
at one of these days...
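To make the idea concrete, here's a toy sketch (mine, not PostgreSQL code) of the batched lookup: ctids are modeled as (block, offset) tuples, the dead-tuple list is assumed sorted, and sorting each batch of index entries lets a single cursor walk the dead list sequentially instead of doing a separate search per entry.

```python
def find_dead_entries(index_entries, dead_ctids):
    """Return the index entries whose ctids appear in dead_ctids.

    dead_ctids is assumed sorted (as VACUUM's dead-tuple array is).
    Sorting the batch of index entries by ctid means the dead list
    is advanced monotonically with one cursor, so a spilled-to-disk
    dead list would be read sequentially, one pass per batch.
    """
    batch = sorted(index_entries)   # sort the batch by ctid
    dead = []
    i = 0                           # single cursor into dead_ctids
    for ctid in batch:
        # Advance past dead ctids smaller than the current entry;
        # both sequences are sorted, so i never moves backward.
        while i < len(dead_ctids) and dead_ctids[i] < ctid:
            i += 1
        if i < len(dead_ctids) and dead_ctids[i] == ctid:
            dead.append(ctid)
    return dead

# Hypothetical example: dead tuples at (0,1), (0,3), (2,5);
# a batch of index entries read in page order.
print(find_dead_entries([(2, 5), (0, 2), (0, 1)],
                        [(0, 1), (0, 3), (2, 5)]))
```

The cursor only moves forward, so each batch costs one merge pass over the dead list rather than a random probe per index entry; larger batches amortize that pass over more entries.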
Jim Nasby decibel(at)decibel(dot)org
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
pgsql-patches by date