"Simon Riggs" <simon(at)2ndquadrant(dot)com> writes:
> How much memory would it save during VACUUM on a 1 billion row table
> with 200 million dead rows? Would that reduce the number of cycles a
> normal non-interrupted VACUUM would perform?
It would depend on how many dead tuples you have per page. If you have a very
large table with only one dead tuple per page, then it might even be larger.
But in the usual case it would be smaller.
Also note that it would have to be non-lossy.
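To make the break-even point concrete, here is a rough back-of-the-envelope comparison of a flat array of 6-byte TIDs (block number plus offset) against a hypothetical per-page bitmap. The per-page header size and line-pointer capacity are illustrative assumptions, not PostgreSQL's actual layout:

```python
# Rough memory comparison: flat TID array vs. per-page bitmap for
# tracking dead tuples during VACUUM. The constants are illustrative
# assumptions, not the actual PostgreSQL implementation.

TID_BYTES = 6              # block number (4 bytes) + offset number (2 bytes)
PAGE_HEADER_BYTES = 8      # assumed per-page overhead in the bitmap scheme
MAX_TUPLES_PER_PAGE = 256  # assumed line-pointer capacity per heap page

def tid_array_bytes(dead_tuples: int) -> int:
    """Memory for a flat array of 6-byte TIDs, one per dead tuple."""
    return dead_tuples * TID_BYTES

def bitmap_bytes(pages_with_dead_tuples: int) -> int:
    """Memory for one bitmap entry per page containing any dead tuple."""
    per_page = PAGE_HEADER_BYTES + MAX_TUPLES_PER_PAGE // 8
    return pages_with_dead_tuples * per_page

if __name__ == "__main__":
    # Simon's scenario: 200 million dead rows.
    dead = 200_000_000

    # Dense case: ~50 dead tuples per page -> 4 million affected pages.
    print(tid_array_bytes(dead) // 2**20, "MB for the TID array")
    print(bitmap_bytes(dead // 50) // 2**20, "MB for the bitmap (dense)")

    # Sparse case: one dead tuple per page -> 200 million affected pages.
    # Here the per-page bitmap is *larger* than the TID array.
    print(bitmap_bytes(dead) // 2**20, "MB for the bitmap (sparse)")
```

Under these assumptions the TID array costs about 1.1 GB for 200 million dead rows; a bitmap wins handily when dead tuples cluster on few pages, and loses when there is only one dead tuple per page, which is exactly the caveat above.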
My only objection to this idea, and it's not really an objection at all, is
that I think we want to head in the direction of making indexes cheaper to
scan and doing the index scan phase more often. That reduces the need for
multiple concurrent vacuums and makes the problem of busy tables getting
starved less of a concern.
That doesn't mean there's any downside to making the dead tuple list take less
memory, but I think the upside is limited. As we optimize our index
representations with GII and bitmapped indexes, scanning them gets easier and
easier anyway. And you don't really want to wait too long before you get the
benefit of the recovered space in the table.