On Tue, Sep 01, 2009 at 07:42:56PM -0400, Tom Lane wrote:
> Greg Stark <gsstark(at)mit(dot)edu> writes:
> > On Wed, Sep 2, 2009 at 12:01 AM, Alvaro
> > Herrera<alvherre(at)commandprompt(dot)com> wrote:
> >>> The use cases where VACUUM FULL wins currently are where storing two
> >>> copies of the table and its indexes concurrently just isn't practical.
> >> Yeah, but then do you really need to use VACUUM FULL? If that's really
> >> a problem then there ain't that many dead tuples around.
> > That's what I want to believe. But picture if you have, say a
> > 1-terabyte table which is 50% dead tuples and you don't have a spare
> > 1-terabytes to rewrite the whole table.
> But trying to VACUUM FULL that table is going to be horridly painful
> too, and you'll still have bloated indexes afterwards. You might as
> well just live with the 50% waste, especially since if you did a
> full-table update once you'll probably do it again sometime.
> I'm having a hard time believing that VACUUM FULL really has any
> interesting use-case anymore.
I have a client who uses temp tables heavily, hundreds of thousands of creates
and drops per day. They also have long-running queries. The only thing that
keeps catalog bloat somewhat in check is VACUUM FULL on the bloated catalogs
a few times a day. Without that, pg_class, pg_attribute, etc. quickly balloon
to thousands of pages.
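For illustration (not part of the original message), a rough sketch of the kind of check-and-compact routine described above. The catalog names and the relpages column are real PostgreSQL catalogs/columns; which catalogs to target and how often to run this are assumptions:

```sql
-- See how bloated the system catalogs have become. relpages is the
-- page count recorded by the last VACUUM/ANALYZE, so it is an
-- estimate, not a live figure.
SELECT relname, relpages
FROM pg_class
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_type', 'pg_depend')
ORDER BY relpages DESC;

-- Compact the worst offenders. VACUUM FULL takes an exclusive lock
-- on each table, so this briefly blocks temp-table creation/drops.
VACUUM FULL pg_class;
VACUUM FULL pg_attribute;
```

Scheduling this a few times a day (e.g. from cron via psql) matches the workaround the message describes.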
David Gould daveg(at)sonic(dot)net 510 536 1443 510 282 0869
If simplicity worked, the world would be overrun with insects.