On Wed, Mar 10, 2010 at 6:29 AM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
> Then I increased vacuum_defer_cleanup_age to 100000, which represents
> about 5 minutes of transactions on the test system. This eliminated all
> query cancels for the reporting query, which takes an average of 10s.
> Next is a database bloat test, but I'll need to do that on a system with
> more free space than my laptop.
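For concreteness, the setting described above would look roughly like this in postgresql.conf; the 100000 figure is the test value from the quoted message, not a general recommendation:

```ini
# postgresql.conf (primary, 9.0 Hot Standby setup)
# Defer cleanup of dead row versions for the last 100000 xids
# (about 5 minutes of transactions on the test system quoted above).
vacuum_defer_cleanup_age = 100000
```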
Note that this will be heavily dependent on the use case. If you have
one of those counter records that keeps being updated, and that HOT
cleans up whenever the page fills, then you need to allow HOT to prune
the dead versions before they overflow the page; otherwise they will
bloat the table and require a real vacuum. I think that means a
vacuum_defer_cleanup_age of up to about 100 or so (it depends on the
width of your counter record) might be reasonable as a general
suggestion, but anything higher will depend on understanding the
specific system.
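A back-of-envelope sketch of where the "about 100" figure comes from: HOT can only prune dead versions on the same heap page, so the deferral window has to stay below the number of row versions that fit on one 8 KB page. The per-tuple overheads below are illustrative assumptions in the right ballpark, not exact Postgres numbers:

```python
# How many versions of a narrow counter record fit on one 8 KB heap page?
# If vacuum_defer_cleanup_age exceeds roughly this count, HOT cannot prune
# in time, the page overflows, and the table bloats until a real vacuum.
PAGE_SIZE = 8192
PAGE_HEADER = 24          # page header size (assumed, close to the real value)
LINE_POINTER = 4          # per-tuple item pointer
TUPLE_HEADER = 24         # per-tuple header (roughly right for 9.0)
COUNTER_PAYLOAD = 16      # e.g. a bigint key plus a bigint counter

per_version = LINE_POINTER + TUPLE_HEADER + COUNTER_PAYLOAD
versions_per_page = (PAGE_SIZE - PAGE_HEADER) // per_version
print(versions_per_page)
```

With these assumed widths a page holds on the order of a couple hundred versions, so a deferral of around 100 leaves headroom; a wider counter record shrinks that budget.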
Another use case that might surprise people accustomed to the current
behaviour is massive updates. This is the main really pessimal use
case left in Postgres -- ideally they wouldn't bloat the table at all,
but currently they double its size. People may be used to the idea
that they can then run vacuum and limit the bloat to 50%, assuming
they have no (other) long-lived transactions. With
vacuum_defer_cleanup_age that will no longer be true: it will be as if
a query lasting n transactions were running in your system at all
times.
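A toy model of that massive-update case, with illustrative numbers: an UPDATE writes a new version of every row, so the table briefly holds twice the live data, and vacuum can reclaim the old versions only once they fall outside the deferral window:

```python
# Toy model: after a bulk UPDATE of every row, the table holds both the old
# and new version of each row. VACUUM may reclaim the old versions only once
# they are older than vacuum_defer_cleanup_age transactions; until then the
# table stays at double size, as if a query that old were still running.
rows = 1_000_000
defer_age = 100_000            # vacuum_defer_cleanup_age
xids_since_update = 40_000     # transactions committed since the bulk UPDATE

live_versions = rows
dead_versions = rows if xids_since_update < defer_age else 0
bloat_factor = (live_versions + dead_versions) / rows
print(bloat_factor)
```

Once xids_since_update passes defer_age, vacuum can reclaim the dead versions and the factor drops back toward 1.0; until then the bloat persists regardless of how often vacuum runs.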