On Sat, 2005-01-22 at 12:41 -0600, Bruno Wolff III wrote:
> On Sat, Jan 22, 2005 at 12:13:00 +0900,
> Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp> wrote:
> > Probably VACUUM works well for small to medium size tables, but not
> > for huge ones. I'm considering implementing "on the spot
> > salvaging of dead tuples".
> You are probably vacuuming too often. You want to wait until a significant
> fraction of a large table is dead tuples before doing a vacuum. If you are
> scanning a large table and only marking a few tuples as deleted, you aren't
> getting much bang for your buck.
The big problem occurs when you have a small set of hot tuples within a
large table. In the time it takes to vacuum a table with 200M tuples,
one can update a small subset of that table many times over.
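A rough calculation makes the mismatch concrete. Every figure below (scan
rate, hot-set size, update rate) is an assumption chosen for illustration,
not a measurement:

```python
# Back-of-envelope: how many dead tuple versions pile up on a hot subset
# while one VACUUM pass scans the whole table. All numbers here are
# illustrative assumptions, not benchmarks.

TABLE_TUPLES = 200_000_000   # the large table from the example above
SCAN_RATE = 1_000_000        # assumed tuples/sec that VACUUM can scan
HOT_ROWS = 10_000            # assumed size of the hot subset
UPDATE_RATE = 500            # assumed updates/sec hitting the hot rows

vacuum_seconds = TABLE_TUPLES / SCAN_RATE      # time for one full pass
dead_versions = UPDATE_RATE * vacuum_seconds   # dead tuples created meanwhile
versions_per_hot_row = dead_versions / HOT_ROWS

print(f"one vacuum pass: {vacuum_seconds:.0f} s")
print(f"dead versions created during that pass: {dead_versions:.0f}")
print(f"average dead versions per hot row: {versions_per_hot_row:.0f}")
```

Under these assumed numbers, each hot row accumulates about ten dead
versions per vacuum pass; the full-table scan cost is paid no matter how
few of the 200M tuples are actually dead, which is why targeting just the
hot spots would help.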
Some special-purpose vacuum that could target hot spots would be great,
but I've always assumed this would come in the form of table
partitioning and the ability to vacuum different partitions
independently of each other.