Re: degenerate performance on one server of 3

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Erik Aronesty <erik(at)q32(dot)com>
Cc: reid(dot)thompson(at)ateb(dot)com, pgsql-performance(at)postgresql(dot)org
Subject: Re: degenerate performance on one server of 3
Date: 2009-06-04 13:16:23
Message-ID: 603c8f070906040616k48a873c6o5bd744fb90108951@mail.gmail.com
Lists: pgsql-performance

On Thu, Jun 4, 2009 at 7:31 AM, Erik Aronesty <erik(at)q32(dot)com> wrote:
> Seems like "VACUUM FULL" could figure out to do that too depending on
> the bloat-to-table-size ratio ...
>
>   - copy all rows to new table
>   - lock for a millisecond while renaming tables
>   - drop old table.

You'd have to lock the table at least against write operations during
the copy; otherwise concurrent changes might be lost.
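The copy-and-swap pattern being discussed could be sketched roughly like this in SQL (table and index names are illustrative, and this simplified version ignores foreign keys, views, and other objects that still reference the old table):

```sql
BEGIN;
-- Block writes for the whole rebuild, not just the rename,
-- so no concurrent changes are lost during the copy:
LOCK TABLE bloated_table IN ACCESS EXCLUSIVE MODE;
CREATE TABLE bloated_table_new (LIKE bloated_table INCLUDING INDEXES);
INSERT INTO bloated_table_new SELECT * FROM bloated_table;
DROP TABLE bloated_table;
ALTER TABLE bloated_table_new RENAME TO bloated_table;
COMMIT;
```

The lock is the crux: the rename itself is fast, but the copy is not, so the "lock for a millisecond" step in the proposal understates the exclusion window.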

AIUI, this is pretty much what CLUSTER does, and I've heard that it
works as well as or better than VACUUM FULL for bloat reclamation.
However, it's apparently still pessimal:
http://archives.postgresql.org/pgsql-hackers/2008-08/msg01371.php (I
had never heard this word before Greg Stark used it in this email, but
it's a great turn of phrase, so I'm reusing it.)
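For reference, CLUSTER rewrites the table in the order of a chosen index, reclaiming dead space in the process, but it holds an ACCESS EXCLUSIVE lock for the duration of the rewrite (names here are illustrative):

```sql
-- Rewrite the table in primary-key order; blocks reads and writes
-- until the rewrite finishes:
CLUSTER bloated_table USING bloated_table_pkey;
```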

> Locking a whole table for a very long time is scary for admins.

Agreed. It would be nice if we had some kind of "incremental full"
vacuum that would run for long enough to reclaim a certain number of
pages and then exit. Then you could clean up this kind of problem
incrementally instead of in one shot. It would be even nicer if the
lock strength could be reduced, but I'm guessing that's not easy to do
or someone would have already done it by now. I haven't read the code
myself.

...Robert
