On Sat, 2005-12-17 at 21:10 -0800, James Klo wrote:
> I need to routinely move data from the timeblock table to an archive
> table with the same schema named timeblock_archive. I really need this
> to happen as quickly as possible, as the archive operation appears to
> really tax the db server...
Have you considered partitioning?
If you can partition your timeblock table so that you archive an entire
partition at a time, then you can delete the archived rows by just
dropping (or truncating) that partition. AFAIK there's no way to
"re-parent" a partition (e.g., from the timeblock table to the
timeblock_archive table), so you'd still need to copy the rows into the
archive before dropping the partition -- but the delete step becomes
nearly free.
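A minimal sketch of what that could look like with PostgreSQL 8.1-style inheritance partitioning. The column name `block_start` and the monthly partition boundaries are assumptions for illustration; only the table names timeblock and timeblock_archive come from your message:

```sql
-- Sketch only: assumes timeblock has a timestamp column (called
-- "block_start" here -- an assumption) and that monthly partitions
-- match your archival schedule.
CREATE TABLE timeblock_2005_11 (
    CHECK (block_start >= DATE '2005-11-01'
       AND block_start <  DATE '2005-12-01')
) INHERITS (timeblock);

-- Archiving a month is then a bulk copy plus a cheap DROP,
-- rather than a slow DELETE against the parent table:
INSERT INTO timeblock_archive SELECT * FROM timeblock_2005_11;
DROP TABLE timeblock_2005_11;
```

The DROP releases the space immediately and generates far less WAL and vacuum work than deleting the same rows one at a time.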
If your app is particularly cooperative you might be able to use
partitioning to avoid moving data around entirely. If table accesses
are always qualified by something you can use as a partitioning key,
then partitioning can give you the speed benefits of a small table
without the effort of keeping it cleared out.
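To illustrate the "cooperative app" case: with constraint exclusion enabled, a query qualified on the partitioning key only touches the matching child tables. Again, the column name and date range are assumptions, not something from your schema:

```sql
-- Requires PostgreSQL 8.1+; can go in postgresql.conf or per session.
SET constraint_exclusion = on;

-- Because the WHERE clause contradicts the CHECK constraints on all
-- other partitions, the planner scans only the one matching child:
SELECT *
FROM timeblock
WHERE block_start >= DATE '2005-11-01'
  AND block_start <  DATE '2005-12-01';
```

You can verify the effect with EXPLAIN: only the qualifying partition(s) should appear in the plan.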
Another good read, if you haven't yet, is
especially the "Memory", "Checkpoints", and maybe "WAL options"
sections. If you're doing large deletes, you may need to increase your
free space map settings: if a VACUUM VERBOSE finishes by saying that you
need more FSM pages, then the table may have become bloated over time.
That can be fixed with a configuration change and a VACUUM FULL, though
VACUUM FULL will lock everything else out of the table while it's
running.
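Concretely, the check-and-fix loop might look like the following. The numbers are purely illustrative; size them from what VACUUM VERBOSE actually reports on your server:

```sql
-- In postgresql.conf (changing these requires a restart on 8.1):
--   max_fsm_pages = 200000     -- illustrative; must cover the pages
--                              -- freed by your routine deletes
--   max_fsm_relations = 1000   -- illustrative

-- Then check whether the free space map is large enough:
VACUUM VERBOSE timeblock;
-- If the closing lines report that more FSM pages are needed, raise
-- max_fsm_pages. For a table that is already bloated, reclaim the
-- space with (note: takes an exclusive lock for the duration):
VACUUM FULL timeblock;
```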