Re: Big delete on big table... now what?

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Fernando Hevia" <fhevia(at)ip-tel(dot)com(dot)ar>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Big delete on big table... now what?
Date: 2008-08-22 21:30:59
Message-ID: 48AEE9C3.EE98.0025.0@wicourts.gov
Lists: pgsql-performance

>>> "Fernando Hevia" <fhevia(at)ip-tel(dot)com(dot)ar> wrote:

> I have a table with over 30 million rows. Performance was dropping
> steadily, so I moved old data not needed online to a historic table.
> Now the table has about 14 million rows. I don't need the disk space
> returned to the OS, but I do need to improve performance. Will a
> plain vacuum do, or is a vacuum full necessary? Would a vacuum full
> improve performance at all?

If this database can be out of production for long enough to run it
(possibly a few hours, depending on hardware, configuration, table
width, indexes) your best option might be to CLUSTER and ANALYZE the
table. It gets more complicated if you can't tolerate down-time.
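A minimal sketch of that approach (the table and index names here are placeholders, not taken from the original thread):

```sql
-- CLUSTER rewrites the table in the physical order of the chosen index,
-- discarding the dead tuples left behind by the big DELETE, and rebuilds
-- the table's indexes. It takes an ACCESS EXCLUSIVE lock, so the table
-- is unavailable to other sessions while it runs.
CLUSTER big_table USING big_table_pkey;

-- Refresh planner statistics after the rewrite.
ANALYZE big_table;
```

By contrast, a plain VACUUM only marks the dead space as reusable within the table; like VACUUM FULL, CLUSTER rewrites the table compactly, which is usually what restores performance after removing half the rows.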

-Kevin

Browse pgsql-performance by date

Next message: Bill Moran, 2008-08-22 22:44:07, Re: Big delete on big table... now what?
Previous message: Fernando Hevia, 2008-08-22 20:36:44, Big delete on big table... now what?