From: Rod Taylor <rbt@rbt.ca>
To: Nikk Anderson <Nikk.Anderson@parallel.ltd.uk>
Cc: "Charles H. Woloszynski" <chw@clearmetrix.com>, pgsql-performance@postgresql.org
Subject: Re: selects from large tables
Date: 2002-11-20 15:31:05
Message-ID: 1037806265.87360.18.camel@jester
Lists: pgsql-performance
On Wed, 2002-11-20 at 10:08, Nikk Anderson wrote:
> Hi,
>
> I tried a test cluster on a copy of our real data - all 10 million
> rows or so. WOW! The normal select performance improved
> drastically.
>
> Selecting 3 months worth of data was taking 146 seconds to retrieve.
> After clustering it took 7.7 seconds! We are now looking into ways we
> can automate clustering to keep the table up to date. The cluster
> itself took around 2.5 hours.
>
> As our backend systems are writing hundreds of rows of data in per
> minute into the table that needs clustering - will cluster handle
> locking the tables when dropping the old, and renaming the clustered
> data? What happens to the data being added to the table while cluster
> is running? Our backend systems may have some problems if the table
> does not exist when it tries to insert, and we don't want to lose any
> data.
The table will be locked while CLUSTER is running, meaning any new
data will have to sit and wait until it finishes.
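To make the locking behavior concrete, here is a minimal sketch of what such a reclustering run looks like; the table name "stats" and index name "stats_time_idx" are assumptions for illustration, not names from this thread:

```sql
-- CLUSTER takes an exclusive lock on the table for its whole duration,
-- so concurrent INSERTs from the backend systems will block until it completes.
CLUSTER stats_time_idx ON stats;  -- rewrite "stats" physically in index order

-- Refresh planner statistics afterwards, since the table was rewritten.
ANALYZE stats;
```

Because inserts block for the entire run (2.5 hours in the case described above), scheduling this in a low-traffic maintenance window is the usual approach.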
CLUSTER won't buy you much on a table that is already mostly clustered,
but it's probably worth re-running once roughly 20% of the tuples have
turned over (deleted, updated, or newly inserted).
I'm a little curious to know when you last ran a VACUUM FULL on that
table.
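The VACUUM FULL question matters because much of the 146-second scan time could simply be dead-tuple bloat rather than poor physical ordering. A hedged sketch of checking and reclaiming that space (again assuming a hypothetical table named "stats"):

```sql
-- VACUUM FULL compacts the table by reclaiming space from dead tuples
-- left behind by UPDATEs and DELETEs; like CLUSTER, it locks the table
-- exclusively while it runs.
VACUUM FULL VERBOSE stats;  -- VERBOSE reports how many tuples were removed
```

If VACUUM FULL alone recovers most of the speedup, a plain VACUUM run frequently enough may keep the table healthy without the cost of a full CLUSTER.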
--
Rod Taylor <rbt(at)rbt(dot)ca>