From: "scott.marlowe" <scott.marlowe@ihs.com>
To: Sean Shanny <shannyconsulting@earthlink.net>
Cc: Tom Lane <tgl@sss.pgh.pa.us>, pgsql-performance@postgresql.org
Subject: Re: General performance questions about postgres on Apple
Date: 2004-02-23 16:25:13
Message-ID: Pine.LNX.4.33.0402230923250.28821-100000@css120.ihs.com
Lists: pgsql-performance
On Sun, 22 Feb 2004, Sean Shanny wrote:
> Tom,
>
> We have the following setting for random page cost:
>
> random_page_cost = 1 # units are one sequential page fetch cost
>
> Any suggestions on what to bump it up to?
>
> We are waiting to hear back from Apple on the speed issues, so far we
> are not impressed with the hardware in helping in the IO department.
> Our DB is about 263GB with indexes now so there is no way it is going
> to fit into memory. :-( I have taken the step of breaking out the data
> into month based groups just to keep the table sizes down. Our current
> months table has around 72 million rows in it as of today. The joys of
> building a data warehouse and trying to make it as fast as possible.
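[For reference, random_page_cost lives in postgresql.conf but can also be changed per-session for experiments. A value of 1 tells the planner that a random page fetch costs the same as a sequential one, which is rarely true on spinning disks; the stock default is 4, and values in the 2-4 range are common. A sketch of how you might test a different value (the query itself is just a placeholder):

SET random_page_cost = 3;
-- then re-run EXPLAIN on a problem query to see whether the plan changes:
EXPLAIN SELECT ...;

If the new plan helps across your workload, make it permanent in postgresql.conf.]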
You may be able to achieve similar benefits with a clustered index.
see cluster:
\h cluster
Command: CLUSTER
Description: cluster a table according to an index
Syntax:
CLUSTER indexname ON tablename
CLUSTER tablename
CLUSTER
I've found this can greatly increase speed, but on 263 gigs of data, I'd
run it when you have a couple of days free. You might wanna test it first
on a smaller set you can afford to chew up some I/O and CPU time on over a
weekend.
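For example, a minimal sketch of what that might look like (the table and index names here are hypothetical; substitute your actual monthly table and the index your range queries use most):

-- Hypothetical names: f_pageviews_2004_02 and idx_pageviews_date stand in
-- for your real monthly fact table and its date index.
-- CLUSTER takes an exclusive lock and rewrites the table, so schedule it
-- for a quiet window:
CLUSTER idx_pageviews_date ON f_pageviews_2004_02;

-- Later runs can re-cluster on the remembered index:
CLUSTER f_pageviews_2004_02;

-- ANALYZE afterwards so the planner's statistics reflect the new
-- physical ordering:
ANALYZE f_pageviews_2004_02;

Note that CLUSTER is a one-time reorder; new rows are appended in insert order, so a monthly table like yours would need periodic re-clustering to stay in order.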