
Re: General performance questions about postgres on Apple

From: Sean Shanny <shannyconsulting(at)earthlink(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: General performance questions about postgres on Apple
Date: 2004-02-23 02:48:54
Message-ID: 40396A16.9040301@earthlink.net
Lists: pgsql-performance
Tom,

We have the following setting for random page cost:

random_page_cost = 1            # units are one sequential page fetch cost

Any suggestions on what to bump it up to?
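
For what it's worth, here is a minimal way to experiment with the setting
per-session before touching postgresql.conf (the 2.0 below is just an
illustrative starting point, not a recommendation):

SET random_page_cost = 2.0;   -- session-local trial value (illustrative)
SHOW random_page_cost;

-- Re-run the problem query to see whether the plan changes:
EXPLAIN ANALYZE
SELECT t1.id, t2.md5, t2.url
FROM referral_temp t2
LEFT OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;

-- Once a value proves out, set it permanently in postgresql.conf:
-- random_page_cost = 2.0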

We are waiting to hear back from Apple on the speed issues; so far we
are not impressed with how the hardware is helping in the I/O department.
Our DB is about 263GB with indexes now, so there is no way it is going
to fit into memory. :-(  I have taken the step of breaking out the data
into month-based groups just to keep the table sizes down.  Our current
month's table has around 72 million rows in it as of today.  The joys of
building a data warehouse and trying to make it as fast as possible.
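
(In case it helps anyone else reading: one common way to do a month-based
split in 7.x is table inheritance.  The table and column names below are
made up for illustration, and in 7.4 the CHECK constraint is documentation
only -- the planner cannot yet exclude children by constraint, so loads and
queries name the child table directly.)

CREATE TABLE hits_master (
    id           integer,
    referral_md5 character(32),
    hit_date     date
);

-- One child per month; the CHECK records which dates the child may hold.
CREATE TABLE hits_2004_02 (
    CHECK (hit_date >= DATE '2004-02-01' AND hit_date < DATE '2004-03-01')
) INHERITS (hits_master);

-- Month-scoped work goes straight to the child table:
INSERT INTO hits_2004_02 (id, referral_md5, hit_date)
    VALUES (1, 'd41d8cd98f00b204e9800998ecf8427e', DATE '2004-02-22');
SELECT count(*) FROM hits_2004_02;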

Thanks.

--sean



Tom Lane wrote:

>Sean Shanny <shannyconsulting(at)earthlink(dot)net> writes:
>
>>New results with the above changes: (Rather a huge improvement!!!)
>>Thanks Scott.  I will next attempt to make the cpu_* changes to see if
>>it then picks the correct plan.
>
>>explain analyze SELECT t1.id, t2.md5, t2.url from referral_temp t2
>>LEFT OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;
>>                               QUERY PLAN
>>------------------------------------------------------------------------------
>> Hash Left Join  (cost=1669281.60..3204008.48 rows=480082 width=149) (actual time=157221.125..-412311.378 rows=502347 loops=1)
>>   Hash Cond: ("outer".md5 = "inner".referral_md5)
>>   ->  Seq Scan on referral_temp t2  (cost=0.00..16034.81 rows=480081 width=145) (actual time=11.537..1852.336 rows=502347 loops=1)
>>   ->  Hash  (cost=1356358.48..1356358.48 rows=30344048 width=40) (actual time=157187.530..157187.530 rows=0 loops=1)
>>         ->  Seq Scan on d_referral t1  (cost=0.00..1356358.48 rows=30344048 width=40) (actual time=14.134..115048.285 rows=27908024 loops=1)
>> Total runtime: 212595.909 ms
>>(6 rows)
>
>It seems like the planner is overestimating the cost of a seqscan
>relative to indexed access.  Note that the above large seqscan is priced
>at 1356358.48 cost units vs 115048.285 actual msec, which says that a
>sequential page fetch is taking about 0.1 msec on your hardware.
>(You should check the actual size of d_referral to verify this, though.)
>The other plan made it look like an indexed fetch was costing several
>milliseconds.  You may have a situation where you need to raise
>random_page_cost, rather than lowering it as people more often do.
>
>What are you using for random_page_cost anyway?  It doesn't look like
>you are at the default.
>
>This also suggests that the performance issue with your RAID array
>has to do with seek time rather than transfer bandwidth...
>
>			regards, tom lane
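
For the size check Tom suggests, pg_class carries a page count (refreshed
by VACUUM/ANALYZE), which makes his 0.1 msec arithmetic easy to verify:

SELECT relpages, reltuples,
       relpages * 8 / 1024 AS approx_mb   -- heap pages are 8 KB by default
FROM pg_class
WHERE relname = 'd_referral';

-- Dividing the seqscan's actual time by the page count gives the per-page
-- fetch time; e.g. 115048 ms over ~1 million pages is roughly 0.1 ms/page.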
