| From: | Vlad Arkhipov <arhipov(at)dc(dot)baikal(dot)ru> | 
|---|---|
| To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> | 
| Cc: | pgsql-performance(at)postgresql(dot)org | 
| Subject: | Re: Nested loop vs merge join: inconsistencies between estimated and actual time | 
| Date: | 2008-03-11 01:52:12 | 
| Message-ID: | 47D5E5CC.7020403@dc.baikal.ru | 
| Lists: | pgsql-performance | 
Tom Lane writes:
> Vlad Arkhipov <arhipov(at)dc(dot)baikal(dot)ru> writes:
>
>> I've come across this issue while writing a report-like query for 2 not
>> very large tables. I've tried several methods to resolve this one (see
>> below). But now I'm really stuck...
>>
>
> It looks like you are wishing to optimize for all-in-memory situations,
> in which case the traditional advice is to reduce random_page_cost to
> something close to 1.  AFAICS all the rowcount estimates you're seeing
> are spot on, or as close to spot on as you could realistically hope for,
> and so the problem lies with the cost parameters.  Fooling with the
> statistics is not going to help if the rowcount estimates are already
> good.
>
I tried changing random_page_cost to 1.1 or something close to that and
increasing/decreasing effective_cache_size, but Postgres always prefers the
plan with the merge join.
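
For reference, a minimal sketch of the kind of experiment I mean (the table
and column names are placeholders, not the actual report tables): set the cost
parameters for the session, look at the chosen plan, then temporarily disable
merge joins so the nested-loop plan's estimated cost and actual run time can
be compared against it.

```sql
-- Session-level planner settings (random_page_cost as mentioned above;
-- effective_cache_size is a placeholder, tune it to the real cache size).
SET random_page_cost = 1.1;
SET effective_cache_size = '512MB';

-- The plan the planner picks on its own:
EXPLAIN ANALYZE
SELECT *
FROM orders o
JOIN customers c ON c.id = o.customer_id;   -- placeholder tables/columns

-- Temporarily rule out merge joins so the nested-loop plan can be timed
-- and its estimated cost compared:
SET enable_mergejoin = off;

EXPLAIN ANALYZE
SELECT *
FROM orders o
JOIN customers c ON c.id = o.customer_id;

RESET enable_mergejoin;
```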