Re: merge join killing performance

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Matthew Wakeling <matthew(at)flymine(dot)org>, pgsql-performance(at)postgresql(dot)org
Subject: Re: merge join killing performance
Date: 2010-05-19 20:47:06
Message-ID: AANLkTin18NoamzGnXHAo5VK6bmDe104ncq2gvsp7u0ug@mail.gmail.com
Lists: pgsql-hackers pgsql-performance

On Wed, May 19, 2010 at 2:27 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
> On Wed, May 19, 2010 at 10:53 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Matthew Wakeling <matthew(at)flymine(dot)org> writes:
>>> On Tue, 18 May 2010, Scott Marlowe wrote:
>>>> Aggregate  (cost=902.41..902.42 rows=1 width=4)
>>>>     ->  Merge Join  (cost=869.97..902.40 rows=1 width=4)
>>>>         Merge Cond: (f.eid = ev.eid)
>>>>         ->  Index Scan using files_eid_idx on files f  (cost=0.00..157830.39 rows=3769434 width=8)
>>
>>> Okay, that's weird. How is the cost of the merge join only 902, when the
>>> cost of one of the branches is 157830 and there is no LIMIT?
>>
>> It's apparently estimating (wrongly) that the merge join won't have to
>> scan very much of "files" before it can stop because it finds an eid
>> value larger than any eid in the other table.  So the issue here is an
>> inexact stats value for the max eid.
>
> I changed stats target to 1000 for that field and still get the bad plan.

And of course I ran ANALYZE on the table...
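
For completeness, the change amounted to something along these lines (table and column names taken from the plan above; the exact statement is a sketch rather than a transcript):

-- Raise the per-column statistics target for files.eid, then refresh the stats
ALTER TABLE files ALTER COLUMN eid SET STATISTICS 1000;
ANALYZE files;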
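
If the planner's idea of the maximum eid really is the problem, one quick sanity check (a minimal sketch; the other table only appears under the alias ev in the plan, so substitute its real name where relevant) is to compare the upper histogram bound ANALYZE collected with the actual maximum:

-- The last element of histogram_bounds is roughly the largest eid ANALYZE sampled
SELECT histogram_bounds
FROM pg_stats
WHERE tablename = 'files' AND attname = 'eid';

-- A large gap versus the true maximum would explain the overly cheap merge join estimate
SELECT max(eid) FROM files;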
