Re: New hashed IN code ignores distinctiveness of subquery

From: Bradley Baetz <bbaetz(at)acm(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: New hashed IN code ignores distinctiveness of subquery
Date: 2003-01-27 01:07:58
Message-ID: 20030127010758.GA2806@mango.home
Lists: pgsql-bugs pgsql-hackers

On Sun, Jan 26, 2003 at 06:51:18PM -0500, Tom Lane wrote:
> Bradley Baetz <bbaetz(at)acm(dot)org> writes:
> > On Sun, Jan 26, 2003 at 02:09:49PM -0500, Tom Lane wrote:
> >> This isn't really anything to do with the new IN code, but is a
> >> long-standing problem: cost_mergejoin doesn't apply any penalty factor
>
> > Hmm. I'm not sure that that is the entire story. For the non-DISTINCT
> > version, the top of the tree has an estimated cost of 3485675.30, whilst
> > for the DISTINCT case, the estimated cost is 3033.30.
>
> But the DISTINCT case is a different query. The question that's really
> at hand is why does the planner mistakenly prefer merge to hash join in
> the non-DISTINCT case.

Well, it's a different input, but the output is the same, so it may as
well be the same query. IN (1,1,1) is the same as IN (1).
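
A trivial, runnable illustration:

    -- Duplicate values in an IN list don't change the result:
    SELECT 1 WHERE 1 IN (1, 1, 1);
    SELECT 1 WHERE 1 IN (1);        -- returns the same single row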

As I mentioned, I think the code prefers a merge join because the
calculations assume that all of the 50000 rows returned from the
subquery are unique. If that were the case, then a mergejoin probably
would be correct. Given that most of the values are the same, however,
it's quicker to just do a hash probe for each entry.
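
To make the shape concrete (the table and column names here are made
up, not from my actual test case):

    -- The subquery returns ~50000 rows but only ~10 distinct dup_id
    -- values; the cost calculations treat all 50000 as unique.
    SELECT *
    FROM outer_t
    WHERE id IN (SELECT dup_id FROM inner_t);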

This probably ties into the 'long-standing problem' you mentioned
earlier, but while fixing that would probably raise the mergejoin's
estimated cost to the point where the hashjoin is picked instead, it
wouldn't fix the fact that the hashjoin's cost estimate is way too
high to start with.

>
> > If the planner
> > tried to treat ... IN (SELECT foo ...) as ... IN (SELECT DISTINCT foo
> > ...) then presumably the second plan would have been chosen.
>
> You are confusing cost estimation with number-of-resulting-tuples
> estimation. They are different things.

I mentioned them both because they are related, although not
necessarily for the bug(s) here. IN (SELECT foo) will return more
tuples, but it's semantically identical to the IN (SELECT DISTINCT foo)
form. For a subselect with high selectivity it's better to filter later
on, but where the result will have lots of repeated tuples it may be
better to filter first, which reduces the number of tuples to join, and
possibly use an index_scan to get the list instead of a seqscan.
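
The rewrite I mean, using the same made-up names as above:

    -- Deduplicating first leaves only ~10 values to join against,
    -- and possibly lets an index_scan produce the list instead of
    -- a seqscan:
    SELECT *
    FROM outer_t
    WHERE id IN (SELECT DISTINCT dup_id FROM inner_t);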

> Because a Hash node doesn't return any tuples in the normal fashion;
> it's only called to build a hash table, which is then probed by the
> HashJoin node. (No, I don't know why the Berkeley boys designed it that
> way, and not as one combined plan node. Maybe they had visions of using
> Hash for other things.)

Well, in this case the hash node could prune duplicates as they go
into the table :). Not sure if that would be a win, though.

> > While I'm going through the stats, the number of rows for a JOIN_IN
> > merge is wrong - its currently |outer_rel->rows * selec;|, but should
> > probably be |outer_rel->rows * selec * inner_rel->num_distinct_rows;| -
> > notice the estimated-vs-actual for the final join, of 5580 vs 50000.
>
> Not sure about that. The selectivity would need to be calculated
> differently if we based the calculation on that. It's probably bogus
> as-is (note the comments) but I haven't had time to think about it.
> The major reason why this equation is stated the way it is is that given
> selectivity ranging from 0 to 1, multiplying by outer rows only yields
> the correct range of possible results (0 to number of outer rows).
> Multiplying by num_distinct_rows could yield an impossible result.

I don't follow that. We have 50000 outer rows, with a selectivity of say
0.1 (to make the maths nicer). That gives roughly the 5000 rows printed
above. However, that's not the whole story. The result of the join will
be 5000 rows _for each (unique) tuple in the subselect result_. For the
10 distinct values I have, that gives 50000 total rows, which is
correct. (I'm not familiar enough with the postgres source to know
whether that approach will work with histograms; I presume there's a
helper function somewhere to handle merging that stuff.)
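
As a quick sanity check of the two formulas with those numbers:

    -- current estimate vs. what I'm suggesting, using the figures above
    SELECT 50000 * 0.1      AS rows_current,   -- outer_rel->rows * selec
           50000 * 0.1 * 10 AS rows_suggested; -- ... * num_distinct_rows
    -- rows_current = 5000, rows_suggested = 50000 (the actual count)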

Can you give an example where an impossible result would be given?

>
> > IOW, should the JOIN_IN and JOIN_REVERSE_IN be using the same
> > cost calcs as JOIN_INNER_UNIQUE and JOIN_OUTER_UNIQUE in
> > set_joinrel_size_estimates ?
>
> Maybe; from a mechanical point of view, the existing code does the right
> thing given 0-to-1 selectivity estimates in each case.
>

See above. What is the purpose of the _UNIQUE forms, anyway?

> regards, tom lane

Thanks,

Bradley
