From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com>
Cc: Robert Dyas <rdyas(at)adelphia(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: query optimization scenarios 17,701 times faster!!!
Date: 2003-04-25 00:30:40
Message-ID: 3833.1051230640@sss.pgh.pa.us
Lists: pgsql-hackers
Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com> writes:
> The particular conversion you're doing for that query requires more than
> just that AFAICS. The outerness of the join makes it so you know there
> won't be 0 rows after the join, and the distinct cancels out the
> possibility of multiple rows. Both of those conditions seem necessary to
> make the conversion valid as well.
Yeah. I think the optimization rule being proposed here is something
like "if a table is not used in the SELECT output list, *and* it's on
the nullable side of an outer join (so that it can't cause rows to be
omitted from the output), *and* there's a DISTINCT (so that generating
multiple matched rows isn't interesting either) then don't bother to
include the table in the join tree". But I'm not convinced that's
a safe transformation --- are there any other conditions that would
have to be checked? And of course the $64 question is whether the
combination would hit often enough to make it worth the cycles to check
for. It seems like a fairly unusual case to me.
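A quick sanity check of the claimed equivalence (a sketch using SQLite and made-up tables `a` and `b`, not PostgreSQL planner code): when `b` appears only in the ON clause, the LEFT JOIN can't drop any `a` rows, and DISTINCT absorbs any duplicates from multiple `b` matches, so the join is removable.

```python
import sqlite3

# Illustrative schema: b is only referenced in the ON clause, the join is
# a LEFT JOIN (no a rows are lost), and DISTINCT collapses duplicates
# caused by multiple b matches.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE a (id INTEGER, x TEXT);
    CREATE TABLE b (aid INTEGER, y TEXT);
    INSERT INTO a VALUES (1, 'p'), (2, 'q'), (3, 'q');
    INSERT INTO b VALUES (1, 'm'), (1, 'n');  -- two matches for id 1, none for 2 or 3
""")

with_join = cur.execute(
    "SELECT DISTINCT a.x FROM a LEFT JOIN b ON a.id = b.aid ORDER BY a.x"
).fetchall()

without_join = cur.execute(
    "SELECT DISTINCT a.x FROM a ORDER BY a.x"
).fetchall()

print(with_join == without_join)  # True: the join tree entry for b is a no-op
```

Note the equivalence breaks as soon as `b` columns leak into the SELECT list or WHERE clause, which is exactly why the "not used in the output list" condition is part of the rule.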
The variant where you know there are not multiple rows because you're
joining on a unique column, rather than depending on DISTINCT, might
be more useful in practice ... but I'm still not convinced it's worth
checking for.
regards, tom lane