
From: "Simon Riggs" <simon(at)2ndquadrant(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: HAVING push-down
Date: 2007-01-26 16:21:04
Message-ID: 1169828465.3772.376.camel@silverbirch.site
Lists: pgsql-hackers

On Fri, 2007-01-26 at 11:16 -0500, Tom Lane wrote:
> "Simon Riggs" <simon(at)2ndquadrant(dot)com> writes:
> > I've just read a paper that says PostgreSQL doesn't do this.
>
> What does he mean by that exactly, and which PG version is he looking
> at? As Greg notes, we do know how to push down non-aggregated
> conditions, but I'm not sure that's what he's thinking of.

Yes, it was specifically non-aggregated conditions.
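
To make that concrete, here's the kind of condition I mean (a made-up
example of mine, table and column names are not from the paper). A HAVING
qual that references only grouping columns and no aggregates can be
evaluated before aggregation, i.e. planned as if it were in WHERE:

    -- HAVING qual on a grouping column only, no aggregates:
    SELECT deptno, sum(salary)
      FROM emp
     GROUP BY deptno
    HAVING deptno > 10;

    -- ...is equivalent to, and can be planned as:
    SELECT deptno, sum(salary)
      FROM emp
     WHERE deptno > 10
     GROUP BY deptno;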

> There have
> been some relevant bug fixes, eg
>
> 2004-07-10 14:39 tgl
>
> * src/backend/executor/: nodeAgg.c (REL7_4_STABLE), nodeAgg.c: Test
> HAVING condition before computing targetlist of an Aggregate node.
> This is required by SQL spec to avoid failures in cases like
> SELECT sum(win)/sum(lose) FROM ... GROUP BY ... HAVING sum(lose) >
> 0; AFAICT we have gotten this wrong since day one. Kudos to Holger
> Jakobs for being the first to notice.
>
> Also, it's still true that we run all the aggregate transition functions
> in parallel, so if you were hoping to use HAVING on an aggregate
> condition to prevent an overflow or something in the state accumulation
> function for a targetlist aggregate, you'd lose. But I don't see any
> way to avoid that without scanning the data twice, which we're surely
> not gonna do.
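
As a rough sketch of those two points (my own toy example, not taken from
the paper or from that commit; "results", "team", "win", "lose" and
"points" are invented names):

    -- Since the 2004 fix, the HAVING qual is checked before the target
    -- list is evaluated, so this can no longer fail with a division by
    -- zero on a group where sum(lose) = 0:
    SELECT sum(win) / sum(lose)
      FROM results
     GROUP BY team
    HAVING sum(lose) > 0;

    -- But all transition functions advance in the same pass over the
    -- data, so a HAVING qual on one aggregate cannot stop another
    -- target-list aggregate from erroring while its state is being
    -- accumulated:
    SELECT sum(win * points)      -- state accumulated for every group anyway
      FROM results
     GROUP BY team
    HAVING count(*) < 1000;       -- filters groups only after the scan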

I'll send you the paper off-line; there's some more interesting stuff in
it as well (see p.12).

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com
