From: Hannu Krosing <hannu(at)tm(dot)ee>
To: Joe Conway <mail(at)joeconway(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Speeding up aggregates
Date: 2002-12-09 10:16:01
Message-ID: 1039428960.7415.3.camel@huli
Lists: pgsql-performance
On Sun, 2002-12-08 at 19:31, Joe Conway wrote:
> parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv
> i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;
> QUERY PLAN
...
> Total runtime: 3282.27 msec
> (10 rows)
>
>
> Note that similar to Josh, I saw a nice improvement when using the
> HashAggregate on the simpler case, but as soon as I added a HAVING clause the
> optimizer switched back to GroupAggregate.
>
> I'll try to play around with this a bit more later today.
Try rewriting the HAVING clause as a subquery plus an outer WHERE:
explain analyze
select * from (
select i.part_id, sum(w.qty_oh) as total_oh
from inv i, iwhs w
where i.part_id = w.part_id
group by i.part_id) sub
where total_oh > 0;
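For anyone following along without the original tables, the two forms can be checked for row-level equivalence on a toy schema. This is a minimal sketch using SQLite via Python's sqlite3 module; the table and column names mirror the thread's inv/iwhs, but the data is invented:

```python
import sqlite3

# Toy schema mirroring the thread's inv/iwhs tables (data invented).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE inv  (part_id INTEGER PRIMARY KEY);
CREATE TABLE iwhs (part_id INTEGER, qty_oh INTEGER);
INSERT INTO inv  VALUES (1), (2), (3);
INSERT INTO iwhs VALUES (1, 5), (1, 3), (2, 0), (3, 7);
""")

# Original form: filter the aggregate with HAVING.
having = con.execute("""
    SELECT i.part_id, sum(w.qty_oh) AS total_oh
    FROM inv i, iwhs w
    WHERE i.part_id = w.part_id
    GROUP BY i.part_id
    HAVING sum(w.qty_oh) > 0
""").fetchall()

# Rewritten form: aggregate in a subquery, filter in the outer WHERE.
subquery = con.execute("""
    SELECT * FROM (
        SELECT i.part_id, sum(w.qty_oh) AS total_oh
        FROM inv i, iwhs w
        WHERE i.part_id = w.part_id
        GROUP BY i.part_id) sub
    WHERE total_oh > 0
""").fetchall()

# Part 2 has total_oh = 0 and is excluded by both forms.
print(sorted(having) == sorted(subquery))
```

Note this only demonstrates that the two queries return the same rows; whether the rewrite actually changes the chosen plan (GroupAggregate vs. HashAggregate) has to be checked with EXPLAIN ANALYZE on the PostgreSQL version in question.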
--
Hannu Krosing <hannu(at)tm(dot)ee>