Re: Speeding up aggregates

From: Joe Conway <mail(at)joeconway(dot)com>
To: Hannu Krosing <hannu(at)tm(dot)ee>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Speeding up aggregates
Date: 2002-12-09 21:04:18
Message-ID: 3DF50552.8030603@joeconway.com
Lists: pgsql-performance

Hannu Krosing wrote:
> On Sun, 2002-12-08 at 19:31, Joe Conway wrote:
>
>
>>parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv
>>i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;
>> QUERY PLAN
>
> ...
>
>> Total runtime: 3282.27 msec
>>(10 rows)
>>
>>
>>Note that similar to Josh, I saw a nice improvement when using the
>>HashAggregate on the simpler case, but as soon as I added a HAVING clause the
>>optimizer switched back to GroupAggregate.
>>
>>I'll try to play around with this a bit more later today.
>
>
> Try turning the having into subquery + where:
>
> explain analyze
> select * from (
> select i.part_id, sum(w.qty_oh) as total_oh
> from inv i, iwhs w
> where i.part_id = w.part_id
> group by i.part_id) sub
> where total_oh > 0;
>

Pretty much the same result. See below.

Joe

======================================
parts=# set sort_mem to 8000;
SET
parts=# explain analyze select * from (select i.part_id, sum(w.qty_oh) as
total_oh from inv i, iwhs w where i.part_id = w.part_id group by i.part_id)
sub where total_oh > 0;
QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------------------
 Subquery Scan sub  (cost=11111.93..12015.10 rows=35528 width=36) (actual time=2779.16..3212.46 rows=4189 loops=1)
   ->  GroupAggregate  (cost=11111.93..12015.10 rows=35528 width=36) (actual time=2779.15..3202.97 rows=4189 loops=1)
         Filter: (sum(qty_oh) > 0::double precision)
         ->  Sort  (cost=11111.93..11293.31 rows=72553 width=36) (actual time=2778.90..2878.33 rows=72548 loops=1)
               Sort Key: i.part_id
               ->  Hash Join  (cost=1319.10..5254.45 rows=72553 width=36) (actual time=155.80..1235.32 rows=72548 loops=1)
                     Hash Cond: ("outer".part_id = "inner".part_id)
                     ->  Seq Scan on iwhs w  (cost=0.00..2121.53 rows=72553 width=22) (actual time=0.01..282.38 rows=72553 loops=1)
                     ->  Hash  (cost=1230.28..1230.28 rows=35528 width=14) (actual time=155.56..155.56 rows=0 loops=1)
                           ->  Seq Scan on inv i  (cost=0.00..1230.28 rows=35528 width=14) (actual time=0.02..86.69 rows=35528 loops=1)
 Total runtime: 3232.84 msec
(11 rows)

parts=# set sort_mem to 12000;
SET
parts=# explain analyze select * from (select i.part_id, sum(w.qty_oh) as
total_oh from inv i, iwhs w where i.part_id = w.part_id group by i.part_id)
sub where total_oh > 0;
QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------------
 Subquery Scan sub  (cost=5617.22..5794.86 rows=35528 width=36) (actual time=1439.24..1565.47 rows=4189 loops=1)
   ->  HashAggregate  (cost=5617.22..5794.86 rows=35528 width=36) (actual time=1439.23..1555.65 rows=4189 loops=1)
         Filter: (sum(qty_oh) > 0::double precision)
         ->  Hash Join  (cost=1319.10..5073.07 rows=72553 width=36) (actual time=159.39..1098.30 rows=72548 loops=1)
               Hash Cond: ("outer".part_id = "inner".part_id)
               ->  Seq Scan on iwhs w  (cost=0.00..2121.53 rows=72553 width=22) (actual time=0.01..259.48 rows=72553 loops=1)
               ->  Hash  (cost=1230.28..1230.28 rows=35528 width=14) (actual time=159.11..159.11 rows=0 loops=1)
                     ->  Seq Scan on inv i  (cost=0.00..1230.28 rows=35528 width=14) (actual time=0.03..87.74 rows=35528 loops=1)
 Total runtime: 1609.91 msec
(9 rows)
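The two runs above suggest the subquery-vs-HAVING rewrite is beside the point: the plan flips from GroupAggregate to HashAggregate (and runtime drops from ~3.2s to ~1.6s) once sort_mem is raised, i.e. once the planner estimates the aggregate hash table fits in memory. A minimal sketch of the session-level pattern, using the table and column names from this thread (the 12000 figure is just what worked on this data set; the memory needed scales with the number of groups):

SET sort_mem TO 12000;   -- KB; session-local, reverts on disconnect

EXPLAIN ANALYZE
SELECT i.part_id, sum(w.qty_oh) AS total_oh
FROM inv i
JOIN iwhs w ON i.part_id = w.part_id
GROUP BY i.part_id
HAVING sum(w.qty_oh) > 0;

RESET sort_mem;          -- restore the default for the rest of the session

Note the Filter line on the HashAggregate node in the second plan: the HAVING condition itself is compatible with hash aggregation, so the plain HAVING form should behave the same as the subquery form once the memory limit allows the hash plan.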
