Re: Huge Data sets, simple queries

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Jeffrey W(dot) Baker" <jwbaker(at)acm(dot)org>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Huge Data sets, simple queries
Date: 2006-01-28 18:55:08
Lists: pgsql-performance
I wrote:
> (We might need to tweak the planner to discourage selecting
> HashAggregate in the presence of DISTINCT aggregates --- I don't
> remember whether it accounts for the sortmem usage in deciding
> whether the hash will fit in memory or not ...)

Ah, I take that all back after checking the code: we don't use
HashAggregate at all when there are DISTINCT aggregates, precisely
because of this memory-blow-out problem.
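For reference, a DISTINCT-aggregate query of the shape under discussion might look like this (table and column names are hypothetical):

```sql
-- count(DISTINCT ...) keeps the planner away from HashAggregate;
-- expect a GroupAggregate fed by a Sort or an ordered indexscan instead.
SELECT date_trunc('month', order_date) AS month,
       count(DISTINCT customer_id)
FROM   orders
GROUP  BY 1;
```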

For both your group-by-date query and the original group-by-month query,
the plan of attack is going to be to read the original input in grouping
order (either via sort or indexscan, with sorting probably preferred
unless the table is pretty well correlated with the index) and then
sort/uniq on the DISTINCT value within each group.  The OP is probably
losing on that step compared to your test because it's over much larger
groups than yours, forcing some spill to disk.  And most likely he's not
got an index on month, so the first sort is in fact a sort and not an
indexscan.

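The plan of attack described above (bring the input into grouping order, then sort/uniq the DISTINCT values within each group) can be sketched roughly like this; the data and names are made up for illustration:

```python
from itertools import groupby

# Hypothetical input rows: (group_key, value) pairs,
# e.g. (month, customer_id).
rows = [
    ("2006-01", 3), ("2006-01", 1), ("2006-01", 3),
    ("2005-12", 2), ("2005-12", 2), ("2006-01", 1),
]

def count_distinct_per_group(rows):
    # Step 1: read the input in grouping order -- the planner's
    # initial sort (or ordered indexscan).
    ordered = sorted(rows, key=lambda r: r[0])
    result = {}
    # Step 2: within each group, sort the DISTINCT values and
    # count only the unique ones (the per-group sort/uniq step).
    for key, grp in groupby(ordered, key=lambda r: r[0]):
        values = sorted(v for _, v in grp)
        distinct = sum(
            1 for i, v in enumerate(values)
            if i == 0 or v != values[i - 1]
        )
        result[key] = distinct
    return result

print(count_distinct_per_group(rows))  # {'2005-12': 1, '2006-01': 2}
```

With large groups, the per-group sort in step 2 is where an in-memory structure would spill to disk, which is the cost the OP is likely paying.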
Bottom line is that he's probably doing a ton of on-disk sorting
where you're not doing any.  This makes me think Luke's theory about
inadequate disk horsepower may be on the money.

			regards, tom lane
