| From: | Mike Harding <mvh(at)ix(dot)netcom(dot)com> |
|---|---|
| To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
| Cc: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: Hash aggregates blowing out memory |
| Date: | 2005-02-25 22:04:14 |
| Message-ID: | 1109369054.86993.17.camel@bsd.mvh |
| Lists: | pgsql-general |
Is there any way to adjust n_distinct to be more accurate?
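A sketch of the usual lever here, with hypothetical table/column names: raise the column's statistics target so ANALYZE samples more rows, which usually tightens the n_distinct estimate. (Much later releases, 9.0 and up, can also pin the value directly with ALTER TABLE ... ALTER COLUMN ... SET (n_distinct = ...).)

```sql
-- Hypothetical names ("big_table", "grp"). Raising the per-column
-- statistics target makes ANALYZE sample more rows; the default
-- target in releases of this era was 10.
ALTER TABLE big_table ALTER COLUMN grp SET STATISTICS 1000;
ANALYZE big_table;

-- Then check the estimate the planner will see:
SELECT attname, n_distinct
  FROM pg_stats
 WHERE tablename = 'big_table' AND attname = 'grp';
```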
I don't think a 'disk spill' would be that bad, if you could re-sort the
hash in place. If nothing else, if it could fail once memory use climbs
into the stratosphere and then restart with a different plan, that's
still faster than getting no result at all... sort of an auto-disable of
the hashagg.
On Fri, 2005-02-25 at 16:55 -0500, Tom Lane wrote:
> Mike Harding <mvh(at)ix(dot)netcom(dot)com> writes:
> > I've been having problems where a HashAggregate is used because of a bad
> > estimate of the distinct number of elements involved.
>
> If you're desperate, there's always enable_hashagg. Or reduce sort_mem
> enough so that even the misestimate looks like it will exceed sort_mem.
>
> In the long run it would be nice if HashAgg could spill to disk. We
> were expecting to see a contribution of code along that line last year
> (from the CMU/Berkeley database class) but it never showed up. The
> performance implications might be a bit grim anyway :-(
>
> regards, tom lane
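A concrete sketch of the workaround Tom describes, scoped to a single session; the table and column names are hypothetical, and the parameter is sort_mem in 7.x (renamed work_mem in 8.0):

```sql
-- Force the planner away from HashAggregate for one query only;
-- with hash aggregation disabled it falls back to a sort-based
-- GroupAggregate, which can spill to disk safely.
BEGIN;
SET LOCAL enable_hashagg = off;   -- reverts automatically at COMMIT
SELECT grp, count(*) FROM big_table GROUP BY grp;
COMMIT;

-- Or, per Tom's other suggestion, shrink the memory budget so even
-- the misestimated hash table looks too big to the planner:
-- SET sort_mem = 1024;   -- in KB; called work_mem from 8.0 on
```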
--
Mike Harding <mvh(at)ix(dot)netcom(dot)com>