From: | Dean Rasheed <dean(dot)a(dot)rasheed(at)gmail(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | Hitoshi Harada <umi(dot)tanuki(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Greg Stark <gsstark(at)mit(dot)edu>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: wip: functions median and percentile |
Date: | 2010-10-11 17:42:10 |
Message-ID: | AANLkTintCTk8r0jRM0rvY8h4FO=QHVNhXArKE6CL6AqQ@mail.gmail.com |
Lists: | pgsql-hackers pgsql-rrreviewers |
On 11 October 2010 18:37, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Dean Rasheed <dean(dot)a(dot)rasheed(at)gmail(dot)com> writes:
>> The estimate of 200 x 8K is below work_mem, so it uses a hash
>> aggregate. In reality, each tuplesort allocates around 30K initially,
>> so it very quickly uses over 1GB. A better estimate for the aggregate
>> wouldn't improve this situation much.
>
> Sure it would: an estimate of 30K would keep the planner from using
> hash aggregation.
>
Not if work_mem were 10MB: 200 groups at 30K each is still only around 6MB, comfortably under the limit, so the planner would choose the hash aggregate anyway.
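To make the arithmetic explicit (a sketch using the figures quoted in this thread; the "actual group count" below is a hypothetical illustration, not a measured value):

```python
# Planner picks a hash aggregate when estimated memory fits in work_mem.
work_mem = 10 * 1024 * 1024           # 10MB, as in the example above

est_groups = 200                       # planner's default group estimate
est_per_group = 8 * 1024               # 8K estimated transition space
actual_per_group = 30 * 1024           # ~30K actually allocated per tuplesort

# With the 8K estimate: 200 * 8K = 1.6MB -- fits, so hash agg is chosen.
print(est_groups * est_per_group < work_mem)

# Even with a corrected 30K estimate: 200 * 30K = 6MB -- still fits,
# so fixing the per-group size alone would not change the plan.
print(est_groups * actual_per_group < work_mem)

# The damage comes from the group-count estimate being wrong: with,
# say, 35,000 real groups (hypothetical figure), memory exceeds 1GB.
actual_groups = 35_000
print(actual_groups * actual_per_group > 1024 ** 3)
```

The point being that a better per-group memory estimate only helps if the number-of-groups estimate is also in the right ballpark.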
Regards,
Dean
From | Date | Subject | |
---|---|---|---|
Next Message | Craig James | 2010-10-11 17:46:17 | Re: Slow count(*) again... |
Previous Message | Tom Lane | 2010-10-11 17:37:11 | Re: wip: functions median and percentile |