Re: bad plan: 8.4.8, hashagg, work_mem=1MB.

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jon Nelson <jnelson+pgsql(at)jamponi(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: bad plan: 8.4.8, hashagg, work_mem=1MB.
Date: 2011-06-20 16:08:19
Message-ID: 6961.1308586099@sss.pgh.pa.us
Lists: pgsql-performance

Jon Nelson <jnelson+pgsql(at)jamponi(dot)net> writes:
> I ran a query recently where the result was very large. The outer-most
> part of the query looked like this:

> HashAggregate (cost=56886512.96..56886514.96 rows=200 width=30)
> -> Result (cost=0.00..50842760.97 rows=2417500797 width=30)

> The row count for 'Result' is in the right ballpark, but why does
> HashAggregate think that it can turn 2 *billion* rows of strings (an
> average of 30 bytes long) into only 200?

200 is the planner's default assumption about the number of groups when it is unable to make any statistics-based estimate. You haven't shown us any details, so it's hard to say more than that.

regards, tom lane
