On 2008-11-30, at 13:31, Greg Stark wrote:
> On Fri, Nov 28, 2008 at 8:08 PM, Grzegorz Jaskiewicz
> <gj(at)pointblue(dot)com(dot)pl> wrote:
>> work_mem = 128140 # min 64, size in KB
>> maintenance_work_mem = 65535 # min 1024, size in KB
> Incidentally it's very odd to see maintenance_work_mem less than
> work_mem. work_mem is used by regular queries and maintenance_work_mem
> by things like create index.
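A more conventional arrangement can be sketched as a postgresql.conf fragment (the values are purely illustrative and depend on available RAM; work_mem is allocated per sort/hash operation, so it is usually kept well below maintenance_work_mem):

```
# work_mem applies per sort/hash operation in regular queries;
# maintenance_work_mem applies to CREATE INDEX, VACUUM, etc.
# Illustrative values only -- tune for your workload:
work_mem = 16MB
maintenance_work_mem = 256MB
```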
>> AggContext: 1981800448 total in 250 blocks; 3768 free (117 chunks); 1981796680 used
>> TupleHashTable: 1158668312 total in 150 blocks; 465512 free (... chunks); 1158202800 used
> Er, yeah. This is a known problem and it would be really nice to fix
> it. However nobody's come up with any bright ideas for fixing it and
> disabling hash aggregates doesn't seem worth it. The problem only
> occurs when Postgres dramatically misestimates the number of hash
> buckets needed for hash aggregates.
> Can you post the EXPLAIN for your query? you might have to analyze
> your tables again, and possibly raise the stats target for the columns
> you're aggregating.
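Raising the per-column statistics target and re-analyzing, as suggested above, can be sketched like this (table and column names are hypothetical):

```sql
-- Hypothetical table/column: raise the stats target so ANALYZE
-- samples more values and the planner estimates distinct groups better
ALTER TABLE my_table ALTER COLUMN grouping_col SET STATISTICS 1000;
ANALYZE my_table;

-- Inspect the planner's estimated row count on the aggregate node
EXPLAIN SELECT grouping_col, count(*) FROM my_table GROUP BY grouping_col;
```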
> Failing that you could set enable_hashagg to false for this query.
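Disabling hash aggregation for a single session can be sketched as follows; the planner then falls back to a sort-based GroupAggregate, which can spill to disk:

```sql
SET enable_hashagg = off;  -- session-local; forces sort-based aggregation
-- run the problematic query here
RESET enable_hashagg;
```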
I already worked around this by setting work_mem to a very low value,
since that query was only supposed to run once anyway (a one-time
schema migration).
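That session-level workaround might look like the following sketch (the value is illustrative); with work_mem set low, the planner expects the hash table not to fit and chooses a sort-based plan instead:

```sql
SET work_mem = '1MB';  -- illustrative low value, session-local
-- run the one-time migration query here
RESET work_mem;
```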
Good to know that someone was actually able to analyze it, without
telling me 'man, you gotta rewrite that query'.
To be honest, I thought PostgreSQL would never fail that way, and
would be able to divide the work into smaller pieces, or spill it to
disk, in case something like that happened.
Thanks again Greg.