Re: postmaster consuming /lots/ of memory with hash aggregate. why?

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Cc: Jon Nelson <jnelson+pgsql(at)jamponi(dot)net>, pgsql-performance(at)postgresql(dot)org
Subject: Re: postmaster consuming /lots/ of memory with hash aggregate. why?
Date: 2010-11-24 03:11:18
Message-ID: AANLkTiktPOkSTXk7frMVjOdUd8ojGhdUT0aWB4PH9ak5@mail.gmail.com
Lists: pgsql-performance

On Fri, Nov 12, 2010 at 11:12 AM, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com> wrote:
> if I remember well, you can set the number of groups with ALTER TABLE ALTER
> COLUMN SET n_distinct = ..
>
> maybe you can use it.

I'm not sure where the number 40,000 is coming from either, but I
think Pavel's suggestion is a good one. If you're grouping on a
column with N distinct values, then it stands to reason there will be
N groups, and the planner is known to underestimate n_distinct on large
tables, even with very high statistics targets, which is why 9.0
allows a manual override.
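
For reference, the 9.0 override would look roughly like this -- the table and
column names here are just placeholders, and 40000 stands in for whatever the
real number of distinct values turns out to be:

    ALTER TABLE my_table ALTER COLUMN my_column SET (n_distinct = 40000);
    ANALYZE my_table;  -- the override is picked up at the next ANALYZE

(A negative n_distinct can also be used to express the distinct count as a
fraction of the row count.)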

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
