From: Erik Jones <erik(at)myemma(dot)com>
To: henk de wit <henk53602(at)hotmail(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Huge amount of memory consumed during transaction
Date: 2007-10-12 21:39:26
Message-ID: EAC95D94-3479-494E-B107-F8E366E8517C@myemma.com
Lists: pgsql-performance
On Oct 12, 2007, at 4:09 PM, henk de wit wrote:
> > It looks to me like you have work_mem set optimistically large. This
> > query seems to be doing *many* large sorts and hashes:
>
> I have work_mem set to 256MB. Reading the PG documentation, I now
> realize that "several sort or hash operations might be running in
> parallel". So this is most likely the problem, although I don't
> really understand why memory never seems to increase for any of the
> other queries (not executed in a transaction). Some of these are at
> least the size of the query that is giving problems.
Wow. That's inordinately high. I'd recommend dropping that to 32-43MB.
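A common way to reconcile the two is to keep the server-wide default conservative and raise work_mem only for the session or transaction that runs the big query. A minimal sketch (the 32MB/256MB values just mirror the numbers discussed above):

```sql
-- Keep the global default modest; every sort/hash node in every backend
-- can claim up to work_mem, so a large default multiplies quickly.
SET work_mem = '32MB';

-- For the one known-heavy query, raise it only for this transaction;
-- SET LOCAL reverts automatically at COMMIT or ROLLBACK.
BEGIN;
SET LOCAL work_mem = '256MB';
-- ... run the large reporting query here ...
COMMIT;
```

This keeps the worst-case aggregate memory bounded while still letting the problem query sort in memory.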
>
> Btw, is there some way to determine up front how many sort or hash
> operations will be running in parallel for a given query?
EXPLAIN is your friend in that respect.
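Concretely: each Sort, Hash, or HashAggregate node in the plan can use up to work_mem on its own, so counting those nodes gives a rough upper bound of (node count) × work_mem for the query. A sketch with hypothetical table names:

```sql
-- Hypothetical tables, purely for illustration.
EXPLAIN
SELECT a.id, count(*)
FROM   big_table a
JOIN   other_table b ON b.a_id = a.id
GROUP  BY a.id
ORDER  BY count(*) DESC;

-- In the output, count the "Sort", "Hash", and "HashAggregate" nodes.
-- EXPLAIN ANALYZE additionally shows whether each sort stayed in memory
-- ("Sort Method: quicksort") or spilled ("external merge Disk: ...").
```

Note that with EXPLAIN alone this is an estimate of potential memory use; only EXPLAIN ANALYZE reports what actually happened at run time.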
Erik Jones
Software Developer | Emma®
erik(at)myemma(dot)com
800.595.4401 or 615.292.5888
615.292.0777 (fax)
Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com