Re: Huge amount of memory consumed during transaction

From: henk de wit <henk53602(at)hotmail(dot)com>
To: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Huge amount of memory consumed during transaction
Date: 2007-10-12 21:09:35
Message-ID: BAY124-W27B12597CAD62CB7C05790F5A00@phx.gbl
Lists: pgsql-performance

> It looks to me like you have work_mem set optimistically large. This
> query seems to be doing *many* large sorts and hashes:

I have work_mem set to 256MB. Reading the PG documentation, I now realize that "several sort or hash operations might be running in parallel". That is most likely the problem, although I don't really understand why memory never seems to increase for any of the other queries (those not executed in a transaction). Some of those are at least the size of the query that is causing problems.
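To see why this bites, here is a purely illustrative back-of-the-envelope calculation (the count of six concurrent operations is an assumption, not taken from the actual plan): each active sort or hash node may claim up to work_mem on its own, so a single backend can balloon well past the setting itself.

```python
# Illustrative arithmetic only: the number of concurrently active
# sort/hash operations is a made-up assumption for this sketch.
work_mem_mb = 256      # the work_mem setting mentioned above
concurrent_ops = 6     # hypothetical sorts + hashes active at once

total_mb = concurrent_ops * work_mem_mb
print(f"up to ~{total_mb} MB for one backend")  # up to ~1536 MB
```

So a plan with only a handful of large sorts and hashes can push one session toward 1.5GB before the per-operation limit is ever exceeded.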

Btw, is there some way to determine up front how many sort or hash operations will be running in parallel for a given query?
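One rough way to answer that yourself (a sketch, not a built-in feature): newer PostgreSQL versions can emit the plan as JSON via `EXPLAIN (FORMAT JSON) <query>;`, and you can then count the node types that are each allowed to claim up to work_mem (Sort, Hash, HashAggregate, Materialize). The plan below is a made-up example, not output from the query in this thread:

```python
import json

# Node types that may each use up to work_mem (an approximation;
# the exact set depends on the PostgreSQL version).
MEMORY_HUNGRY = {"Sort", "Hash", "HashAggregate", "Materialize"}

def count_memory_nodes(node):
    """Recursively count memory-hungry nodes in one plan tree."""
    count = 1 if node.get("Node Type") in MEMORY_HUNGRY else 0
    for child in node.get("Plans", []):
        count += count_memory_nodes(child)
    return count

# Hypothetical EXPLAIN (FORMAT JSON) output for illustration.
sample_plan = json.loads("""
[{"Plan": {"Node Type": "Hash Join", "Plans": [
    {"Node Type": "Sort", "Plans": [{"Node Type": "Seq Scan"}]},
    {"Node Type": "Hash", "Plans": [{"Node Type": "Seq Scan"}]}]}}]
""")

n = count_memory_nodes(sample_plan[0]["Plan"])
work_mem_mb = 256
print(f"{n} memory-hungry nodes -> up to ~{n * work_mem_mb} MB")
# 2 memory-hungry nodes -> up to ~512 MB
```

Note this is an upper bound: not every counted node is necessarily active at the same moment, and some (like hashes that spill to disk) may use less than work_mem.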

Regards
