
Re: Questions on query planner, join types, and work_mem

From: Andres Freund <andres(at)anarazel(dot)de>
To: Peter Hussey <peter(at)labkey(dot)com>
Cc: pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Questions on query planner, join types, and work_mem
Date: 2010-07-27 23:57:41
Message-ID:
Lists: pgsql-performance

On Tue, Jul 27, 2010 at 04:08:16PM -0700, Peter Hussey wrote:
> Now for the questions:
> 1)  If we tell the customer to set his work_mem value to 500MB or 1GB in
> postgres.config, what problems might they see?  the documentation and the
> guidelines we received from Rupinder Singh in support suggest a much lower
> value, e.g. a max work_mem of 10MB.  Other documentation such as the "Guide
> to Posting Slow Query Questions" suggest at least testing up to 1GB.  What
> is a reasonable maximum to configure for all connections?
Well, that depends on the amount of expected concurrency and the
available memory. Obviously you can set it much higher in an OLAPish,
low-concurrency setting than in an OLTP environment.

That setting is quite complex to estimate, in my opinion. For one,
the actual usage depends on the complexity of the queries; for
another, to be halfway safe you have to use avail_mem/(max_connections
* max_nodes_of_most_complex_query), which is often a very pessimistic
and unusably low estimate.
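The pessimistic estimate above can be sketched in a few lines. All of the numbers here (8GB available, 100 connections, 8 memory-hungry nodes) are illustrative assumptions, not recommendations:

```python
# Rough sketch of the "halfway safe" work_mem estimate:
# divide available memory by the worst case, i.e. every connection
# simultaneously running the most node-heavy query.

def safe_work_mem_bytes(avail_mem_bytes, max_connections,
                        max_nodes_of_most_complex_query):
    return avail_mem_bytes // (max_connections *
                               max_nodes_of_most_complex_query)

avail_mem = 8 * 1024 ** 3  # assume 8 GB left over for query work
estimate = safe_work_mem_bytes(avail_mem,
                               max_connections=100,
                               max_nodes_of_most_complex_query=8)
print(estimate // (1024 ** 2), "MB per work_mem")  # prints: 10 MB per work_mem
```

With these assumptions the formula lands right at the ~10MB guideline mentioned in the question, which shows why the safe estimate is usually far below what an individual OLAP query could productively use.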

> 2) How is work_mem used by a query execution?  For example, does each hash
> table in an execution get allocated a full work_mem's worth of memory ?   Is
> this memory released when the query is finished, or does it stay attached to
> the connection or some other object?
Each node of the query plan can use up to one work_mem worth of
memory (sometimes a bit more). The memory is released after the query
finishes (or possibly earlier, depending on the structure of the
query). The specific allocation pattern and the implementation
details of malloc influence how and when that memory is actually
returned to the OS.
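Because the limit applies per node rather than per query, a single complex query can transiently use several multiples of work_mem. A minimal sketch of that arithmetic (the plan shape and setting below are made-up assumptions):

```python
# work_mem caps each sort/hash node individually, so a query's
# worst-case peak is roughly (memory-hungry nodes) x work_mem.
# This plan shape is a hypothetical example, not a real EXPLAIN output.

work_mem_mb = 256
memory_hungry_nodes = ["Hash Join", "Hash Join", "Sort", "HashAggregate"]

peak_mb = len(memory_hungry_nodes) * work_mem_mb
print(f"worst-case peak: {peak_mb} MB")  # prints: worst-case peak: 1024 MB
```

This is why sizing work_mem against a single query's needs can still overcommit memory once concurrency and plan complexity are taken into account.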

> 3) is there a reason why the planner doesn't seem to recognize the condition
> when the hash table won't fit in the current work_mem, and choose a
> low-memory plan instead?
Hard to say without more information. Bad estimates, maybe? Best to
show your query plan (EXPLAIN ANALYZE), the table definitions, and
some details about the hardware (e.g. whether it has 1GB of memory or

Next: Tom Lane, 2010-07-28 00:05:02, Re: Questions on query planner, join types, and work_mem
Previous: Peter Hussey, 2010-07-27 23:08:16, Questions on query planner, join types, and work_mem
