Re: How much work_mem to configure...

From: Bill Moran <wmoran(at)collaborativefusion(dot)com>
To: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
Cc: "Jessica Richard" <rjessil(at)yahoo(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: How much work_mem to configure...
Date: 2008-07-07 11:15:46
Message-ID: 20080707071546.28d25ead.wmoran@collaborativefusion.com
Lists: pgsql-performance

In response to "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>:

> On Sat, Jul 5, 2008 at 5:24 AM, Jessica Richard <rjessil(at)yahoo(dot)com> wrote:
> > How can I tell if my work_mem configuration is enough to support all
> > Postgres user activities on the server I am managing?
> >
> > Where do I find an indication that the number is lower than needed?
>
> You kinda have to do some math with fudge factors involved. As
> work_mem gets smaller, sorts spill over to disk and get slower, and
> hash aggregates and hash joins get avoided because their hash tables
> need to fit in memory.
>
> As you increase work_mem, sorts can start happening in memory (or with
> less disk writing), and larger and larger sets can be handled by hash
> aggregates and hash joins because they fit in memory.
>
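
One way to see which of those cases a given query is hitting: on 8.3,
EXPLAIN ANALYZE reports how each sort was actually done.  The table,
column and numbers below are just placeholders:

  EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;
  ...
    Sort Method:  quicksort  Memory: 4096kB      <- fit within work_mem
    Sort Method:  external merge  Disk: 13696kB  <- spilled to disk
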
> But there's a dark side to work_mem being too large, and that is that
> you can run your machine out of free memory with too many large sorts
> happening, and then the machine will slow to a crawl as it swaps out
> the very thing you're trying to do in memory.
>
> So, I tend to plan for about 1/4 of memory used for shared_buffers,
> and up to 1/4 used for sorts, so there's plenty of head room left for
> the OS to cache files, which is also important for performance. If
> you plan
> on having 20 users accessing the database at once, then you figure
> each one might on average run a query with 2 sorts, and that you'll be
> using a maximum of 20*2*work_mem for those sorts etc...
>
> If it's set to 8M, then you'd get approximately 320 Meg max used by
> all the sorts flying at the same time. You can see why high work_mem
> and high max_connections settings together can be dangerous, and why
> pooling connections to limit the possibility of such a thing is useful
> too.
>
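
Just to make that arithmetic concrete, something like the following
(the numbers are purely illustrative, for a hypothetical 8GB box):

  # postgresql.conf
  shared_buffers  = 2GB    # ~1/4 of RAM
  work_mem        = 8MB
  max_connections = 20

  # worst case for sorts: 20 connections * 2 sorts * 8MB = 320MB,
  # comfortably under a 1/4-of-RAM (2GB) sort budget
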
> Generally it's a good idea to keep it in the 4 to 16 meg range on most
> machines to prevent serious issues, but if you're going to allow 100s
> of connections at once, then you need to look at limiting it based on
> how much memory your server has.
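
It's also worth remembering that work_mem can be raised for a single
session or query rather than globally, so a handful of big reporting
queries don't force a large server-wide setting.  A sketch (the value
is just an example):

  SET work_mem = '256MB';
  -- run the one big sort/aggregate query here
  RESET work_mem;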

I do have one thing to add: if you're using 8.3, there's a log_temp_files
config variable that you can use to monitor when your sorts spill over
onto disk. It doesn't change anything that Scott said; it simply gives
you another way to monitor what's happening and thus have better
information to tune by.
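
Roughly, that looks like this in postgresql.conf (0 logs every temp
file; a positive value is a size threshold in kB):

  log_temp_files = 0

Spilled sorts then show up in the server log as something like:

  LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp12345.0", size 40000000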

--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

wmoran(at)collaborativefusion(dot)com
Phone: 412-422-3463x4023
