
work_mem / maintenance_work_mem maximums

From: Stephen Frost <sfrost(at)snowman(dot)net>
To: pgsql-hackers(at)postgresql(dot)org
Subject: work_mem / maintenance_work_mem maximums
Date: 2010-09-20 16:51:11
Message-ID:
Lists: pgsql-hackers

  After watching a database import go abysmally slow on a pretty beefy
  box with tons of RAM, I got annoyed and went to hunt down why in the
  world PG wasn't using but a bit of memory.  Turns out to be a well-known
  and long-standing issue: allocations through palloc/repalloc are capped
  at MaxAllocSize (just under 1GB), so any work_mem or
  maintenance_work_mem setting above that is silently ignored.

  Now, we could start by fixing guc.c to correctly set the maximum for
  these GUCs to MaxAllocSize/1024, for starters; then at least our users
  would find out, when they try to set a higher value, that it's not
  going to be used.  That, in my mind, is a pretty clear bug fix.  Of
  course, it doesn't help us poor data-warehousing bastards with 64G+
  machines.

  Sooo..  I don't know much about why the limit is there, but based on
  the comments, I'm wondering if we could just move it to a more 'sane'
  place than the-function-we-use-to-allocate.  If we need a hard limit
  due to TOAST, let's enforce it there, but I'm hopeful we can work out
  a way to get rid of this limit in repalloc and let sorts and the like
  (uh, index creation) use whatever memory the user has decided they
  should be able to use.




