Memory Allocation

From: "Ryan Hansen" <ryan(dot)hansen(at)brightbuilders(dot)com>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: Memory Allocation
Date: 2008-11-26 22:09:55
Message-ID: 011101c95013$b6fca0a0$24f5e1e0$@hansen@brightbuilders.com
Lists: pgsql-performance

Hey all,

This may be more of a Linux question than a PG question, but I'm wondering
if any of you have successfully allocated more than 8 GB of memory to PG
before.

I have a fairly robust server running Ubuntu Hardy Heron with 24 GB of
memory, and I've tried to commit half of that to PG's shared buffers, but it
seems to fail. I'm raising the kernel shared memory limits accordingly using
sysctl, which seems to work fine, but when I set shared_buffers in PG and
restart the service, it fails if the value is above about 8 GB. I actually
have it currently set at 6 GB.
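
For reference, this is roughly the shape of what I'm doing; the exact values
below are illustrative rather than copied from my config:

    # /etc/sysctl.conf -- raise the kernel shared memory limits so a
    # single segment can hold a 12 GB shared_buffers (illustrative values)
    kernel.shmmax = 13958643712    # max size of one segment, in bytes (13 GB)
    kernel.shmall = 3407872        # total shared memory, in 4 kB pages

    $ sudo sysctl -p               # apply the settings without a reboot

    # postgresql.conf
    shared_buffers = 12GB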

I don't have the exact failure message handy, but I can certainly get it if
that helps. Mostly I'm just looking to find out whether there's some general
reason it would fail, such as an inherent kernel or DB limitation that I'm
unaware of.

If it matters, this DB is going to be hosting and processing hundreds of GB,
and eventually TB, of data. It's a heavy read/write system, but not
transactional processing: mostly data file parsing (Python/bash) and bulk
loading. Obviously the disks already get hit pretty hard, so I want to make
the most of the large amount of available memory wherever possible, and I'm
trying to tune in that direction.

Any info is appreciated.

Thanks!
