Enough RAM for entire Database.. cost aside, is this going to be fastest?

From: "Andy B" <abhousehuntRE-M--O--V-E(at)blueyonder(dot)co(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Enough RAM for entire Database.. cost aside, is this going to be fastest?
Date: 2004-06-30 14:44:18
Message-ID: zlAEc.3401$vC4.613@fe1.news.blueyonder.co.uk

Hello,

Sorry for this newbish question.

Briefly, my problem:
------------------
I expect the database I'm working on to reach something in the order of
12-16 GB, and I am interested in understanding as much as I can about
how I can make this go as fast as possible on a Linux system. I haven't run
such a large database before. The nature of the database is such that
successive queries are very likely to lead to poor cache performance.

I have lots of experience with disks, operating systems, caching, virtual
memory, RAM etc. - just none running gargantuan databases very quickly!
-------------------

I've read all the performance tuning and configuration stuff I can, but
there is one basic question I can't get an answer to:

My question:
--------------------
If I can afford to have the entire database resident in RAM (held in the
PostgreSQL shared buffer cache and actually sitting in physical memory, not
swapped out) without impacting other services on the machine, is there any
reason why I shouldn't do it, other than cost? (!)
--------------------

Basically, I'm finding it almost impossible to predict 'how much RAM is
right'. I know I don't need the *entire* database to sit in RAM, and the
answer depends on many things - I/O speed, the nature of the queries and so
on. But once you reach a certain amount of RAM (specifically, the amount at
which nothing needs to be swapped out), surely things become a bit more
certain... or do they?
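
As a rough yardstick, a query along these lines should at least give an idea
of the on-disk footprint (just a sketch - it assumes the default 8 kB block
size and reasonably fresh VACUUM/ANALYZE statistics):

  -- Approximate on-disk size of the largest tables and indexes.
  -- relpages is only as current as the last VACUUM/ANALYZE, and the
  -- 8192 multiplier assumes the default 8 kB block size.
  SELECT relname,
         relpages * 8192 AS approx_bytes
  FROM pg_class
  WHERE relkind IN ('r', 'i', 't')   -- tables, indexes, TOAST tables
  ORDER BY relpages DESC
  LIMIT 20;

  -- Same caveat, for the database as a whole:
  SELECT sum(relpages) * 8192 AS approx_total_bytes
  FROM pg_class;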

So, could I, for example, set up PostgreSQL with a 16 GB shared buffer cache
and expect the backend processes to fly like the wind (CPU, RAM
and disk write speed permitting)?
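
(To make that concrete, the sort of postgresql.conf sketch I have in mind is
below - the numbers are purely illustrative, both settings are counted in
8 kB pages, and the kernel's SysV shared memory limits would need raising to
allow a segment that large:)

  shared_buffers = 2097152         # 2097152 x 8 kB buffers = 16 GB
  effective_cache_size = 1048576   # ~8 GB of OS cache the planner may assume
  # kernel.shmmax / kernel.shmall would need raising via sysctl to match.

Whether handing that much to shared_buffers beats simply letting the OS page
cache hold the data is exactly the kind of thing I'm hoping someone can
comment on.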

I understand that writes can delay the progression of updates if things are
set up in a certain way, and that's OK - I'm really just wondering whether
there are some *other* boundaries that will get in the way. I've read that I
should be able to configure a Linux box (with the right processors) to
address up to 64 GB of RAM, and I'm aware of more esoteric boxes like the
SGI Altix 3000 which can go far higher, but maybe that's overkill.

If there are any resources out there that describe others' experiences of
trying to coerce a huge database to run largely from RAM, I'd be grateful
for the links.

Many thanks
Andy