What Scott said ... seconded, all of it.
I'm running one 500GB database on a 64-bit, 8GB VMware virtual machine, with
2 vcores, PG 8.3.9 with shared_buffers set to 2GB, and it works great.
However, it's a modest workload, most of the database is archival for data
mining, and the "working set" for routine OLTP is pretty modest and easily
fits in the 2GB, and it's backed by a pretty decent EMC Clariion
FibreChannel array. Not the typical case.
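For the curious, the relevant postgresql.conf lines on that box look something like this -- shared_buffers is the real value quoted above; the other settings are just plausible companions for an 8GB machine, not copied from my actual config:

```
# PG 8.3 on an 8GB VM (only shared_buffers is the actual value from above;
# the rest are illustrative assumptions for a box this size)
shared_buffers = 2GB            # ~25% of RAM; holds the OLTP working set
effective_cache_size = 6GB      # planner hint: shared_buffers + OS page cache
```

effective_cache_size doesn't allocate anything; it just tells the planner how much caching to expect, so it's cheap to set generously on a RAM-heavy box.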
For physical x86 servers, brand name (e.g. Kingston) ECC memory is down to
$25 per GB in 4GB DIMMs, and $36 per GB in 8GB DIMMs .... dollars to
doughnuts you have a server somewhere with 2GB or 4GB parts that can be
pulled and replaced with double the density, et voilà, an extra 16GB of RAM
for about $500.
Lots and lots of RAM is absolutely, positively a no-brainer when trying to
make a DB go fast. If for no other reason than people get all starry eyed at
GHz numbers, almost all computers tend to be CPU heavy and RAM light in
their factory configs. I build a new little server for the house every 3-5
years, using desktop parts, and give it a mid-life upgrade with bigger
drives and doubling the RAM density.
Big banks running huge Oracle OLTP setups use the strategy of essentially
keeping the whole thing in RAM .... HP shifts a lot of Superdomes maxed out
with 2TB of RAM into this market - and that RAM costs a lot more than $25 a
GB.