High volume inserts - more disks or more CPUs?

From: "Guy Rouillier" <guyr(at)masergy(dot)com>
To: "PostgreSQL General" <pgsql-general(at)postgresql(dot)org>
Subject: High volume inserts - more disks or more CPUs?
Date: 2004-12-13 06:16:43
Message-ID: CC1CF380F4D70844B01D45982E671B2348E496@mtxexch01.add0.masergy.com
Lists: pgsql-general

Seeking advice on system configuration (and yes, I have read the techdocs).
We are converting a data collection system from Oracle to PostgreSQL
8.0. We are currently getting about 64 million rows per month; data is
put into a new table each month. The number of simultaneous connections
is very small: one that does all these inserts, and < 5 others that
read.
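
For concreteness, the monthly-table arrangement looks roughly like the following (table and column names here are simplified stand-ins for our actual schema, and the bulk load is shown with COPY rather than row-at-a-time INSERTs):

```sql
-- Illustrative sketch only: a fresh table each month, bulk-loaded.
CREATE TABLE samples_2004_12 (
    collected_at timestamp NOT NULL,
    device_id    integer   NOT NULL,
    value        numeric
);

-- ~64 million rows/month arrive through a single writer connection.
COPY samples_2004_12 FROM '/data/samples_2004_12.csv' WITH CSV;
```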

We are trying to identify a server for this. Options are a 4-way Opteron
with 4 SCSI disks, or a 2-way Opteron with 6 SCSI disks. The 4-CPU box
currently has 16 GB of memory and the 2-CPU 4 GB, but we can move that
memory around as necessary.

(1) Would we be better off with more CPUs and fewer disks or fewer CPUs
and more disks?

(2) The techdocs suggest starting with 10% of available memory for
shared buffers, which would be 1.6 GB on the 4-way. But I've seen posts
here saying that anything more than 10,000 shared buffers (80 MB)
provides little or no improvement. Where should we start?
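
To make the two suggestions concrete, here is how they would translate into postgresql.conf (8.0 counts shared_buffers in 8 kB pages; the exact values below are just my arithmetic, not a recommendation):

```
# Option A: ~10% of 16 GB, per the techdocs suggestion
#shared_buffers = 200000       # roughly 1.6 GB

# Option B: the more conservative figure from list postings
shared_buffers = 10000         # roughly 80 MB
```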

(3) If we go with more disks, should we attempt to split tables and
indexes onto different drives (i.e., tablespaces), or just put all the
disks in hardware RAID5 and use a single tablespace?
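
If it helps frame the question, the split-drives alternative would look something like this (tablespaces are new in 8.0, which is partly why we ask; paths and names below are hypothetical):

```sql
-- Sketch of separating heap data and indexes onto different spindles.
CREATE TABLESPACE data_ts  LOCATION '/disk1/pg_data';
CREATE TABLESPACE index_ts LOCATION '/disk2/pg_index';

CREATE TABLE samples_2004_12 (
    collected_at timestamp NOT NULL,
    device_id    integer   NOT NULL,
    value        numeric
) TABLESPACE data_ts;

CREATE INDEX samples_2004_12_ts_idx
    ON samples_2004_12 (collected_at) TABLESPACE index_ts;
```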

I appreciate all suggestions.

--
Guy Rouillier
