From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: Matthew Wakeling <matthew(at)flymine(dot)org>
Cc: "pgsql-performance\(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Need help with 8.4 Performance Testing
Date: 2008-12-10 00:54:37
Message-ID: 87y6yohmsy.fsf@oxford.xeocode.com
Lists: pgsql-performance
Matthew Wakeling <matthew(at)flymine(dot)org> writes:
> On Tue, 9 Dec 2008, Scott Marlowe wrote:
>> I wonder how many hard drives it would take to be CPU bound on random
>> access patterns? About 40 to 60? And probably 15k / SAS drives to
>> boot. Cause that's what we're looking at in the next few years where
>> I work.
>
> There's a problem with that thinking. That is, in order to exercise many
> spindles, you will need to have just as many (if not more) concurrent requests.
> And if you have many concurrent requests, then you can spread them over
> multiple CPUs. So it's more a case of "How many hard drives PER CPU". It also
> becomes a matter of whether Postgres can scale that well.
Well:
$ units
2445 units, 71 prefixes, 33 nonlinear units
You have: 8192 byte/5ms
You want: MB/s
* 1.6384
/ 0.61035156
At 1.6MB/s per drive, if Postgres is cpu-bound doing sequential scans at
1GB/s, you'll need about 640 drives doing random I/O to keep one cpu satisfied
-- assuming you have perfect read-ahead and the read-ahead itself doesn't add
cpu overhead. Both of which are false of course, but at least in theory that's
what it'll take.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's On-Demand Production Tuning