During the testing that I did when moving from pg7 to pg8 a few years back, I didn't notice any particular performance
increase on a similarly-configured server.
That is, we've got 14 disks (15k rpm) striped in a single RAID10 array. Moving the logs to an internal RAID
versus leaving them on the "main" storage array didn't impact my performance noticeably either way.
Now, please note:
SUCH A STATEMENT DEPENDS ENTIRELY ON YOUR USE-CASE SCENARIO.
That is, these tests reflect my workload; if you're using PG in a much different
manner, you may see different results. It's also quite possible that our 50% expansion over the past 3
years has had some effect, but I'm not in a position to retest that at this time.
We specifically chose to put our logs on the fiber SAN in case the underlying machine went down.
Disaster recovery for that box would therefore be:
a) New machine with O/S and pg installed.
b) Mount SAN
c) Start PG. Everything (including logs) is available to you.
It is, in essence, our "externally-stored PG data" in its entirety.
On the 10k vs 15k rpm disks, there's a _lot_ to be said about that. I don't want to start a flame war here,
but 15k versus 10k rpm hard drives does NOT equate to a 50% improvement in read/write times, to say
the VERY least.
"Average seek time" is the time it takes for the head to move from random place A to
random place B on the drive. The rotational latency of a drive can be easily calculated.
A 15k drive rotates roughly 250 times per second, or 4 msec per rotation, versus a 10k
drive at about 167 rotations per second, or 6 msec per rotation.
On average the head waits half a rotation for the right sector to come around, so rotational
latency adds 2 msec for a 15k drive and 3 msec for a 10k drive.
So, your true seek time is the "average seek time" of the drive + the rotation listed above.
So, if your average seek time is something REALLY good (say 4 msec) for each drive, your 15k
drive would have a 6 msec real-world service time, or around 166 IOPS, and your 10k drive
7 msec, or around 143 IOPS. In that particular case, at a very low level, you'd be getting
about a 16% improvement.
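The per-drive arithmetic above can be sketched in a few lines (Python purely for illustration; the 4 msec average seek time is the assumed figure from the example, not a measured one):

```python
def drive_iops(rpm, avg_seek_ms):
    """Rough random-I/O rate for a single spinning drive."""
    ms_per_rotation = 60_000.0 / rpm          # e.g. 4 ms at 15k rpm
    rotational_latency = ms_per_rotation / 2  # head waits half a turn on average
    service_time_ms = avg_seek_ms + rotational_latency
    return 1000.0 / service_time_ms           # operations per second

print(drive_iops(15000, 4.0))  # ~166 IOPS (6 ms per op)
print(drive_iops(10000, 4.0))  # ~143 IOPS (7 ms per op)
```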
HOWEVER, we're not talking about a single drive here. We're talking about a RAID10 of 12
drives (6 + 6 mirror, I assume) versus 24 drives (12 + 12 mirror, I assume). Since every write
hits both halves of a mirror pair, random-write throughput scales with half the spindles: the
max IOPS of your first RAID would be around 1000, while the max IOPS of your second RAID
with the "slower" drives would be around 1700.
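Extending the per-drive numbers to the two arrays, as a rough sketch (the 166 and 143 IOPS figures come from the single-drive example above, and the half-the-spindles rule assumes random writes on RAID10):

```python
def raid10_write_iops(total_drives, per_drive_iops):
    # Every write goes to a drive and its mirror, so effective
    # random-write throughput scales with half the spindles.
    return (total_drives // 2) * per_drive_iops

print(raid10_write_iops(12, 166))  # 12x 15k drives: ~1000 IOPS
print(raid10_write_iops(24, 143))  # 24x 10k drives: ~1700 IOPS
```

Random reads can be served from either side of a mirror, so read throughput would scale closer to the full spindle count in both cases.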
Hope this helps.
I _really_ don't want to start a war with this. If you're confused how I got these
numbers, please contact me directly.
----- "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com> wrote:
> On Thu, Apr 29, 2010 at 11:26 AM, Anj Adu <fotographs(at)gmail(dot)com> wrote:
> > All the disks are usually laid out in a single RAID 10 stripe. There
> > are no dedicated disks for the OS/WAL as storage is a premium
> You should at least investigate the performance difference of having a
> separate volume for WAL files on your system. Since WAL files are
> mostly sequential, and db access is generally random, the WAL files
> can run really quickly on a volume that does nothing else but handle
> WAL writes sequentially. Given the volume you're handling, I would
> expect that storage is not any more important than performance.
> The fact that you're asking whether to go with 12 or 24 600G disks
> shows that you're willing to give up a little storage for performance.
> I would bet that the 24 10k disks with one pair dedicated for OS /
> pg_xlog would be noticeably faster than any single large volume config
> you'd care to test, especially for lots of active connections.