On Aug 17, 2011, at 4:16 PM, Greg Smith wrote:
> On 08/17/2011 02:26 PM, Ogden wrote:
>> I am using bonnie++ to benchmark our current Postgres system (on RAID 5) with the new one we have, which I have configured with RAID 10. The drives are the same (SAS 15K). I tried the new system with ext3 and then XFS but the results seem really outrageous as compared to the current system, or am I reading things wrong?
>> The benchmark results are here:
> Congratulations--you're now qualified to be a member of the "RAID5 sucks" club. You can find other members at http://www.miracleas.com/BAARF/BAARF2.html. Reasonable read speeds and just terrible write ones are expected if that's on your old hardware. Your new results are what I would expect from the hardware you've described.
> The only thing that looks weird are your ext4 "Sequential Output - Block" results. They should be between the ext3 and the XFS results, not far lower than either. Normally this only comes from using a bad set of mount options. With a battery-backed write cache, you'd want to use "nobarrier" for example; if you didn't do that, that can crush output rates.
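Barriers are normally controlled through mount options. A minimal /etc/fstab sketch, assuming the new array shows up as /dev/sdb1 and is mounted at /var/lib/pgsql (both the device name and mount point here are placeholders, not taken from the thread):

```
# /etc/fstab fragment (illustrative; device and mount point are assumptions)
# ext4 with write barriers disabled -- only safe behind a battery-backed write cache
/dev/sdb1  /var/lib/pgsql  ext4  noatime,nobarrier  0  0

# XFS equivalent on kernels of this era, which also accepted "nobarrier"
#/dev/sdb1  /var/lib/pgsql  xfs   noatime,nobarrier  0  0
```

After remounting, `mount | grep pgsql` should show the active options, so you can confirm the benchmark actually ran with the settings you intended.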
Isn't this very dangerous? I have the Dell PERC H700 card, which I see has a 512MB cache. Is this the same thing, and good enough to switch to nobarrier? I'm just worried that with a sudden power loss, data could be lost under this option.
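Whether nobarrier is safe hinges on that 512MB cache actually being protected by a working battery. On a PERC H700 this can usually be checked with Dell OpenManage or the LSI MegaCli utility (the H700 is LSI-based); the commands below are a sketch assuming one of those tools is installed:

```
# Dell OpenManage: report controller battery state
omreport storage battery

# LSI MegaCli: report battery backup unit (BBU) status
MegaCli -AdpBbuCmd -GetBbuStatus -aALL
```

If the battery is absent, failed, or in a learn cycle with the cache forced to write-through, disabling barriers gives up crash safety for no benefit.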
I did not do that with XFS and it did quite well. I know it comes down to my app and more testing, but in your experience, what is usually a good filesystem to use? I keep reading conflicting things.