Most of our Solaris users have been, I think, following Jignesh's advice
from his benchmark tests and setting the ZFS recordsize to 8K for the
data zpool. However, I've discovered that this is sometimes a serious
problem for throughput.
For example, having the recordsize set to 8K on a Sun 4170 with 8 drives
recently gave me these appalling Bonnie++ results:
Version 1.96       ------Sequential Output------ --Sequential Input-
Concurrency 4      -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
db111          24G           260044 33  62110 17            89914 15
Latency                        6549ms     4882ms              3395ms
I know that's hard to read. What it's saying is:

Seq Writes: 260 MB/s combined
Seq Reads: 89 MB/s combined
Read Latency: 3.4s
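For anyone who wants to reproduce this, output in that shape typically
comes from an invocation along these lines (the target directory and the
skipped per-char/file-creation tests are my guesses, not details from
the original run):

    # hypothetical invocation; -c 4 matches "Concurrency 4" above,
    # -f skips the slow per-character tests, -n 0 skips file creation
    bonnie++ -d /pgdata -s 24g -c 4 -f -n 0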
My best guess is that we were overloading the array/drives with commands
for all those small blocks; the behavior we observed (stuttering I/O,
high latency) is certainly consistent with that.
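If you want to check for the same thing, on Solaris you can watch the
device queues while the benchmark runs, e.g.:

    # extended device stats, logical names, skip idle devices,
    # 5-second intervals
    iostat -xnz 5

Long service times (asvc_t) and high %b across the data drives during
a sequential scan point at the same small-block overload.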
Anyway, since this is a DW-like workload, we just bumped the recordsize
up to 128K and the performance issues went away ... reads are now over
300 MB/s.
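For reference, checking and changing this is just the following (the
dataset name here is made up):

    # show the current record size for the data filesystem
    zfs get recordsize tank/pgdata
    # raise it for sequential/DW workloads; this only affects newly
    # written files, so existing tables keep their old block size
    # until they're rewritten (e.g. via a dump/reload)
    zfs set recordsize=128k tank/pgdata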
-- Josh Berkus
PostgreSQL Experts Inc.