Alan Stange wrote:
> Another data point.
> We had some down time on our system today to complete some maintenance
> work. I took the opportunity to rebuild the 700GB file system using
> XFS instead of ReiserFS.
> One iostat output for 30 seconds is
> avg-cpu:  %user   %nice    %sys  %iowait   %idle
>            1.58    0.00   19.69    31.94   46.78
>
> Device:     tps   kB_read/s   kB_wrtn/s   kB_read   kB_wrtn
> sdd      343.73   175035.73      277.55   5251072      8326
> while doing a select count(1) on the same large table as before.
> Subsequent iostat output all showed that this data rate was being
> maintained. The system is otherwise mostly idle during this measurement.
> The sequential read rate is 175MB/s. The system is the same as earlier,
> one cpu is idle and the second is ~40% busy doing the scan and ~60%
> idle. This is postgresql 8.1rc1, 32KB block size. No tuning except
> for using a 1024KB read ahead.
> The peak speed of the attached storage is 200MB/s (a 2Gb/s fiber channel
> controller). I see no reason why this configuration wouldn't generate
> higher IO rates if a faster IO connection were available.
> Can you explain again why you think there's an IO ceiling of 120MB/s
> because I really don't understand?
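For reference, the 1024KB read ahead mentioned above would typically be set on Linux with blockdev, which takes the size in 512-byte sectors (a sketch; the device name is taken from the iostat output above, adjust for your system):

```shell
# blockdev --setra expects 512-byte sectors: 1024KB = 1024*1024/512 = 2048
blockdev --setra 2048 /dev/sdd
blockdev --getra /dev/sdd    # verify the new readahead setting
```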
I think what is going on here is that Luke's observed 120MB/s ceiling
comes from measurements made with the default 8K block size - it looks
like we can get higher rates with 32K.
A quick test on my P3 system seems to support this (the numbers are a
bit feeble, but the difference is interesting):
The test is SELECT 1 FROM table, stopping Pg and unmounting the file
system after each test (so each run reads from disk rather than the OS
buffer cache).
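A single timed run of the procedure above might look roughly like this (a sketch only; the data directory, mount point, database, and table names are all assumptions, not from the original post):

```shell
#!/bin/sh
# One timed run: restart Pg and remount the filesystem first so the
# scan reads from disk rather than the OS buffer cache.
pg_ctl -D /usr/local/pgsql/data stop          # data directory assumed
umount /data && mount /data                   # mount point assumed
pg_ctl -D /usr/local/pgsql/data start -w
time psql -d test -c 'SELECT 1 FROM big_table' > /dev/null
```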
8K block size (181819 pages ~= 4x the 32K page count, same table):
  25 s elapsed
  48 % idle from vmstat (dual cpu system)
  70 % busy from gstat (FreeBSD GEOM io monitor)
  181819 pages in relation
  56 MB/s effective IO throughput

32K block size:
  23 s elapsed
  44 % idle from vmstat
  80 % busy from gstat
  45249 pages in relation
  60 MB/s effective IO throughput
I re-ran these several times - very repeatable (+/- 0.25 seconds).
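The "effective IO throughput" figures follow from pages x block size / elapsed time; assuming 1 MB = 2^20 bytes (which matches the 8K figure closely), the arithmetic is:

```shell
# 8K case: 181819 pages * 8192 bytes, read in 25 s
awk 'BEGIN { printf "%.1f MB/s\n", 181819 * 8192 / 25 / 1048576 }'
# prints 56.8 MB/s

# 32K case: 45249 pages * 32768 bytes, read in 23 s
awk 'BEGIN { printf "%.1f MB/s\n", 45249 * 32768 / 23 / 1048576 }'
# prints 61.5 MB/s (near the quoted 60; the run times vary by ~0.25 s)
```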
This is FreeBSD 6.0 with the readahead set to 16 blocks, UFS2 filesystem
created with 32K blocksize (both cases). It might be interesting to see
the effect of using 16K (the default) with the 8K Pg block size; I would
expect this to widen the gap.