From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: 8K recordsize bad on ZFS?
Date: 2010-05-10 21:16:46
Message-ID: 4BE877BE.8000908@agliodbs.com
Lists: pgsql-performance
> That still is consistent with it being caused by the files being
> discontiguous. Copying them moved all the blocks to be contiguous and
> sequential on disk and might have had the same effect even if you had
> left the settings at 8kB blocks. You described it as "overloading the
> array/drives with commands" which is probably accurate but sounds less
> exotic if you say "the files were fragmented, causing lots of seeks, so
> we saturated the drives' iops capacity". How many iops were
> you doing before and after anyways?
Don't know. This was a client system, and once we got the target
numbers, they stopped wanting me to run tests on it. :-(
Note that this was a brand-new system, so there wasn't much time for
fragmentation to occur.
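
The quoted point about fragmentation can be sketched with back-of-envelope arithmetic. All the drive numbers below are illustrative assumptions, not figures from this thread: a single spindle capable of only ~150 random iops moves very little data when every 8 kB page requires its own seek, versus the same drive reading contiguous blocks.

```python
# Rough sketch: throughput of one disk doing random 8 kB reads vs sequential.
# All numbers are illustrative assumptions, not measurements from the thread.
RANDOM_IOPS = 150          # assumed random iops for a 7200 rpm drive
BLOCK_SIZE = 8 * 1024      # PostgreSQL's 8 kB page size, in bytes
SEQ_MBPS = 100.0           # assumed sequential throughput of the same drive

random_mbps = RANDOM_IOPS * BLOCK_SIZE / 1_000_000
print(f"fragmented 8K reads: ~{random_mbps:.1f} MB/s")  # ~1.2 MB/s
print(f"contiguous reads:    ~{SEQ_MBPS:.0f} MB/s")
```

Under these assumptions a fragmented file reads nearly two orders of magnitude slower than a contiguous one, which is consistent with the copy (which rewrote the files contiguously) restoring performance regardless of recordsize.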
--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
Next message: Carlo Stonebanks, 2010-05-11 05:32:28, "Function scan/Index scan to nested loop"
Previous message: Greg Stark, 2010-05-10 20:01:03, "Re: 8K recordsize bad on ZFS?"