
Re: 8K recordsize bad on ZFS?

From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: 8K recordsize bad on ZFS?
Date: 2010-05-10 21:16:46
Message-ID: 4BE877BE.8000908@agliodbs.com
Lists: pgsql-performance
> That still is consistent with it being caused by the files being
> discontiguous. Copying them moved all the blocks to be contiguous and
> sequential on disk and might have had the same effect even if you had
> left the settings at 8kB blocks. You described it as "overloading the
> array/drives with commands" which is probably accurate but sounds less
> exotic if you say "the files were fragmented, causing lots of seeks, so
> we saturated the drives' iops capacity". How many iops were
> you doing before and after anyways?

Don't know.  This was a client system, and once we got the target
numbers, they stopped wanting me to run tests on it.  :-(

Note that this was a brand-new system, so there wasn't much time for
fragmentation to occur.
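
For a rough sense of scale, here's a back-of-the-envelope sketch of the
seek-bound argument quoted above, in Python.  The per-drive iops and
drive-count figures are illustrative assumptions, not measurements from
this system:

    # Why fragmentation can saturate an array's iops long before its
    # bandwidth.  Assumed figures (hypothetical, not from this thread):
    IOPS_PER_DRIVE = 150      # ~random iops for one 7200 RPM spindle
    DRIVES = 4                # assumed array size

    for record_kib in (8, 128):
        mb_per_s = IOPS_PER_DRIVE * DRIVES * record_kib / 1024
        print(f"{record_kib:>3} KiB records, fully random: ~{mb_per_s:.0f} MB/s")

    # 8 KiB records:   ~4.7 MB/s  -- seek-bound well below disk bandwidth
    # 128 KiB records: ~75 MB/s   -- same iops budget, 16x the data per seek

With small records and fragmented files, a nominally sequential scan
degrades into random reads, so the array hits its iops ceiling at a few
MB/s; copying the files back into contiguous blocks restores sequential
throughput.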

-- 
                                  -- Josh Berkus
                                     PostgreSQL Experts Inc.
                                     http://www.pgexperts.com
