From: Josh Berkus <josh(at)agliodbs(dot)com>
To: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: 8K recordsize bad on ZFS?
Date: 2010-05-12 01:01:48
Message-ID: 4BE9FDFC.40401@agliodbs.com
Lists: pgsql-performance
> Sure, but bulk load + random selects is going to *guarantee*
> fragmentation on a COW system (like ZFS, BTRFS, etc.) as the selects
> start to write out all the hint-bit-dirtied blocks in random order...
>
> i.e. it doesn't take long to make an originally nicely contiguous block
> random....
I'm testing with dd and Bonnie++, though, which create their own files.
For that matter, running an ETL procedure against a newly created database
was notably faster (2.5x) on the 128K recordsize than on the 8K one.
So I don't think fragmentation is the difference.
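For reference, a minimal sketch of the kind of dd comparison described above. The dataset name "tank/pgdata" and the test directory are placeholders (the thread doesn't name the actual pool), and the write size here is kept small; a real throughput test would use a file much larger than ARC/RAM.

```shell
#!/bin/sh
# Sketch: compare ZFS recordsize settings with a dd sequential write.
# "tank/pgdata" is a hypothetical dataset name, not from the thread.
# Run the test once per setting, e.g.:
#   zfs set recordsize=8K   tank/pgdata
#   zfs set recordsize=128K tank/pgdata

TESTDIR=${TESTDIR:-/tmp}    # point this at the dataset under test

# Sequential write in 8 KB chunks (PostgreSQL's block size);
# conv=fsync makes dd flush to disk before reporting throughput.
# Increase count well past RAM size for a meaningful benchmark.
dd if=/dev/zero of="$TESTDIR/ddtest" bs=8k count=4096 conv=fsync
rm -f "$TESTDIR/ddtest"
```

Bonnie++ would be pointed at the same directory (`bonnie++ -d "$TESTDIR"`) to exercise per-char and seek workloads that dd doesn't cover.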
--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com