From: Greg Stark <gsstark(at)mit(dot)edu>
To: Scott Carey <scott(at)richrelevance(dot)com>
Cc: Craig James <craig_james(at)emolecules(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Block at a time ...
Date: 2010-03-22 21:25:07
Message-ID: 407d949e1003221425i5ed0dd75x9203780cff236701@mail.gmail.com
Lists: pgsql-performance
On Mon, Mar 22, 2010 at 6:47 PM, Scott Carey <scott(at)richrelevance(dot)com> wrote:
> It's fairly easy to break. Just do a parallel import with, say, 16 concurrent tables being written to at once. Result? Fragmented tables.
>
FWIW, I did investigate this at one point and could not
demonstrate any significant fragmentation. But that was on Linux --
different filesystem implementations would have different success
rates. And there could be other factors as well, such as how full the
filesystem is or how old it is.
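
[A sketch of one way to test this claim yourself, not something from the
thread: it mimics the "16 concurrent writers" scenario with plain files
instead of Postgres tables, then inspects per-file extent counts with
filefrag from e2fsprogs on Linux. The directory layout, file count, and
chunk sizes are all illustrative assumptions.]

```shell
# Simulate 16 tables being appended to concurrently, as in Scott's scenario.
dir=$(mktemp -d)
for i in $(seq 1 16); do
  (
    # Each "table" grows in 8 chunks of 64 KB, interleaved with the others.
    for chunk in $(seq 1 8); do
      dd if=/dev/zero bs=64K count=1 >> "$dir/table_$i" 2>/dev/null
    done
  ) &
done
wait

ls -l "$dir" | head

# On Linux with e2fsprogs installed, a low extent count per file means the
# filesystem coalesced the interleaved appends (Greg's result); a high count
# means it fragmented (Scott's claim):
#   filefrag "$dir"/table_*
```

Whether the files end up fragmented depends on the filesystem's delayed-allocation behavior, which is why results differ across filesystems, as noted above.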
--
greg