Re: Fusion-io ioDrive

From: "Merlin Moncure" <mmoncure(at)gmail(dot)com>
To: "Jeffrey Baker" <jwbaker(at)gmail(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Fusion-io ioDrive
Date: 2008-07-07 13:08:08
Message-ID: b42b73150807070608vf04cea7t8b6cb7e171d48f82@mail.gmail.com
Lists: pgsql-performance

On Sat, Jul 5, 2008 at 2:41 AM, Jeffrey Baker <jwbaker(at)gmail(dot)com> wrote:
>>                           Service time percentile, millis
>>         R/W TPS  R-O TPS  50th  80th  90th  95th
>> RAID    182      673      18    32    42    64
>> Fusion  971      4792     8     9     10    11
>
> Someone asked for bonnie++ output:
>
> Block output: 495MB/s, 81% CPU
> Block input: 676MB/s, 93% CPU
> Block rewrite: 262MB/s, 59% CPU
>
> Pretty respectable. In the same ballpark as an HP MSA70 + P800 with
> 25 spindles.

You left off the 'seeks' portion of the bonnie++ results -- this is
actually the most important part of the test. Based on your tps
numbers, I'm expecting a seek figure equivalent to about 10 10K drives
configured in a RAID 10, or around 1000-1500 seeks/sec. They didn't
publish any prices, so it's hard to say whether this is cost-competitive.
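The back-of-envelope arithmetic behind that estimate can be written out. The per-drive figure is an assumption on my part (a 10K RPM drive is commonly quoted at roughly 100-150 random seeks/sec), not something from the bonnie++ output:

```python
# Assumed (not from the post): a 10K RPM drive does ~100-150 random seeks/sec.
drives = 10
seeks_per_drive_low, seeks_per_drive_high = 100, 150

# In a RAID 10, a random read can be serviced by any spindle,
# so random-read throughput scales with the full drive count.
low = drives * seeks_per_drive_low    # 1000
high = drives * seeks_per_drive_high  # 1500
print(f"expected seeks/sec: {low}-{high}")
```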

These numbers are indeed fantastic, disruptive even. If I were testing
the device for consideration in high-duty server environments, I would
be doing durability testing right now: slamming the database with
transactions (fsync on, etc.) and then cutting power to the device. I
would do this several times, making sure the software layer isn't
doing some mojo that is technically cheating.
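The principle of that test can be sketched in miniature: every write the storage stack has acknowledged as fsync'd must still be present after the power cut. This is only an illustration of the verify-after-crash idea, not pgbench or the database's actual WAL machinery; the file path and record format are made up for the example:

```python
import os
import tempfile

def append_fsynced(path, record):
    """Append one record and force it to stable storage before returning,
    mimicking what the database does for a committed transaction."""
    with open(path, "a") as f:
        f.write(record + "\n")
        f.flush()
        os.fsync(f.fileno())  # a device that fakes this can lose data on power loss

def verify(path, expected):
    """After the power cycle, every acknowledged record must still be there."""
    with open(path) as f:
        return f.read().splitlines() == expected

# Usage sketch: commit a batch, then (on real hardware) hard power-off here
# and re-run verify() after reboot, repeating many times.
path = os.path.join(tempfile.mkdtemp(), "commits.log")
records = [f"txn-{i}" for i in range(100)]
for r in records:
    append_fsynced(path, r)
print(verify(path, records))
```

On a real candidate device you would drive this through the database itself (fsync on) and yank power mid-run, since the interesting failures only show up when the drive's own caching is in the picture.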

I'm not particularly enamored of having a storage device plugged
directly into a PCI slot -- although I understand it's probably
necessary in the short term: flash changes all the rules, and you
can't expect it to run well behind mainstream hardware RAID
controllers. By shipping their own device they keep complete control
of the I/O stack up to the OS driver level.

I've been thinking for a while now that flash is getting ready to
explode into use in server environments. The outstanding questions I
see are:
*) is the write endurance problem truly solved (giving at least a 5-10
year lifetime)?
*) what are the true odds of catastrophic device failure? (the
industry claims they're low; we'll see)
*) will the flash random-write problem be solved in hardware or by
specialized solid-state write-caching techniques? At least currently,
it seems like software is filling that role.
*) do the software solutions really work? (unproven)
*) when are the major hardware vendors going to get involved? They
make a lot of money selling disks and supporting hardware (SANs, etc.).

merlin
