From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Greg Smith <gsmith(at)gregsmith(dot)com>
Cc: Ow Mun Heng <ow(dot)mun(dot)heng(at)wdc(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?
Date: 2009-10-21 06:25:29
Message-ID: dcc563d10910202325q363fdbc2u3249cd76ff162d63@mail.gmail.com
Lists: pgsql-general
On Wed, Oct 21, 2009 at 12:10 AM, Greg Smith <gsmith(at)gregsmith(dot)com> wrote:
> On Tue, 20 Oct 2009, Ow Mun Heng wrote:
>
>> Raid10 is supposed to be able to withstand up to 2 drive failures if the
>> failures are from different sides of the mirror. Right now, I'm not sure
>> which drive belongs to which. How do I determine that? Does it depend on the
output of /proc/mdstat, and in that order?
>
> You build a 4-disk RAID10 array on Linux by first building two RAID1 pairs,
> then striping both of the resulting /dev/mdX devices together via RAID0.
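(As a sketch of the nested layout Greg describes — device names here are examples, not taken from the original poster's system:)

```shell
# Build two RAID1 mirror pairs (device names are illustrative)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# Stripe the two mirrors together with RAID0
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
```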
Actually, later versions of the Linux md driver have a native RAID-10 level built in. I haven't used it, and I'm not sure how it would look in /proc/mdstat either.
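(For reference, a native md RAID-10 array is created in one step with --level=10, and its /proc/mdstat entry looks roughly like the following — sizes and device names are illustrative:)

```shell
# Native RAID-10 in a single array, no nesting (illustrative devices)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```

```
md0 : active raid10 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      976770048 blocks 64K chunks 2 near-copies [4/4] [UUUU]
```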
> You'll actually have 3 /dev/mdX devices around as a result. I suspect
> you're trying to execute mdadm operations on the outer RAID0, when what you
> actually should be doing is fixing the bottom-level RAID1 volumes.
> Unfortunately I'm not too optimistic about your case though, because if you
> had a repairable situation you technically shouldn't have lost the array in
> the first place--it should still be running, just in degraded mode on both
> underlying RAID1 halves.
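(To answer the original question of which drive belongs to which mirror, something along these lines should show the nesting — again, /dev/md2 and /dev/sda1 are example names; substitute the arrays and disks from your own /proc/mdstat:)

```shell
# List all arrays and their member devices
cat /proc/mdstat
# Show which underlying RAID1 devices make up the outer stripe
mdadm --detail /dev/md2
# For each physical partition, report which array it belongs to
mdadm --examine /dev/sda1
```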
Exactly. Sounds like both drives in a pair failed.