Iain wrote:
> As bytepile has it, failure of 1 disk in 0+1 leaves you with just RAID 0
> so one more failure on the other pair and your data is gone. On the
> other hand, failure of 1 disk in raid 10 leaves you with a working raid
> 1 that can sustain a second failure.
What they're saying is that in the case of (AsB) m (CsD), if A fails,
they no longer count B as part of the array, so it's no longer one of
the drives that can fail. Sorta like the "if a tree falls and no one
hears it, did it fall" scenario.
I personally disagree with that theory. B is still part of the array.
Pop in a new drive and the array is ready to start resyncing (CsD) -->
(AsB). You still have a 1/3 chance of surviving another drive failure,
as long as B is the one that dies.
Although now that I think about it, RAID 10 is more resilient because
the odds of survival after 1 failure are 2/3. In the case of (AmB) s
(CmD), if A fails, you can survive either C or D failing.
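Those 1/3 vs 2/3 odds are easy to check by brute force. Here's a quick
Python sketch (the drive names A-D and the two helper functions are just
my own labels for the layouts above, not anything from an actual RAID
implementation):

```python
def raid01_alive(failed):
    # RAID 0+1: two stripes (A,B) and (C,D), mirrored together.
    # The array survives if at least one stripe is fully intact.
    stripes = [{"A", "B"}, {"C", "D"}]
    return any(not (s & failed) for s in stripes)

def raid10_alive(failed):
    # RAID 10: two mirrors (A,B) and (C,D), striped together.
    # The array survives if every mirror still has a working drive.
    mirrors = [{"A", "B"}, {"C", "D"}]
    return all(m - failed for m in mirrors)

def survivable_second_failures(alive):
    # A has already failed; see which second failures the array survives.
    return [d for d in "BCD" if alive({"A", d})]

print("RAID 0+1 survives losing:", survivable_second_failures(raid01_alive))
print("RAID 10  survives losing:", survivable_second_failures(raid10_alive))
```

With A already gone, 0+1 only survives a second failure of B (1 out of
3), while 10 survives losing C or D (2 out of 3), which matches the
argument above.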