Possible failure & recovery

on 10.08.2009 20:08:00 by dmiller

I'm not 100% certain about this...but maybe.

I had set up a small box as a remote backup for our company. I THOUGHT I
had set it up as a Raid-10 - but I can't swear to it now. I just needed
to recover a file from that backup - only to find that we had an error
instead.

Checking mdadm.conf, I find -
ARRAY /dev/.static/dev/md0 level=raid10 num-devices=4
devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd
UUID=7ec24ccc:973f5065:a79315d0:449291b3 auto=part
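
Since mdadm.conf can drift out of sync with what the kernel is actually
running, the level is probably best confirmed against the live array -
something along these lines (device names as in the config above):

  # Show level, state and member devices of the running array
  mdadm --detail /dev/md0

  # Summary of all active arrays and any rebuild in progress
  cat /proc/mdstat

  # Inspect the on-disk superblock of one member
  mdadm --examine /dev/sda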

Now, I do know that one of the drives (sdd) had failed previously, and
the array had been operating in degraded mode for some time. It appears
that a second drive has since failed: I received XFS errors, and only
two drives showed up under /proc/mdstat (sdb had been removed as well
as sdd).

xfs_check reported errors. Whether or not it was a good idea, I tried
adding sdb back to the array. It worked and started rebuilding. Then I
noticed that the array was reporting as "raid6". I don't know when it
BECAME raid6 - whether I had always set it up that way, or whether the
raid-10 somehow degraded and turned into raid-6. If it actually did,
that might make for some kind of migration/expansion path for a raid-10
array that needs to grow.
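
For reference, putting sdb back in was something along these lines
(roughly - I'm not claiming this is the exact invocation):

  # Try to re-add the previously-removed member; if the kernel refuses,
  # a plain --add re-introduces it as a spare and triggers a full rebuild
  mdadm /dev/md0 --re-add /dev/sdb

  # Watch the resync/rebuild progress
  cat /proc/mdstat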

My xfs_repair -L /dev/md0 process is currently running...I'm holding my
breath to see how much I get back...
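
(For anyone trying the same thing: the cautious order is a dry run
first, with -L very much a last resort since it throws away the log:)

  # Dry run - report problems without modifying the filesystem
  xfs_repair -n /dev/md0

  # Last resort - zero the (possibly corrupt) log, then repair
  xfs_repair -L /dev/md0
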
--
Daniel
