Failed RAID 6 array advice

On 02.03.2011 06:05:33, jahammonds prost wrote:

I've just had a 3rd drive fail on one of my RAID 6 arrays, and I'm looking for
some advice on how to get it back enough that I can recover the data, and then
replace the other failed drives.


mdadm -V
mdadm - v3.0.3 - 22nd October 2009


Not the most up-to-date release, but it seems to be the latest one available on
FC12.



The /etc/mdadm.conf file is

ARRAY /dev/md0 uuid=1470c671:4236b155:67287625:899db153


Which explains why I didn't get emailed about the drive failures. This isn't my
standard file, and I don't know how it was changed, but that's another issue for
another day.
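(For reference, I'd expect a conf file that actually gets mdadm --monitor to
send mail to carry a MAILADDR line as well, something like:

MAILADDR root
ARRAY /dev/md0 uuid=1470c671:4236b155:67287625:899db153

where "root" is only an example address.)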



mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Jun  5 10:38:11 2010
     Raid Level : raid6
  Used Dev Size : 488383488 (465.76 GiB 500.10 GB)
   Raid Devices : 15
  Total Devices : 12
    Persistence : Superblock is persistent

    Update Time : Tue Mar  1 22:17:41 2011
          State : active, degraded, Not Started
 Active Devices : 12
Working Devices : 12
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : file00bert.woodlea.org.uk:0  (local to host file00bert.woodlea.org.uk)
           UUID : 1470c671:4236b155:67287625:899db153
         Events : 254890

    Number   Major   Minor   RaidDevice State
       0       8      113        0      active sync   /dev/sdh1
       1       8       17        1      active sync   /dev/sdb1
       2       8      177        2      active sync   /dev/sdl1
       3       0        0        3      removed
       4       8       33        4      active sync   /dev/sdc1
       5       8      193        5      active sync   /dev/sdm1
       6       0        0        6      removed
       7       8       49        7      active sync   /dev/sdd1
       8       8      209        8      active sync   /dev/sdn1
       9       8      161        9      active sync   /dev/sdk1
      10       0        0       10      removed
      11       8      225       11      active sync   /dev/sdo1
      12       8       81       12      active sync   /dev/sdf1
      13       8      241       13      active sync   /dev/sdp1
      14       8        1       14      active sync   /dev/sda1



The output from the failed drives is as follows.


mdadm --examine /dev/sde1
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 1470c671:4236b155:67287625:899db153
           Name : file00bert.woodlea.org.uk:0  (local to host file00bert.woodlea.org.uk)
  Creation Time : Sat Jun  5 10:38:11 2010
     Raid Level : raid6
   Raid Devices : 15

 Avail Dev Size : 976767730 (465.76 GiB 500.11 GB)
     Array Size : 12697970688 (6054.86 GiB 6501.36 GB)
  Used Dev Size : 976766976 (465.76 GiB 500.10 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 3e284f2e:d939fb97:0b74eb88:326e879c

Internal Bitmap : 2 sectors from superblock
    Update Time : Tue Mar  1 21:53:31 2011
       Checksum : 768f0f34 - correct
         Events : 254591

     Chunk Size : 512K

    Device Role : Active device 10
    Array State : AAA.AA.AAAAAAAA ('A' == active, '.' == missing)


The above is the drive that failed tonight, and the one I would like to re-add
back into the array. There have been no writes to the filesystem on the array in
the last couple of days (other than what ext4 would do on its own).


mdadm --examine /dev/sdi1
/dev/sdi1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 1470c671:4236b155:67287625:899db153
           Name : file00bert.woodlea.org.uk:0  (local to host file00bert.woodlea.org.uk)
  Creation Time : Sat Jun  5 10:38:11 2010
     Raid Level : raid6
   Raid Devices : 15

 Avail Dev Size : 976767730 (465.76 GiB 500.11 GB)
     Array Size : 12697970688 (6054.86 GiB 6501.36 GB)
  Used Dev Size : 976766976 (465.76 GiB 500.10 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 8e668e39:06d8281b:b79aa3ab:a1d55fb5

Internal Bitmap : 2 sectors from superblock
    Update Time : Thu Feb 10 18:20:54 2011
       Checksum : 4078396b - correct
         Events : 254075

     Chunk Size : 512K

    Device Role : Active device 3
    Array State : AAAAAA.AAAAAAAA ('A' == active, '.' == missing)


mdadm --examine /dev/sdj1
/dev/sdj1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 1470c671:4236b155:67287625:899db153
           Name : file00bert.woodlea.org.uk:0  (local to host file00bert.woodlea.org.uk)
  Creation Time : Sat Jun  5 10:38:11 2010
     Raid Level : raid6
   Raid Devices : 15

 Avail Dev Size : 976767730 (465.76 GiB 500.11 GB)
     Array Size : 12697970688 (6054.86 GiB 6501.36 GB)
  Used Dev Size : 976766976 (465.76 GiB 500.10 GB)
    Data Offset : 272 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 37d422cc:8436960a:c3c4d11c:81a8e4fa

Internal Bitmap : 2 sectors from superblock
    Update Time : Thu Oct 21 23:45:06 2010
       Checksum : 78950bb5 - correct
         Events : 21435

     Chunk Size : 512K

    Device Role : Active device 6
    Array State : AAAAAAAAAAAAAAA ('A' == active, '.' == missing)


Looks like sdj1 failed waaay back in Oct last year (sigh). As I said, I am not
too bothered about adding these last 2 drives back into the array, since they
failed so long ago. I have a couple of spare drives sitting here, and I will
replace these 2 drives with them (once I have completed a badblocks run on
them). Looking at the output of dmesg, there are no other errors showing for the
3 drives, other than them being kicked out of the array for being non-fresh.

I guess I have a couple of questions.

What's the correct process for adding the failed /dev/sde1 back into the array
so I can start it? I don't want to rush into this and make things worse.

What's the correct process for replacing the 2 other drives?
I am presuming that I need to --fail, then --remove, then --add the drives (one
at a time?), but I want to make sure.
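Something like this is what I have in mind, with /dev/sdq1 just a placeholder
for whatever partition the new drive ends up as:

mdadm /dev/md0 --fail /dev/sdi1
mdadm /dev/md0 --remove /dev/sdi1
mdadm /dev/md0 --add /dev/sdq1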


Thanks for your help.


Graham.



Re: Failed RAID 6 array advice

On 02.03.2011 06:26:32, Mikael Abrahamsson wrote:

On Tue, 1 Mar 2011, jahammonds prost wrote:

> What's the correct process for adding the failed /dev/sde1 back into the
> array so I can start it. I don't want to rush into this and make things
> worse.

There are a lot of discussions about this in the archives, but basically I
recommend the following:

Make sure you're running the latest mdadm, right now it's 3.1.4. Compile
it yourself if you have to. After that you stop the array and use
--assemble --force to get the array up and running again with the drives
you know are good (make sure you don't use the drives that were offlined a
long time ago).
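As a rough sketch only, using the device names from your --detail output
(double-check them first, since names can move around after a reboot), that
would be something like:

mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdh1 \
    /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1

i.e. the 12 drives still marked active plus sde1, leaving out sdi1 and sdj1.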

> What's the correct process for replacing the 2 other drives?
> I am presuming that I need to --fail, then --remove, then --add the drives (one
> at a time?), but I want to make sure.

Yes, when you have a working degraded array you just add them; a re-sync should
happen, and everything should be OK if the resync succeeds.
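For example (the replacement device name below is just a placeholder):

mdadm /dev/md0 --add /dev/sdq1
cat /proc/mdstat     # watch the rebuild progress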

--
Mikael Abrahamsson email: swmike@swm.pp.se

Re: Failed RAID 6 array advice

On 02.03.2011 06:26:57, NeilBrown wrote:

On Tue, 1 Mar 2011 21:05:33 -0800 (PST) jahammonds prost wrote:

> What's the correct process for adding the failed /dev/sde1 back into the array
> so I can start it? I don't want to rush into this and make things worse.

If you think that the drives really are working and that it was a cabling
problem, then stop the array (if it isn't stopped already) and assemble with
--force:

mdadm --assemble --force /dev/md0 /dev....list of devices

Then find the devices that it chose not to include and add them individually:

mdadm /dev/md0 --add /dev/something

However, if any device has a bad block that cannot be read, then this won't
work. In that case you need to get a new device, partition it to have a
partition EXACTLY the same size, use dd_rescue to copy all the good data from
the bad drive to the new drive, remove the bad drive from the system, and use
the "--assemble --force" command with the new drive, not the old drive.
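Roughly, taking sde as the example source and assuming the new drive turns up
as /dev/sdq (a placeholder name; also check the dd_rescue argument order
against the man page of the version you have):

sfdisk -d /dev/sde | sfdisk /dev/sdq    # one way to clone the partition layout exactly
dd_rescue /dev/sde1 /dev/sdq1           # copy whatever is still readable
mdadm --assemble --force /dev/md0 ...list of good devices... /dev/sdq1

with /dev/sde physically removed from the system before the assemble.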


> What's the correct process for replacing the 2 other drives?
> I am presuming that I need to --fail, then --remove, then --add the drives (one
> at a time?), but I want to make sure.

They are already failed and removed, so there is no point in trying to do that
again.

Good luck.

NeilBrown

