raid6 issues
on 16.06.2011 22:28:17 by Chad Walker
I have 15 drives in a raid6 plus a spare. I returned home after being
gone for 12 days and one of the drives was marked as faulty. The load
on the machine was crazy, and mdadm stopped responding. I should've done
an strace, sorry. Likewise cat'ing /proc/mdstat was blocking. I
rebooted and mdadm started recovering, but onto the faulty drive. I
checked in on /proc/mdstat periodically over the 35-hour recovery.
When it was down to the last bit, /proc/mdstat and mdadm stopped
responding again. I gave it 28 hours, and then when I still couldn't
get any insight into it I rebooted again. Now /proc/mdstat says it's
inactive. And I don't appear to be able to assemble it. I issued
--examine on each of the 16 drives and they all agreed with each other
except for the faulty drive. I popped the faulty drive out and
rebooted again, still no luck assembling.
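For reference, I pulled the superblock info off each member with roughly the
following (device names as reported in the --examine output below: sdb1
through sdp1 plus the sdq1 spare):

# dump the md superblock from every member partition
for d in /dev/sd[b-q]1; do mdadm --examine "$d"; done

# quick comparison of just the event counters across members
mdadm --examine /dev/sd[b-q]1 | grep -E '^/dev/|Events'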
This is what my /proc/mdstat looks like:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md1 : inactive sdd1[12](S) sdm1[6](S) sdf1[0](S) sdh1[2](S) sdi1[7](S)
sdb1[14](S) sdo1[4](S) sdg1[1](S) sdl1[8](S) sdk1[9](S) sdc1[13](S)
sdn1[3](S) sdj1[10](S) sdp1[15](S) sde1[11](S)
29302715520 blocks
unused devices: <none>
This is what the --examine for /dev/sd[b-o]1 and /dev/sdq1 look like:
/dev/sdb1:
Magic : a92b4efc
Version : 0.90.00
UUID : 78e3f473:48bbfc34:0e051622:5c30970b
Creation Time : Wed Mar 30 14:48:46 2011
Raid Level : raid6
Used Dev Size : 1953514368 (1863.02 GiB 2000.40 GB)
Array Size : 25395686784 (24219.21 GiB 26005.18 GB)
Raid Devices : 15
Total Devices : 16
Preferred Minor : 1
Update Time : Wed Jun 15 07:45:12 2011
State : active
Active Devices : 14
Working Devices : 15
Failed Devices : 1
Spare Devices : 1
Checksum : e4ff038f - correct
Events : 38452
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 14 8 17 14 active sync /dev/sdb1
0 0 8 81 0 active sync /dev/sdf1
1 1 8 97 1 active sync /dev/sdg1
2 2 8 113 2 active sync /dev/sdh1
3 3 8 209 3 active sync /dev/sdn1
4 4 8 225 4 active sync /dev/sdo1
5 5 0 0 5 faulty removed
6 6 8 193 6 active sync /dev/sdm1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 177 8 active sync /dev/sdl1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 145 10 active sync /dev/sdj1
11 11 8 65 11 active sync /dev/sde1
12 12 8 49 12 active sync /dev/sdd1
13 13 8 33 13 active sync /dev/sdc1
14 14 8 17 14 active sync /dev/sdb1
15 15 65 1 15 spare /dev/sdq1
And this is what --examine for /dev/sdp1 looked like:
/dev/sdp1:
Magic : a92b4efc
Version : 0.90.00
UUID : 78e3f473:48bbfc34:0e051622:5c30970b
Creation Time : Wed Mar 30 14:48:46 2011
Raid Level : raid6
Used Dev Size : 1953514368 (1863.02 GiB 2000.40 GB)
Array Size : 25395686784 (24219.21 GiB 26005.18 GB)
Raid Devices : 15
Total Devices : 16
Preferred Minor : 1
Update Time : Tue Jun 14 07:35:56 2011
State : active
Active Devices : 15
Working Devices : 16
Failed Devices : 0
Spare Devices : 1
Checksum : e4fdb07b - correct
Events : 38433
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 5 8 241 5 active sync /dev/sdp1
0 0 8 81 0 active sync /dev/sdf1
1 1 8 97 1 active sync /dev/sdg1
2 2 8 113 2 active sync /dev/sdh1
3 3 8 209 3 active sync /dev/sdn1
4 4 8 225 4 active sync /dev/sdo1
5 5 8 241 5 active sync /dev/sdp1
6 6 8 193 6 active sync /dev/sdm1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 177 8 active sync /dev/sdl1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 145 10 active sync /dev/sdj1
11 11 8 65 11 active sync /dev/sde1
12 12 8 49 12 active sync /dev/sdd1
13 13 8 33 13 active sync /dev/sdc1
14 14 8 17 14 active sync /dev/sdb1
15 15 65 1 15 spare /dev/sdq1
I was scared to run mdadm --build --level=6 --raid-devices=15 /dev/md1
/dev/sdf1 /dev/sdg1....
system information:
Ubuntu 11.04, kernel 2.6.38, x86_64, mdadm version 3.1.4, 3ware 9650SE
Any advice? There's about 1TB of data on these drives that would cause
my wife to kill me (and about 9TB of data that would just irritate her
to lose).
-chad
Re: raid6 issues
on 18.06.2011 21:48:18 by Chad Walker
Anyone? Please help. I've been searching for answers for the last five
days. Does the (S) after the drives in /proc/mdstat mean that it
thinks they are all spares? I've seen some mention of an
'--assume-clean' option but I can't find any documentation on it. I'm
running 3.1.4 (what apt-get got), but I see on Neil Brown's site that
in the release for 3.1.5 there are 'Fixes for "--assemble --force" in
various unusual cases' and 'Allow "--assemble --update=no-bitmap" so
an array with a corrupt bitmap can still be assembled', would either
of these be applicable in my case? I will build 3.1.5 and see if it
helps.
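If --force does turn out to be the right tool, I assume the invocation would
be roughly this (member names taken from /proc/mdstat; I haven't run it yet
for fear of making things worse):

# stop the half-assembled, inactive array so the member devices are released,
# then retry assembly, forcing past the small event-count mismatch
mdadm --stop /dev/md1
mdadm --assemble --force --verbose /dev/md1 /dev/sd[b-p]1

Is that about right, or is there a safer sequence?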
-chad
On Thu, Jun 16, 2011 at 1:28 PM, Chad Walker wrote:
> [...]
Re: raid6 issues
on 18.06.2011 21:55:12 by Chad Walker
Also, here is the output from "mdadm --assemble --scan --verbose":
mdadm: looking for devices for /dev/md1
mdadm: cannot open device /dev/sdp1: Device or resource busy
mdadm: /dev/sdp1 has wrong uuid.
mdadm: cannot open device /dev/sdo1: Device or resource busy
mdadm: /dev/sdo1 has wrong uuid.
mdadm: cannot open device /dev/sdn1: Device or resource busy
mdadm: /dev/sdn1 has wrong uuid.
mdadm: cannot open device /dev/sdm1: Device or resource busy
mdadm: /dev/sdm1 has wrong uuid.
mdadm: cannot open device /dev/sdl1: Device or resource busy
mdadm: /dev/sdl1 has wrong uuid.
mdadm: cannot open device /dev/sdk1: Device or resource busy
mdadm: /dev/sdk1 has wrong uuid.
mdadm: cannot open device /dev/sdj1: Device or resource busy
mdadm: /dev/sdj1 has wrong uuid.
mdadm: cannot open device /dev/sdi1: Device or resource busy
mdadm: /dev/sdi1 has wrong uuid.
mdadm: cannot open device /dev/sdh1: Device or resource busy
mdadm: /dev/sdh1 has wrong uuid.
mdadm: cannot open device /dev/sdg1: Device or resource busy
mdadm: /dev/sdg1 has wrong uuid.
mdadm: cannot open device /dev/sdf1: Device or resource busy
mdadm: /dev/sdf1 has wrong uuid.
mdadm: cannot open device /dev/sde1: Device or resource busy
mdadm: /dev/sde1 has wrong uuid.
mdadm: cannot open device /dev/sdd1: Device or resource busy
mdadm: /dev/sdd1 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdb1: Device or resource busy
mdadm: /dev/sdb1 has wrong uuid.
-chad
On Sat, Jun 18, 2011 at 12:48 PM, Chad Walker wrote:
> [...]
Re: raid6 issues
on 19.06.2011 01:01:04 by NeilBrown
On Sat, 18 Jun 2011 12:55:12 -0700 Chad Walker wrote:
> Also, here is the output from "mdadm --assemble --scan --verbose":
> [...]
>
Try
mdadm -Ss
then the above --assemble with --force added.
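That is, something along the lines of:

mdadm -Ss                                   # stop all md arrays, releasing the members held by the inactive md1
mdadm --assemble --scan --verbose --force   # retry assembly, forcing in members with slightly stale event counts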
NeilBrown
Re: raid6 issues
on 19.06.2011 01:14:16 by Chad Walker
Oh thank you! I guess I figured since the array was inactive, it was
stopped... rebuilding onto the spare now.
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md1 : active raid6 sdp1[15] sdf1[0] sdb1[14] sdc1[13] sdd1[12]
sde1[11] sdj1[10] sdk1[9] sdl1[8] sdi1[7] sdm1[6] sdo1[4] sdn1[3]
sdh1[2] sdg1[1]
25395686784 blocks level 6, 64k chunk, algorithm 2 [15/14]
[UUUUU_UUUUUUUUU]
[>....................] recovery = 0.1% (2962644/1953514368)
finish=1351.7min speed=24049K/sec
unused devices: <none>
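I'll keep an eye on the rebuild with something like:

watch -n 30 cat /proc/mdstat    # recovery progress and estimated finish time
mdadm --detail /dev/md1         # overall array state and the member being rebuilt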
-chad
On Sat, Jun 18, 2011 at 6:01 PM, NeilBrown wrote:
> On Sat, 18 Jun 2011 12:55:12 -0700 Chad Walker wrote:
>
>> Also, here is the output from "mdadm --assemble --scan --verbose":
>> [...]
>
> Try
>
> mdadm -Ss
> then the above --assemble with --force added.
>
> NeilBrown
>
Re: RAID6 issues
on 13.09.2011 16:24:39 by NeilBrown
(stupid android mail client insists on top-posting - sorry)
No.  You cannot (easily) get that device to be an active member of
the array again, and it almost certainly wouldn't help anyway.

It would only help if the data you want is on the device, and the
parity blocks that are being used to recreate it are corrupt.
I think it very unlikely that they are corrupt but the data isn't.

The problem seems to be that the journal superblock is bad.  That seems
to suggest that much of the rest of the filesystem is OK.
I would suggest you "fsck -n -f" the device and see how much it wants
to 'fix'.  If it is just a few things, I would just let fsck fix it up for you.

If there are pages and pages of errors - then you have bigger problems.
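For the two logical volumes in the quoted message below, that check would be
something like:

fsck -n -f /dev/mapper/vg0-lv1   # -n: report problems but change nothing; -f: check even if marked clean
fsck -n -f /dev/mapper/vg0-lv2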
NeilBrown


Andriano <chief000@gmail.com> wrote:

>Still trying to get the array back up.
>
>Status: Clean, degraded with 9 out of 10 disks.
>One disk - removed as non-fresh.
>
>as a result two of LVs could not be mounted:
>
>mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-lv1,
>      missing codepage or helper program, or other error
>      In some cases useful info is found in syslog - try
>      dmesg | tail  or so
>
>mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-lv2,
>      missing codepage or helper program, or other error
>      In some cases useful info is found in syslog - try
>      dmesg | tail  or so
>
>[ 3357.006833] JBD: no valid journal superblock found
>[ 3357.006837] EXT4-fs (dm-1): error loading journal
>[ 3357.022603] JBD: no valid journal superblock found
>[ 3357.022606] EXT4-fs (dm-2): error loading journal
>
>
>Apparently there is a problem with re-adding non-fresh disk back to the array.
>
>#mdadm -a -v /dev/md0 /dev/sdf
>mdadm: /dev/sdf reports being an active member for /dev/md0, but a
>--re-add fails.
>mdadm: not performing --add as that would convert /dev/sdf in to a spare.
>mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sdf" first.
>
>Question: Is there a way to resync the array using that non-fresh
>disk, as it may contain blocks needed by these LVs.
>At this stage I don't really want to add this disk as a spare.
>
>Any suggestions please?
>
>thanks
>
>On Tue, Sep 13, 2011 at 8:44 PM, Andriano <chief000@gmail.com> wrote:
>> Thanks everyone, looks like the problem is solved.
>>
>> For benefit of others who may experience same issue, here is what I've done:
>>
>> - upgraded firmware on ST32000542AS disks - from CC34 to CC35. It must
>> be done using onboard SATA in Native IDE (not RAID/AHCI) mode.
>> After reconnecting them back to HBA, size of one of the offenders fixed itself!
>>
>> - ran hdparm -N p3907029168 /dev/sdx command on other two disks and it
>> worked (probably it works straight after reboot)
>> Now mdadm -D shows the array as clean, degraded with one disk kicked
>> out, which is another story :)
>>
>> now need to resync array and restore two LVs which hasn't mounted :(
>>
>> On Tue, Sep 13, 2011 at 8:29 PM, Roman Mamedov <rm@romanrm.ru> wrote:
>>> On Tue, 13 Sep 2011 19:05:41 +1000
>>> Andriano <chief000@gmail.com> wrote:
>>>
>>>> Connected one of the offenders to HBA port, and hdparm outputs this:
>>>>
>>>> #hdparm -N /dev/sdh
>>>>
>>>> /dev/sdh:
>>>>  max sectors   = 3907027055/14715056(18446744073321613488?), HPA
>>>> setting seems invalid (buggy kernel device driver?)
>>>
>>> You could just try "hdparm -N p3907029168" (capacity of the 'larger' disks), but that could fail if the device driver is indeed buggy.
>>>
>>> Another possible course of action would be to try that on some other controller.
>>> For example on your motherboard you have two violet ports, http://www.gigabyte.ru/products/upload/products/1470/100a.jpg
>>> those are managed by the JMicron JMB363 controller, try plugging the disks which need HPA to be removed to those ports, AFAIR that JMicron controller works with "hdparm -N" just fine.
>>>
>>> --
>>> With respect,
>>> Roman
>>>