RAID6 issues

on 13.09.2011 08:14:51 by Andriano

Hello Linux-RAID mailing list,

I have an issue with my RAID6 array.
Here is a short description of the system:

opensuse 11.4
Linux 3.0.4-2-desktop #1 SMP PREEMPT Wed Aug 31 09:30:44 UTC 2011
(a432f18) x86_64 x86_64 x86_64 GNU/Linux
Gigabyte EP35C-DS3 motherboard with 8 SATA ports, plus a SuperMicro
AOC-SASLP-MV8 HBA based on the Marvell 6480, firmware updated to 3.1.0.21
running mdadm 3.2.2; a single array of ten 2TB disks, eight of them
connected to the HBA and two to motherboard ports

I had some trouble with one of the onboard-connected disks, so I tried
plugging it into different ports to rule out a faulty port.
After a reboot, other drives were suddenly kicked out of the array,
and re-assembling gives the weird errors below.

--- some output ---
[3:0:0:0] disk ATA ST2000DL003-9VT1 CC32 /dev/sdb
[5:0:0:0] disk ATA ST2000DL003-9VT1 CC32 /dev/sdc
[8:0:0:0] disk ATA ST32000542AS CC34 /dev/sdd
[8:0:1:0] disk ATA ST32000542AS CC34 /dev/sde
[8:0:2:0] disk ATA ST32000542AS CC34 /dev/sdf
[8:0:3:0] disk ATA ST32000542AS CC34 /dev/sdg
[8:0:4:0] disk ATA ST32000542AS CC34 /dev/sdh
[8:0:5:0] disk ATA ST2000DL003-9VT1 CC32 /dev/sdi
[8:0:6:0] disk ATA ST2000DL003-9VT1 CC32 /dev/sdj
[8:0:7:0] disk ATA ST2000DL003-9VT1 CC32 /dev/sdk

# more /etc/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid6 UUID=82ac7386:a854194d:81b795d1:76c9c9ff

# mdadm --assemble --force --scan /dev/md0
mdadm: failed to add /dev/sdc to /dev/md0: Invalid argument
mdadm: failed to add /dev/sdb to /dev/md0: Invalid argument
mdadm: failed to add /dev/sdh to /dev/md0: Invalid argument
mdadm: /dev/md0 assembled from 7 drives - not enough to start the array.

dmesg:
[ 8215.651860] md: sdc does not have a valid v1.2 superblock, not importing!
[ 8215.651865] md: md_import_device returned -22
[ 8215.652384] md: sdb does not have a valid v1.2 superblock, not importing!
[ 8215.652388] md: md_import_device returned -22
[ 8215.653177] md: sdh does not have a valid v1.2 superblock, not importing!
[ 8215.653182] md: md_import_device returned -22
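For what it's worth, the -22 that md_import_device returns is -EINVAL, i.e. the same "Invalid argument" that mdadm prints. A quick check of the errno mapping (assuming python3 is available):

```shell
# errno 22 maps to EINVAL / "Invalid argument" on Linux, so the dmesg
# lines and mdadm's error message are two views of the same failure.
python3 -c 'import errno, os; print(errno.errorcode[22], os.strerror(22))'
```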

mdadm -E /dev/sd[b-k] reports exactly the same Magic number and Array
UUID for every disk, and all checksums are correct.
The only difference: Avail Dev Size is 3907028896 for 9 of the
disks, but 3907028864 for sdc.

mdadm --assemble --force --update=summaries /dev/sd.. - didn't improve anything
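A compact way to compare the superblocks is to pull out just the size-related fields. This is only a sketch: the mdadm loop needs the real member disks (assumed here to be /dev/sdb through /dev/sdk), so only the arithmetic actually runs below.

```shell
# On the live system, something like this would show the mismatch directly:
#   for d in /dev/sd[b-k]; do mdadm -E "$d" | grep -E 'Dev Size|Data Offset'; done
#
# The Avail Dev Size gap reported above works out to 32 sectors, i.e. 16 KiB:
echo $((3907028896 - 3907028864))
```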


I would really appreciate it if someone could point me in the right direction.

thanks

Andrew
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: RAID6 issues

on 13.09.2011 08:25:11 by NeilBrown


On Tue, 13 Sep 2011 16:14:51 +1000 Andriano wrote:

> Hello Linux-RAID mailing list,
>
> (snip)
>
> mdadm -E /dev/sd[b-k] gives exactly the same Magic number and Array
> UUID for every disk, all checksums are correct,
> the only difference is - Avail Dev Size : 3907028896 is the same for
> 9 disks, and 3907028864 for sdc

Please provide that output so we can see it too - it might be helpful.

NeilBrown




Re: RAID6 issues

on 13.09.2011 08:33:36 by Andriano

>> (snip)
>>
>> mdadm -E /dev/sd[b-k] gives exactly the same Magic number and Array
>> UUID for every disk, all checksums are correct,
>> the only difference is - Avail Dev Size : 3907028896 is the same for
>> 9 disks, and 3907028864 for sdc
>
> Please provide that output so we can see it too - it might be helpful.
>
> NeilBrown


# mdadm --assemble --force --update summaries /dev/md0 /dev/sdc
mdadm: --update=summaries not understood for 1.x metadata



Re: RAID6 issues

on 13.09.2011 08:44:19 by NeilBrown


On Tue, 13 Sep 2011 16:33:36 +1000 Andriano wrote:

> (snip)
>
> # mdadm --assemble --force --update summaries /dev/md0 /dev/sdc
> mdadm: --update=summaries not understood for 1.x metadata
Sorry - I was too terse.

I meant that output of "mdadm -E ...."

NeilBrown



Re: RAID6 issues

on 13.09.2011 09:05:06 by Andriano

On Tue, Sep 13, 2011 at 4:44 PM, NeilBrown wrote:
> On Tue, 13 Sep 2011 16:33:36 +1000 Andriano wrote:
>
>> (snip)
>>
>> # mdadm --assemble --force --update summaries /dev/md0 /dev/sdc
>> mdadm: --update=summaries not understood for 1.x metadata
>
> Sorry - I was too terse.
>
> I meant that output of "mdadm -E ...."
>
> NeilBrown

/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
Name : hnas:0 (local to host hnas)
Creation Time : Wed Jan 19 21:17:33 2011
Raid Level : raid6
Raid Devices : 10

Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : active
Device UUID : 4b31edb8:531a4c14:50c954a2:8eda453b

Update Time : Mon Sep 12 22:36:35 2011
Checksum : 205f92e1 - correct
Events : 6446662

Layout : left-symmetric
Chunk Size : 64K

Device Role : Active device 6
Array State : AAAAAAAAAA ('A' == active, '.' == missing)
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
Name : hnas:0 (local to host hnas)
Creation Time : Wed Jan 19 21:17:33 2011
Raid Level : raid6
Raid Devices : 10

Avail Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
Data Offset : 304 sectors
Super Offset : 8 sectors
State : clean
Device UUID : afa2f348:88bd0376:29bcfe96:df32a522

Update Time : Tue Sep 13 11:50:18 2011
Checksum : ee1facae - correct
Events : 6446662

Layout : left-symmetric
Chunk Size : 64K

Device Role : Active device 5
Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sdd:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
Name : hnas:0 (local to host hnas)
Creation Time : Wed Jan 19 21:17:33 2011
Raid Level : raid6
Raid Devices : 10

Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : d1a7cfca:a4d7aef7:47b6d3c6:82d1da5b

Update Time : Tue Sep 13 11:50:18 2011
Checksum : 5ab164a8 - correct
Events : 6446662

Layout : left-symmetric
Chunk Size : 64K

Device Role : Active device 3
Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
Name : hnas:0 (local to host hnas)
Creation Time : Wed Jan 19 21:17:33 2011
Raid Level : raid6
Raid Devices : 10

Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : ba9497e9:7665c161:1e596d49:8a642880

Update Time : Tue Sep 13 11:50:18 2011
Checksum : 8a731bdf - correct
Events : 6446662

Layout : left-symmetric
Chunk Size : 64K

Device Role : Active device 1
Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sdf:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
Name : hnas:0 (local to host hnas)
Creation Time : Wed Jan 19 21:17:33 2011
Raid Level : raid6
Raid Devices : 10

Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 8d503057:423c455d:665af78a:093b99b8

Update Time : Tue Sep 13 11:50:18 2011
Checksum : 6d8a7fa6 - correct
Events : 6446662

Layout : left-symmetric
Chunk Size : 64K

Device Role : Active device 2
Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sdg:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
Name : hnas:0 (local to host hnas)
Creation Time : Wed Jan 19 21:17:33 2011
Raid Level : raid6
Raid Devices : 10

Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 7d0284af:74ceb0e9:31eab49e:d9fedff5

Update Time : Tue Sep 13 11:50:18 2011
Checksum : a34e1766 - correct
Events : 6446662

Layout : left-symmetric
Chunk Size : 64K

Device Role : Active device 4
Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sdh:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
Name : hnas:0 (local to host hnas)
Creation Time : Wed Jan 19 21:17:33 2011
Raid Level : raid6
Raid Devices : 10

Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : a97e691f:f81bb643:9cedde86:87f9bc69

Update Time : Tue Sep 13 11:50:18 2011
Checksum : c947df28 - correct
Events : 6446662

Layout : left-symmetric
Chunk Size : 64K

Device Role : Active device 0
Array State : AAAAAAAAAA ('A' == active, '.' == missing)
/dev/sdi:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
Name : hnas:0 (local to host hnas)
Creation Time : Wed Jan 19 21:17:33 2011
Raid Level : raid6
Raid Devices : 10

Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : c5970279:a68c84f0:a5803880:91f69e74

Update Time : Tue Sep 13 11:50:18 2011
Checksum : d3e2fa15 - correct
Events : 6446662

Layout : left-symmetric
Chunk Size : 64K

Device Role : Active device 8
Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sdj:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
Name : hnas:0 (local to host hnas)
Creation Time : Wed Jan 19 21:17:33 2011
Raid Level : raid6
Raid Devices : 10

Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : a62447a6:604c0917:1ab4d073:2ca99f8f

Update Time : Tue Sep 13 11:50:18 2011
Checksum : 36452bba - correct
Events : 6446662

Layout : left-symmetric
Chunk Size : 64K

Device Role : Active device 7
Array State : AAAAAA.AAA ('A' == active, '.' == missing)
/dev/sdk:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
Name : hnas:0 (local to host hnas)
Creation Time : Wed Jan 19 21:17:33 2011
Raid Level : raid6
Raid Devices : 10

Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : ac64f1b6:578eb873:9f28bbd4:8abc61b3

Update Time : Tue Sep 13 11:50:18 2011
Checksum : 284b3598 - correct
Events : 6446662

Layout : left-symmetric
Chunk Size : 64K

Device Role : Active device 9
Array State : AAAAAA.AAA ('A' == active, '.' == missing)

Re: RAID6 issues

on 13.09.2011 09:38:50 by NeilBrown


On Tue, 13 Sep 2011 17:05:06 +1000 Andriano wrote:

> On Tue, Sep 13, 2011 at 4:44 PM, NeilBrown wrote:
> >
> > (snip)
> >
> > I meant that output of "mdadm -E ...."
>
> /dev/sdb:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
> Name : hnas:0 (local to host hnas)
> Creation Time : Wed Jan 19 21:17:33 2011
> Raid Level : raid6
> Raid Devices : 10
>
> Avail Dev Size : 3907028896 (1863.02 GiB 2000.40 GB)
> Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
> Used Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
> Data Offset : 272 sectors
> Super Offset : 8 sectors
> State : active
> Device UUID : 4b31edb8:531a4c14:50c954a2:8eda453b
>
> Update Time : Mon Sep 12 22:36:35 2011
> Checksum : 205f92e1 - correct
> Events : 6446662
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Device Role : Active device 6
> Array State : AAAAAAAAAA ('A' == active, '.' == missing)
> /dev/sdc:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 82ac7386:a854194d:81b795d1:76c9c9ff
> Name : hnas:0 (local to host hnas)
> Creation Time : Wed Jan 19 21:17:33 2011
> Raid Level : raid6
> Raid Devices : 10
>
> Avail Dev Size : 3907028864 (1863.02 GiB 2000.40 GB)
> Array Size : 31256230912 (14904.13 GiB 16003.19 GB)
> Data Offset : 304 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : afa2f348:88bd0376:29bcfe96:df32a522
>
> Update Time : Tue Sep 13 11:50:18 2011
> Checksum : ee1facae - correct
> Events : 6446662
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Device Role : Active device 5
> Array State : AAAAAA.AAA ('A' == active, '.' == missing)
(snip)

Thanks.

The only explanation I can come up with is that the devices appear to be
smaller for some reason.
Can you run
blockdev --getsz /dev/sd?

and report the result?
They should all be 3907029168 (Data Offset + Avail Dev Size).
If any are smaller - that is the problem.

NeilBrown
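The arithmetic here is easy to verify against the superblocks quoted earlier in the thread (the numbers below are taken from that mdadm -E output):

```shell
# Expected total device size in 512-byte sectors = Data Offset + Avail Dev Size.
# Both layouts seen in this array give the same total:
echo $((272 + 3907028896))   # sdb, sdd..sdk: data offset 272
echo $((304 + 3907028864))   # sdc: offset 304, 32 sectors less available
```

Both sums come to 3907029168, so if blockdev --getsz reports anything smaller for a member, the recorded data area no longer fits on the device, which would explain the EINVAL on import.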



Re: RAID6 issues

am 13.09.2011 09:51:56 von Andriano

On Tue, Sep 13, 2011 at 5:38 PM, NeilBrown wrote:
> (snip)
>
> The only explanation I can come up with is that the devices appear to be
> smaller for some reason.
> Can you run
>   blockdev --getsz /dev/sd?
>
> and report the result?
> They should all be 3907029168 (Data Offset + Avail Dev Size).
> If any are smaller - that is the problem.
>
> NeilBrown

Apparently you're right
blockdev --getsz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
/dev/sdh /dev/sdi /dev/sdj /dev/sdk
3907027055
3907027055
3907029168
3907029168
3907029168
3907029168
3907027055
3907029168
3907029168
3907029168

sdb, sdc and sdh are smaller, and they are the problem disks
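
Picking the undersized members out of that list mechanically (a sketch;
the sizes and device order are taken from the blockdev invocation above):

```shell
# Flag members smaller than the 3907029168 sectors the superblock
# requires. Sizes are the blockdev --getsz results above, in the same
# order as the devices on the command line (sdb..sdk).
expected=3907029168
short=""
set -- b c d e f g h i j k
for sz in 3907027055 3907027055 3907029168 3907029168 3907029168 \
          3907029168 3907027055 3907029168 3907029168 3907029168; do
    if [ "$sz" -lt "$expected" ]; then
        echo "/dev/sd$1: short by $((expected - sz)) sectors"
        short="$short/dev/sd$1 "
    fi
    shift
done
```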

So what would be a solution to fix this issue?

thanks
Andrew

Re: RAID6 issues

am 13.09.2011 10:10:31 von NeilBrown


On Tue, 13 Sep 2011 17:51:56 +1000 Andriano wrote:

> (snip)
>
> Apparently you're right
> blockdev --getsz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
> /dev/sdh /dev/sdi /dev/sdj /dev/sdk
> 3907027055
> 3907027055
> 3907029168
> 3907029168
> 3907029168
> 3907029168
> 3907027055
> 3907029168
> 3907029168
> 3907029168
>
> sdb, sdc and sdh - are smaller and they are problem disks
>
> So what would be a solution to fix this issue?
>

I'm afraid I cannot really help there.
The disks must have been bigger before else they could never have been
members of the array.
Maybe some jumper was changed? Maybe a different controller hides some
sectors?
I really don't know the details of what can cause this.
Maybe try changing things until you see a pattern.
If you move devices between controllers, does the small size move with the
device, or does it stay with the controller? That sort of thing.

NeilBrown



Re: RAID6 issues

am 13.09.2011 10:12:56 von alexander.kuehn

Quoting Andriano:

> On Tue, Sep 13, 2011 at 5:38 PM, NeilBrown wrote:
>> (snip)
> Apparently you're right
> blockdev --getsz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
> /dev/sdh /dev/sdi /dev/sdj /dev/sdk
> 3907027055
> 3907027055
> 3907029168
> 3907029168
> 3907029168
> 3907029168
> 3907027055
> 3907029168
> 3907029168
> 3907029168
>
> sdb, sdc and sdh - are smaller and they are problem disks
>
> So what would be a solution to fix this issue?

The solution seems obvious:
Plug them (or at least one of them) back into the original ports so
that they regain their original size.
Then you can try and shrink your filesystem/logical volumes and then
the array, then check everything is working (do a raid check too).
Then you can move one of the disks to a good port, zero the metadata
on it and add it back to regain full redundancy. Once done, move the
next...

Re: RAID6 issues

am 13.09.2011 10:44:24 von Roman Mamedov


On Tue, 13 Sep 2011 17:51:56 +1000
Andriano wrote:

> Apparently you're right
> blockdev --getsz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
> /dev/sdh /dev/sdi /dev/sdj /dev/sdk
> 3907027055
> 3907027055
> 3907029168
> 3907029168
> 3907029168
> 3907029168
> 3907027055
> 3907029168
> 3907029168
> 3907029168
>
> sdb, sdc and sdh - are smaller and they are problem disks
>
> So what would be a solution to fix this issue?

You mentioned you use a Gigabyte EP35C-DS3 motherboard. Gigabyte BIOSes are known to cut off about 1 MByte or so from the end of HDDs (on the onboard controller, and maybe just the one on Port 0), setting an HPA area and storing a copy of the BIOS there. That's known as "(Virtual) Dual/Triple/Quad BIOS". Google for "gigabyte bios hpa" and you'll find a lot of reports about this problem. You can check if you can disable that "feature" in BIOS setup, but older boards did not have such an option.

To restore the native capacity of the drives you can use "hdparm -N" (see its man page), while the disks are on the non-onboard controller.

In the future, create your RAID from partitions, and leave 8-10 MB of space at the end of each disk for cases like these.
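
The sizes reported earlier in this thread line up with that explanation;
a quick back-of-envelope check (values from the blockdev output above):

```shell
# Gap between the native size and the clipped size of sdb/sdc/sdh,
# using the sector counts reported earlier in this thread.
native=3907029168
clipped=3907027055
diff=$((native - clipped))
echo "$diff sectors = $((diff * 512 / 1024)) KiB"   # 2113 sectors = 1056 KiB
```

i.e. just over 1 MByte, consistent with the BIOS-backup explanation.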

--
With respect,
Roman


Re: RAID6 issues

am 13.09.2011 10:57:14 von Andriano

On Tue, Sep 13, 2011 at 6:44 PM, Roman Mamedov wrote:
> On Tue, 13 Sep 2011 17:51:56 +1000
> Andriano wrote:
>
>> Apparently you're right
>> blockdev --getsz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
>> /dev/sdh /dev/sdi /dev/sdj /dev/sdk
>> 3907027055
>> 3907027055
>> 3907029168
>> 3907029168
>> 3907029168
>> 3907029168
>> 3907027055
>> 3907029168
>> 3907029168
>> 3907029168
>>
>> sdb, sdc and sdh - are smaller and they are problem disks
>>
>> So what would be a solution to fix this issue?
>
> You mentioned you use Gigabyte EP35C-DS3 motherboard. Gigabyte BIOSes are known to cut off about 1 MByte or so from the end of HDDs (on the onboard controller, and maybe just the one on Port 0), setting an HPA area and storing a copy of the BIOS there. That's known as "(Virtual) Dual/Triple/Quad BIOS". Google for "gigabyte bios hpa" and you'll find a lot of reports about this problem. You can check if you can disable that "feature" in BIOS setup, but older boards did not have such option.
>
> To restore the native capacity of the drives you can use "hdparm -N" (see its man page), while disks are on the non-onboard controller.
>
> In the future, create your RAID from partitions, and leave 8-10 MB of space in the end of each disk for cases like these.
>
> --
> With respect,
> Roman
>

Roman,

Looks like you have pointed to the source of the problem. The option
to back up the BIOS has been enabled.
Is "hdparm -N" going to affect the superblock or data integrity of these
disks? Or has that backup already done the damage?

thanks

Andrew

Re: RAID6 issues

am 13.09.2011 11:05:41 von Andriano

On Tue, Sep 13, 2011 at 6:57 PM, Andriano wrote:
> On Tue, Sep 13, 2011 at 6:44 PM, Roman Mamedov wrote:
>> (snip)
>
> Roman,
>
> Looks like you have pointed to the source of the problem. The option
> to backup BIOS has been enabled.
> Is "hdparm -N" going to affect superblock or data integrity of these
> disks? Or has that backup already done the damage?
>
> thanks
>
> Andrew
>


Connected one of the offenders to HBA port, and hdparm outputs this:

#hdparm -N /dev/sdh

/dev/sdh:
max sectors = 3907027055/14715056(18446744073321613488?), HPA
setting seems invalid (buggy kernel device driver?)

Re: RAID6 issues

am 13.09.2011 12:29:51 von Roman Mamedov


On Tue, 13 Sep 2011 19:05:41 +1000
Andriano wrote:

> Connected one of the offenders to HBA port, and hdparm outputs this:
>
> #hdparm -N /dev/sdh
>
> /dev/sdh:
> max sectors = 3907027055/14715056(18446744073321613488?), HPA
> setting seems invalid (buggy kernel device driver?)

You could just try "hdparm -N p3907029168" (capacity of the 'larger' disks), but that could fail if the device driver is indeed buggy.

Another possible course of action would be to try that on some other controller.
For example on your motherboard you have two violet ports, http://www.gigabyte.ru/products/upload/products/1470/100a.jpg
those are managed by the JMicron JMB363 controller, try plugging the disks which need HPA to be removed to those ports, AFAIR that JMicron controller works with "hdparm -N" just fine.
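
A sketch of what that recovery command would look like (the device name
and size are the ones from this thread; the 'p' prefix makes the restored
limit permanent, so double-check the target device before running it):

```shell
# Build the hdparm invocation that restores the native max-sectors
# value. Echoed rather than executed here - run it by hand only after
# verifying which disk $dev really is.
dev=/dev/sdh          # one of the clipped disks in this thread
native=3907029168     # size reported by the healthy disks
cmd="hdparm -N p${native} $dev"
echo "$cmd"           # afterwards, re-check with: blockdev --getsz $dev
```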

--
With respect,
Roman


Re: RAID6 issues

am 13.09.2011 12:44:52 von Andriano

Thanks everyone, looks like the problem is solved.

For the benefit of others who may experience the same issue, here is what I've done:

- upgraded the firmware on the ST32000542AS disks - from CC34 to CC35. It must
be done using onboard SATA in Native IDE (not RAID/AHCI) mode.
After reconnecting them back to the HBA, the size of one of the offenders fixed itself!

- ran the hdparm -N p3907029168 /dev/sdx command on the other two disks and it
worked (probably it works straight after reboot)
Now mdadm -D shows the array as clean, degraded with one disk kicked
out, which is another story :)

Now I need to resync the array and restore two LVs which haven't mounted :(

On Tue, Sep 13, 2011 at 8:29 PM, Roman Mamedov wrote:
> (snip)

Re: RAID6 issues

am 13.09.2011 15:45:35 von Andriano

Still trying to get the array back up.

Status: Clean, degraded with 9 out of 10 disks.
One disk - removed as non-fresh.

As a result, two of the LVs could not be mounted:

mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-lv1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-lv2,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

[ 3357.006833] JBD: no valid journal superblock found
[ 3357.006837] EXT4-fs (dm-1): error loading journal
[ 3357.022603] JBD: no valid journal superblock found
[ 3357.022606] EXT4-fs (dm-2): error loading journal



Apparently there is a problem with re-adding the non-fresh disk back to the array.

#mdadm -a -v /dev/md0 /dev/sdf
mdadm: /dev/sdf reports being an active member for /dev/md0, but a
--re-add fails.
mdadm: not performing --add as that would convert /dev/sdf in to a spare.
mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sdf" first.

Question: Is there a way to resync the array using that non-fresh
disk, as it may contain blocks needed by these LVs.
At this stage I don't really want to add this disk as a spare.

Any suggestions please?


thanks

On Tue, Sep 13, 2011 at 8:44 PM, Andriano wrote:
> (snip)

Re: RAID6 issues

am 27.09.2011 20:46:10 von Thomas Fjellstrom

On September 13, 2011, Andriano wrote:
> Hello Linux-RAID mailing list,
>
> I have an issue with my RAID6 array.
> Here goes a short description of the system:
>
> opensuse 11.4
> Linux 3.0.4-2-desktop #1 SMP PREEMPT Wed Aug 31 09:30:44 UTC 2011
> (a432f18) x86_64 x86_64 x86_64 GNU/Linux
> Gigabyte EP35C-DS3 motherboard with 8 SATA ports + SuperMicro
> AOC-SASLP-MV8 based on Marvel 6480, firmware updated to 3.1.0.21
> running mdadm 3.2.2, single array consists of 10 2T disks, 8 of them
> connected to the HBA, 2 - motherboard ports

Hi, this is slightly off topic, but I have an AOC-SASLP-MV8 as well, and I'd
suggest swapping it out for a different card. The linux mvsas driver has been
crap for years, and just recently it nearly killed my raid5 array, twice. One
time it needed to resync, the next it first kicked out one drive, then the rest
shortly after. I was not amused.

Stan Hoeppner suggested the LSI SAS1068E card, which looks to be very nice.
Only slightly more expensive than the SASLP and well supported under linux.

I'll hopefully be getting my hands on a LSI 9210-8i soon. Shortly thereafter
I'll be selling my SASLP.

[snip]
>
> Andrew


--
Thomas Fjellstrom
thomas@fjellstrom.ca

Re: RAID6 issues

on 27.09.2011 21:14:01 by Stan Hoeppner

On 9/27/2011 1:46 PM, Thomas Fjellstrom wrote:

> Stan Hoeppner suggested the LSI SAS1068E card, which looks to be very nice.
> Only slightly more expensive than the SASLP and well supported under Linux.
>
> I'll hopefully be getting my hands on an LSI 9210-8i soon. Shortly thereafter
> I'll be selling my SASLP.

The 9210-8i is a newer generation card using the SAS2008 chip. IOPS
potential is over double that of the 1068E cards, 320M vs 140M, and
you'll have support for drives larger than 2TB. It also supports SATA3
link speed whereas the 1068E chips only support SATA2. It has a PCIe x8
2.0 interface for 8GB/s b/w, whereas the 1068E has a PCIe x8 1.0
interface for only 4GB/s.

In short, it has quite a bit more capability than the 1068E based cards.
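As a sanity check (my own sketch, not part of the thread), the 8GB/s and 4GB/s
figures above work out if they count both link directions: PCIe 1.x signals at
2.5 GT/s per lane and PCIe 2.0 at 5 GT/s, both with 8b/10b encoding, so each
lane carries rate * 8/10 payload bits per second. The function name is mine.

```python
# Aggregate (both-direction) bandwidth of a PCIe x8 link, in GB/s.
# 8b/10b encoding means only 8 of every 10 transferred bits are payload.
def pcie_x8_gbytes_per_sec(gt_per_sec, bidirectional=True):
    per_lane = gt_per_sec * 8 / 10 / 8   # GB/s per lane, one direction
    total = per_lane * 8                 # x8 link
    return total * 2 if bidirectional else total

print(pcie_x8_gbytes_per_sec(2.5))  # PCIe 1.0 x8 (1068E): 4.0 GB/s aggregate
print(pcie_x8_gbytes_per_sec(5.0))  # PCIe 2.0 x8 (SAS2008): 8.0 GB/s aggregate
```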

--
Stan

Re: RAID6 issues

on 27.09.2011 23:04:02 by Thomas Fjellstrom

> Stan Hoeppner wrote:
> > On 9/27/2011 1:46 PM, Thomas Fjellstrom wrote:

>> Stan Hoeppner suggested the LSI SAS1068E card, which looks to be very nice.
>> Only slightly more expensive than the SASLP and well supported under Linux.
>>
>> I'll hopefully be getting my hands on an LSI 9210-8i soon. Shortly thereafter
>> I'll be selling my SASLP.
>
> The 9210-8i is a newer generation card using the SAS2008 chip. IOPS
> potential is over double that of the 1068E cards, 320M vs 140M, and
> you'll have support for drives larger than 2TB. It also supports SATA3
> link speed whereas the 1068E chips only support SATA2. It has a PCIe x8
> 2.0 interface for 8GB/s b/w, whereas the 1068E has a PCIe x8 1.0
> interface for only 4GB/s.
>
> In short, it has quite a bit more capability than the 1068E based cards.


Yeah, I was impressed by the claimed specs. I bet if I knew how much it sells
for, I'd be shocked. I did a little searching but didn't have much luck.

> --
> Stan

P.S. Sorry for the duplicate, Stan; I couldn't figure out how to disable HTML
on my Android mail client, and linux-raid bounced it.

--
Thomas Fjellstrom
thomas@fjellstrom.ca

Re: RAID6 issues

on 28.09.2011 04:47:11 by Stan Hoeppner

On 9/27/2011 4:04 PM, Thomas Fjellstrom wrote:
>> Stan Hoeppner wrote:

>> The 9210-8i is a newer generation card using the SAS2008 chip. IOPS
>> potential is over double that of the 1068E cards, 320M vs 140M, and
>> you'll have support for drives larger than 2TB. It also supports SATA3
>> link speed whereas the 1068E chips only support SATA2. It has a PCIe x8
>> 2.0 interface for 8GB/s b/w, whereas the 1068E has a PCIe x8 1.0
>> interface for only 4GB/s.
>>
>> In short, it has quite a bit more capability than the 1068E based cards.
>
>
> Yeah, I was impressed by the claimed specs. I bet if I knew how much it sells
> for, I'd be shocked. I did a little searching but didn't have much luck.

That's because the 9210* is...
"*Only available to OEMs through LSI direct sales."
IBM sells the 9210-8i as the ServeRAID M1015, available at Newegg for
$320 (way overpriced, as with all things Big Blue). IBM adds optional
RAID5/50 fakeraid to the BIOS with an additional license key payment.
The retail version of the 9210-8i is the LSI 9240-8i, available at
Newegg for $265.

However, the specs on the 9211-8i are the same, and the connector layout
is better--front vs top. I recommend it over the 9240-8i. And it's
a little cheaper to boot, $240 vs $265, at Newegg:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112

9211-8i full specs:
http://www.lsi.com/products/storagecomponents/Pages/LSISAS9211-8i.aspx

Unless you need to use drives larger than 2TB, the $155 Intel 1068E card
is a far better buy at almost $100 less. If you need to connect more
than 8 drives, get a 9211-4i and one of these Intel expanders for 20
drive ports:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816117207

This combo will run you ~$450, or $22.50/port for 20 ports. The 9211-8i
runs $30/port for 8 ports.
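The per-port arithmetic above can be checked directly (my own sketch, using the
2011 Newegg prices quoted in this thread; the ~$450 combo total is the thread's
own estimate):

```python
# Per-port cost of the two options discussed above.
combo_cost, combo_ports = 450.0, 20   # 9211-4i + Intel expander, ~20 ports
hba_cost, hba_ports = 240.0, 8        # 9211-8i alone, 8 ports

print(combo_cost / combo_ports)  # 22.5 -> $22.50/port
print(hba_cost / hba_ports)      # 30.0 -> $30/port
```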

> P.S. Sorry for the duplicate, Stan; I couldn't figure out how to disable HTML
> on my Android mail client, and linux-raid bounced it.

No need to apologize. Sh*t happens.

--
Stan

Re: RAID6 issues

on 28.09.2011 08:03:06 by Mikael Abrahamsson

On Tue, 27 Sep 2011, Thomas Fjellstrom wrote:

> Yeah, I was impressed by the claimed specs. I bet if I knew how much it
> sells for, I'd be shocked. I did a little searching but didn't have much
> luck.

I've been able to procure several 1068E-based cards for around USD 50 on
eBay. The IBM BR10i is one example. You might have to live without a
proper mounting bracket, but it's a proper 1068E card as far as I can tell
(it has worked well in my testing).

--
Mikael Abrahamsson email: swmike@swm.pp.se

Re: RAID6 issues

on 28.09.2011 08:52:10 by Thomas Fjellstrom

On September 27, 2011, Stan Hoeppner wrote:
> On 9/27/2011 4:04 PM, Thomas Fjellstrom wrote:
> >> Stan Hoeppner wrote:
> >>
> >> The 9210-8i is a newer generation card using the SAS2008 chip. IOPS
> >> potential is over double that of the 1068E cards, 320M vs 140M, and
> >> you'll have support for drives larger than 2TB. It also supports SATA3
> >> link speed whereas the 1068E chips only support SATA2. It has a PCIe x8
> >> 2.0 interface for 8GB/s b/w, whereas the 1068E has a PCIe x8 1.0
> >> interface for only 4GB/s.
> >>
> >> In short, it has quite a bit more capability than the 1068E based
> >> cards.
> >
> > Yeah, I was impressed by the claimed specs. I bet if I knew how much it
> > sells for, I'd be shocked. I did a little searching but didn't have much
> > luck.
>
> That's because the 9210* is...
> "*Only available to OEMs through LSI direct sales."
> IBM sells the 9210-8i as the ServeRAID M1015, available at Newegg for
> $320 (way overpriced, as with all things Big Blue). IBM adds optional
> RAID5/50 fakeraid to the BIOS with an additional license key payment.
> The retail version of the 9210-8i is the LSI 9240-8i, available at
> Newegg for $265.
>
> However, the specs on the 9211-8i are the same, and the connector layout
> is better--front vs top. I recommend it over the 9240-8i. And it's
> a little cheaper to boot, $240 vs $265, at Newegg:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112
>
> 9211-8i full specs:
> http://www.lsi.com/products/storagecomponents/Pages/LSISAS9211-8i.aspx
>
> Unless you need to use drives larger than 2TB, the $155 Intel 1068E card
> is a far better buy at almost $100 less. If you need to connect more
> than 8 drives, get a 9211-4i and one of these Intel expanders for 20
> drive ports:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816117207
>
> This combo will run you ~$450, or $22.50/port for 20 ports. The 9211-8i
> runs $30/port for 8 ports.

Very interesting and useful information :) I'm actually trading my time for
the card; a very nice fellow on the local LUG list offered after (I assume) he
saw my earlier thread here.

I've been thinking about getting an expander; it won't happen for a while, though.

> > P.S. Sorry for the duplicate, Stan; I couldn't figure out how to disable
> > HTML on my Android mail client, and linux-raid bounced it.
>
> No need to apologize. Sh*t happens.

Thanks for the information so far; it's been quite helpful.

--
Thomas Fjellstrom
thomas@fjellstrom.ca

Re: RAID6 issues

on 28.09.2011 08:53:20 by Thomas Fjellstrom

On September 28, 2011, Mikael Abrahamsson wrote:
> On Tue, 27 Sep 2011, Thomas Fjellstrom wrote:
> > Yeah, I was impressed by the claimed specs. I bet if I knew how much it
> > sells for, I'd be shocked. I did a little searching but didn't have much
> > luck.
>
> I've been able to procure several 1068E-based cards for around USD 50 on
> eBay. The IBM BR10i is one example. You might have to live without a
> proper mounting bracket, but it's a proper 1068E card as far as I can tell
> (it has worked well in my testing).

Hm, I'll have to keep an eye open for when I decide to rebuild my backup
array. Thanks for the heads-up :)

--
Thomas Fjellstrom
thomas@fjellstrom.ca