RAID5 -> RAID6 conversion, please help
on 11.05.2011 01:15:11 by Peter Kovari
Dear all,
I tried to convert my existing 5-disk RAID5 array to a 6-disk RAID6 array.
This was my existing array:
---------------------------------------------------------------------------
/dev/md0:
          Version : 0.90
       Raid Level : raid5
       Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
    Used Dev Size : 1465137152 (1397.26 GiB 1500.30 GB)
     Raid Devices : 5
    Total Devices : 5
      Persistence : Superblock is persistent
            State : clean
   Active Devices : 5
  Working Devices : 5
           Layout : left-symmetric
       Chunk Size : 512K
           Events : 0.156

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       81        1      active sync   /dev/sdf1
       2       8       33        2      active sync   /dev/sdc1
       3       8       97        3      active sync   /dev/sdg1
       4       8       65        4      active sync   /dev/sde1
---------------------------------------------------------------------------
I did the conversion according to "howtos", so:
$ mdadm -add /dev/md0 /dev/sdd1
then:
$ mdadm --grow /dev/md0 --level=6 --raid-devices=6 --backup-file=/mnt/mdadm-raid5-to-raid6.backup
Instead of starting the reshape process, mdadm responded with this:
mdadm: /dev/md0: changed level to 6 (or something like that; I don't remember
the exact words, but it was about changing the level).
mdadm: /dev/md0: Cannot get array details from sysfs
And the array became this:
---------------------------------------------------------------------------
/dev/md0:
       Raid Level : raid6
       Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
    Used Dev Size : 1465137152 (1397.26 GiB 1500.30 GB)
     Raid Devices : 6
    Total Devices : 6
      Persistence : Superblock is persistent
            State : clean, degraded
   Active Devices : 5
  Working Devices : 6
   Failed Devices : 0
    Spare Devices : 1
           Events : 0.170

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       81        1      active sync   /dev/sdf1
       2       8       33        2      active sync   /dev/sdc1
       3       8       97        3      active sync   /dev/sdg1
       4       8       65        4      active sync   /dev/sde1
       5       0        0        5      removed

       6       8       49        -      spare   /dev/sdd1
---------------------------------------------------------------------------
At this point I realized that /dev/sdd had previously been a member of another
RAID array in another machine, and although I re-partitioned the disk, I
didn't remove the old superblock. Maybe this was the reason for the mdadm
error. Since the state of /dev/sdd1 was spare, I removed it:
$ mdadm -remove /dev/md0 /dev/sdd1
then cleared the remaining superblock:
$ mdadm --zero-superblock /dev/sdd1
then added it back to the array:
$ mdadm --add /dev/md0 /dev/sdd1
and started the grow process again:
$ mdadm --grow /dev/md0 --level=6 --raid-devices=6 --backup-file=/mnt/mdadm-raid5-to-raid6.backup
mdadm: /dev/md0: no change requested
mdadm stated no change; however, it started to rebuild the array. It's
currently rebuilding:
---------------------------------------------------------------------------
/dev/md0:
          Version : 0.90
       Raid Level : raid6
       Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
    Used Dev Size : 1465137152 (1397.26 GiB 1500.30 GB)
     Raid Devices : 6
    Total Devices : 6
      Persistence : Superblock is persistent
            State : clean, degraded, recovering
   Active Devices : 5
  Working Devices : 6
   Failed Devices : 0
    Spare Devices : 1
           Layout : left-symmetric-6
       Chunk Size : 512K
   Rebuild Status : 2% complete
           Events : 0.186

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       81        1      active sync   /dev/sdf1
       2       8       33        2      active sync   /dev/sdc1
       3       8       97        3      active sync   /dev/sdg1
       4       8       65        4      active sync   /dev/sde1
       6       8       49        5      spare rebuilding   /dev/sdd1
---------------------------------------------------------------------------
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdd1[6] sde1[4] sdc1[2] sdf1[1] sdg1[3] sdb1[0]
      5860548608 blocks level 6, 512k chunk, algorithm 18 [6/5] [UUUUU_]
      [>....................]  recovery =  2.3% (34438272/1465137152) finish=1074.5min speed=22190K/sec

unused devices:
---------------------------------------------------------------------------
mdadm didn't create the backup file, and the process seems too fast to me
for a RAID5->RAID6 conversion.
Please help me to understand what's happening now.
Cheers,
Peter
Re: RAID5 -> RAID6 conversion, please help
on 11.05.2011 01:31:55 by NeilBrown
On Wed, 11 May 2011 01:15:11 +0200 "Peter Kovari" wrote:
> [...]
> mdadm didn't create the backup file, and the process seems too fast to me
> for a RAID5->RAID6 conversion.
> Please help me to understand what's happening now.
You have a RAID6 array in a non-standard config where the Q block (the
second parity block) is always on the last device rather than rotated around
the various devices.
The array is simply recovering that 6th drive to the spare.
When it finishes you will have a perfectly functional RAID6 array with full
redundancy.  It might perform slightly differently to a standard layout -
I've never performed any measurements to see how differently.
If you want to (after the recovery completes) you could convert to a regular
RAID6 with

   mdadm -G /dev/md0 --layout=normalise --backup=/some/file/on/a/different/device

but you probably don't have to.
The old metadata on sdd will not have been a problem.
What version of mdadm did you use to try to start the reshape?
NeilBrown
Re: RAID5 -> RAID6 conversion, please help
on 11.05.2011 01:39:27 by Steven Haigh
On 11/05/2011 9:31 AM, NeilBrown wrote:
> When it finishes you will have a perfectly functional RAID6 array with full
> redundancy. It might perform slightly differently to a standard layout -
> I've never performed any measurements to see how differently.
>
> If you want to (after the recovery completes) you could convert to a regular
> RAID6 with
> mdadm -G /dev/md0 --layout=normalise --backup=/some/file/on/a/different/device
>
> but you probably don't have to.
>
This makes me wonder. How can one tell if the layout is 'normal' or with
Q blocks on a single device?
I recently changed my array from RAID5->6. Mine created a backup file
and took just under 40 hours for 4 x 1TB devices. I assume that this
means that the data was reorganised to the standard RAID6 style? The
conversion was done at about 4-6MB/sec.
Is there any effect on doing a --layout=normalise if the above happened?
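For what it's worth, reshape throughput is often capped by md's global speed
limits. A quick check, and a temporary bump if needed (assuming the standard
/proc/sys/dev/raid tunables), would look something like this:

$ cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
$ sudo sh -c 'echo 50000 > /proc/sys/dev/raid/speed_limit_min'
  # values are in KB/sec; raising the floor keeps normal I/O from throttling the reshape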
--
Steven Haigh
Email: netwiz@crc.id.au
Web: http://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299
RE: RAID5 -> RAID6 conversion, please help
on 11.05.2011 02:08:00 by Peter Kovari
> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-owner@vger.kernel.org] On Behalf Of NeilBrown
> Sent: Wednesday, May 11, 2011 1:32 AM
> To: Peter Kovari
> Cc: linux-raid@vger.kernel.org
> Subject: Re: RAID5 -> RAID6 conversion, please help
> You have a RAID6 array in a non-standard config where the Q block (the
> second parity block) is always on the last device rather than rotated around
> the various devices.
> The array is simply recovering that 6th drive to the spare.
> When it finishes you will have a perfectly functional RAID6 array with full
> redundancy.  It might perform slightly differently to a standard layout -
> I've never performed any measurements to see how differently.
> If you want to (after the recovery completes) you could convert to a regular
> RAID6 with
>    mdadm -G /dev/md0 --layout=normalise --backup=/some/file/on/a/different/device
> but you probably don't have to.
Thank you Neil, this explains everything.
I suppose the layout difference mostly affects - if it affects anything - write
performance. Since this is a media server with mostly read operations, I will
probably leave it as it is.
> The old meta on sdd will not have been a problem.
> What version of mdadm did you use to try to start the reshape?
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 10.04.2 LTS
Release: 10.04
Codename: lucid
$ mdadm --version
mdadm - v3.1.4 - 31st August 2010
$ uname -a
Linux FileStation 2.6.34-020634-generic #020634 SMP Mon May 17 19:27:49 UTC
2010 x86_64 GNU/Linux
Cheers,
Peter
Re: RAID5 -> RAID6 conversion, please help
on 11.05.2011 02:21:16 by NeilBrown
On Wed, 11 May 2011 09:39:27 +1000 Steven Haigh wrote:
> On 11/05/2011 9:31 AM, NeilBrown wrote:
> > When it finishes you will have a perfectly functional RAID6 array with full
> > redundancy. It might perform slightly differently to a standard layout -
> > I've never performed any measurements to see how differently.
> >
> > If you want to (after the recovery completes) you could convert to a regular
> > RAID6 with
> > mdadm -G /dev/md0 --layout=normalise --backup=/some/file/on/a/different/device
> >
> > but you probably don't have to.
> >
>
> This makes me wonder. How can one tell if the layout is 'normal' or with
> Q blocks on a single device?
>
> > I recently changed my array from RAID5->6. Mine created a backup file
> > and took just under 40 hours for 4 x 1TB devices. I assume that this
> > means that the data was reorganised to the standard RAID6 style? The
> > conversion was done at about 4-6MB/sec.
Probably.
What is the 'layout' reported by "mdadm -D"?
If it ends -6, then it is a RAID5 layout with the Q block all on the last
disk.
If not, then it is already normalised.
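For example, a quick check might look like this (assuming the array is
/dev/md0; the sample output shows the "-6" case):

$ mdadm -D /dev/md0 | grep Layout
           Layout : left-symmetric-6   # trailing "-6" = RAID5-style data layout, Q parked on the last disk
$ cat /sys/block/md0/md/layout         # the kernel's numeric code for the same layout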
>
> Is there any effect on doing a --layout=normalise if the above happened?
>
Probably not.
NeilBrown
Re: RAID5 -> RAID6 conversion, please help
on 11.05.2011 02:38:11 by Dylan Distasio
Hi Neil-
Just out of curiosity, how does mdadm decide which layout to use on a
reshape from RAID5->6?  I converted two of my RAID5s on different
boxes running the same OS a while ago, and was not aware of the
different possibilities. When I check now, one of them was converted
with the Q block all on the last disk, and the other appears
normalized.  I'm relatively confident I ran exactly the same command
on both to reshape them within a short time of one another.
Here are the current details of the two arrays:
dylan@terrordome:~$ sudo mdadm -D /dev/md0
/dev/md0:
          Version : 0.90
    Creation Time : Tue Mar  3 23:41:24 2009
       Raid Level : raid6
       Array Size : 5860559616 (5589.07 GiB 6001.21 GB)
    Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Raid Devices : 8
    Total Devices : 8
  Preferred Minor : 0
      Persistence : Superblock is persistent
    Intent Bitmap : Internal
      Update Time : Tue May 10 20:06:42 2011
            State : active
   Active Devices : 8
  Working Devices : 8
   Failed Devices : 0
    Spare Devices : 0
           Layout : left-symmetric-6
       Chunk Size : 64K
             UUID : 4891e7c1:5d7ec244:a9bd8edb:d35467d0 (local to host terrordome)
           Events : 0.743956

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       2       8       97        2      active sync   /dev/sdg1
       3       8      113        3      active sync   /dev/sdh1
       4       8       17        4      active sync   /dev/sdb1
       5       8       65        5      active sync   /dev/sde1
       6       8      241        6      active sync   /dev/sdp1
       7      65       17        7      active sync   /dev/sdr1

dylan@terrordome:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 10.04.1 LTS
Release:        10.04
Codename:       lucid
dylan@rapture:~$ sudo mdadm -D /dev/md0
/dev/md0:
          Version : 0.90
    Creation Time : Sat Jun  7 02:54:05 2008
       Raid Level : raid6
       Array Size : 2194342080 (2092.69 GiB 2247.01 GB)
    Used Dev Size : 731447360 (697.56 GiB 749.00 GB)
     Raid Devices : 5
    Total Devices : 5
  Preferred Minor : 0
      Persistence : Superblock is persistent
      Update Time : Tue May 10 20:19:13 2011
            State : clean
   Active Devices : 5
  Working Devices : 5
   Failed Devices : 0
    Spare Devices : 0
           Layout : left-symmetric
       Chunk Size : 64K
             UUID : 83b4a7df:1d05f5fd:e368bf24:bd0fce41
           Events : 0.723556

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       2       8        2        2      active sync   /dev/sda2
       3       8       66        3      active sync   /dev/sde2
       4       8       82        4      active sync   /dev/sdf2

dylan@rapture:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 10.04.1 LTS
Release:        10.04
Codename:       lucid
Re: RAID5 -> RAID6 conversion, please help
on 11.05.2011 02:47:30 by NeilBrown
On Tue, 10 May 2011 20:38:11 -0400 Dylan Distasio wrote:
> Hi Neil-
>
> Just out of curiosity, how does mdadm decide which layout to use on a
> reshape from RAID5->6?  I converted two of my RAID5s on different
> boxes running the same OS a while ago, and was not aware of the
> different possibilities. When I check now, one of them was converted
> with the Q block all on the last disk, and the other appears
> normalized.  I'm relatively confident I ran exactly the same command
> on both to reshape them within a short time of one another.
mdadm first converts the RAID5 to RAID6 in an instant atomic operation which
results in the "-6" layout.  It then starts a restriping process which
converts the layout.

If you end up with a -6 layout then something went wrong starting the
restriping process.
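A rough way to tell which phase an array is in (assuming the usual
/proc/mdstat and md sysfs interface) is something like:

$ cat /sys/block/md0/md/sync_action   # "reshape" while restriping, "recover" while rebuilding a spare, "idle" when done
$ grep -A 2 '^md0' /proc/mdstat       # a restripe shows "reshape = ...%", a plain rebuild shows "recovery = ...%"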
Maybe you used different versions of mdadm?  There have probably been bugs in
some versions.
NeilBrown
Re: RAID5 -> RAID6 conversion, please help
on 11.05.2011 03:04:29 by Dylan Distasio
It's possible I did use different versions but I thought I had
upgraded both of them right before the reshapes. Sorry if this is an
elementary question, but does writing the 2nd parity block always to
the last drive instead of rotating it increase the odds of a total
loss of the array since the one specific drive always has the 2nd
parity block?
If so, do you think normalizing would be worth the risk of something
going wrong with that operation? I'm just trying to get a feel for
how much of a difference this makes.
> mdadm first converts the RAID5 to RAID6 in an instant atomic operation which
> results in the "-6" layout.  It then starts a restriping process which
> converts the layout.
>
> If you end up with a -6 layout then something went wrong starting the
> restriping process.
>
> Maybe you used different versions of mdadm?  There have probably been bugs in
> some versions.
>
> NeilBrown
Re: RAID5 -> RAID6 conversion, please help
on 11.05.2011 05:29:35 by NeilBrown
On Tue, 10 May 2011 21:04:29 -0400 Dylan Distasio wrote:
> It's possible I did use different versions but I thought I had
> upgraded both of them right before the reshapes. Sorry if this is an
> elementary question, but does writing the 2nd parity block always to
> the last drive instead of rotating it increase the odds of a total
> loss of the array since the one specific drive always has the 2nd
> parity block?
No.  The only possible impact is a performance impact, and even that would be
hard to quantify.

The reason that parity is rotated is to avoid a 'hot disk'.  Every update has
to write the parity block, and if they are all on one disk then every write
will generate a write to that one disk.
Because of the way md implements RAID6, every write involves either a read or
a write to every device, so there is no real saving in rotating parity.

I think (but am open to being corrected) that rotating parity is only
important for RAID6 if the code implements 'subtraction' as well as
'addition' for the Q syndrome (which md doesn't) and if you have at least 5
drives, and you probably wouldn't notice until you get to 7 or more drives.

... so it might make sense to make mdadm default to converting to the -6
layout...

You can request it with "--layout=preserve".
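For example, a minimal sketch of a RAID5->RAID6 grow that keeps the
Q-on-last-disk ("-6") layout, so no restripe (and hence no backup file) should
be needed; the device name here is purely illustrative:

$ mdadm --add /dev/md0 /dev/sdX1
$ mdadm --grow /dev/md0 --level=6 --raid-devices=6 --layout=preserve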
>
> If so, do you think normalizing would be worth the risk of something
> going wrong with that operation? I'm just trying to get a feel for
> how much of a difference this makes.
Not worth the risk.
NeilBrown