Growing 6 HDD RAID5 to 7 HDD RAID5

on 12.04.2011 18:56:05 by mathias.buren

Hi mailing list,

First, thanks for this great software!

I have a RAID5 setup on 6x 2TB HDDs:

/dev/md0:
Version : 1.2
Creation Time : Tue Oct 19 08:58:41 2010
Raid Level : raid5
Array Size : 9751756800 (9300.00 GiB 9985.80 GB)
Used Dev Size : 1950351360 (1860.00 GiB 1997.16 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Apr 12 17:50:25 2011
State : active
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Name : ion:0 (local to host ion)
UUID : e6595c64:b3ae90b3:f01133ac:3f402d20
Events : 3035979

Number Major Minor RaidDevice State
0 8 81 0 active sync /dev/sdf1
1 8 17 1 active sync /dev/sdb1
4 8 49 2 active sync /dev/sdd1
3 8 33 3 active sync /dev/sdc1
5 8 65 4 active sync /dev/sde1
6 8 97 5 active sync /dev/sdg1

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdf1[0] sdg1[6] sde1[5] sdc1[3] sdd1[4] sdb1[1]
9751756800 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
bitmap: 1/15 pages [4KB], 65536KB chunk

unused devices: <none>


I'm approaching over 6.5TB of data, and with an array this large I'd
like to migrate to RAID6 for a bit more safety. I'm just checking if I
understand this correctly, this is how to do it:

* Add a HDD to the array as a hot spare:
mdadm --manage /dev/md0 --add /dev/sdh1

* Migrate the array to RAID6:
mdadm --grow /dev/md0 --raid-devices 7 --level 6
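
* (And presumably I can then just keep an eye on the spare rebuild / reshape
with something like this; the 60-second interval is arbitrary:)
watch -n 60 cat /proc/mdstat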

Cheers,
// Mathias

Re: Growing 6 HDD RAID5 to 7 HDD RAID5

on 12.04.2011 19:14:01 by Roman Mamedov


On Tue, 12 Apr 2011 17:56:05 +0100
Mathias Burén wrote:

> I'm approaching over 6.5TB of data, and with an array this large I'd
> like to migrate to RAID6 for a bit more safety.

That's a great decision (and I suppose you made a typo in the subject).
RAID5 is downright dangerous at that disk count, and with disks of that size.

> I'm just checking if I
> understand this correctly, this is how to do it:
>
> * Add a HDD to the array as a hot spare:
> mdadm --manage /dev/md0 --add /dev/sdh1
>
> * Migrate the array to RAID6:
> mdadm --grow /dev/md0 --raid-devices 7 --level 6

Looks correct to me...

The first command can be just "mdadm --add /dev/md0 /dev/sdh1".

If you'd rather avoid a reshape at this point, you can add
"--layout=preserve" to the second line. That way you will have just a rebuild
of the new drive, instead of a full reshape.

You will also need to "--grow --bitmap=none" first (you can re-add the bitmap
later).
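
Putting it all together, the whole sequence would be roughly this (a sketch
only, using the device names from your mail; re-add the bitmap once the
conversion has finished):

mdadm --grow /dev/md0 --bitmap=none        # drop the internal bitmap first
mdadm --add /dev/md0 /dev/sdh1             # add the new disk as a spare
mdadm --grow /dev/md0 --level 6 --raid-devices 7 --layout=preserve
mdadm --grow /dev/md0 --bitmap=internal    # afterwards, re-add the bitmap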

--
With respect,
Roman


Re: Growing 6 HDD RAID5 to 7 HDD RAID5

on 12.04.2011 19:21:13 by mathias.buren

On 12 April 2011 18:14, Roman Mamedov wrote:
> [...]
>
> If you'd rather avoid a reshape at this point, you can add
> "--layout=preserve" to the second line. That way you will have just a rebuild
> of the new drive, instead of a full reshape.
>
> You will also need to "--grow --bitmap=none" first (you can re-add the bitmap
> later).
>
> --
> With respect,
> Roman

Hi,

Yep, I meant RAID6, stupid subject line. If I use --layout=preserve,
what impact will that have? Will the array have redundancy during the
rebuild of the new drive?
If I preserve the layout, what is the final result of the array
compared to not preserving it?

Cheers,
// Mathias

Re: Growing 6 HDD RAID5 to 7 HDD RAID5

on 12.04.2011 20:22:38 by Roman Mamedov


On Tue, 12 Apr 2011 18:21:13 +0100
Mathias Burén wrote:

> If I use --layout=preserve, what impact will that have?
> If I preserve the layout, what is the final result of the array
> compared to not preserving it?

Neil wrote about this on his blog:
"It is a very similar process that can now be used to convert a RAID5 to a
RAID6. We first change the RAID5 to RAID6 with a non-standard layout that h=
as
the parity blocks distributed as normal, but the Q blocks all on the last
device (a new device). So this is RAID6 using the RAID6 driver, but with a
non-RAID6 layout. So we "simply" change the layout and the job is done."
http://neil.brown.name/blog/20090817000931

Admittedly it is not completely clear to me what are the long-term downsides of
this layout. As I understand it does fully provide the RAID6-level redundancy.
Perhaps just the performance will suffer a bit? Maybe someone can explain this
more.

If anything, I think it is safe to use this layout for a while, e.g. in case
you don't want to rebuild 'right now'. You can always change the layout to the
traditional one later, by issuing "--grow --layout=normalise". Or perhaps if
you plan to add another disk soon, you can normalise it on that occasion, and
still gain the benefit of only one full reshape.
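
For reference, that later normalisation should itself be a single --grow call,
something like the following (the backup file path is only an example; since
the number of data disks doesn't change, mdadm may insist on one):

mdadm --grow /dev/md0 --layout=normalise --backup-file=/root/md0-normalise.bin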

> Will the array have redundancy during the rebuild of the new drive?

If you choose --layout=preserve, your array immediately becomes a RAID6 with
one rebuilding drive. So this is the kind of redundancy you will have during
that rebuild - tolerance of up to one more (among the "old" drives) failure,
in other words, identical to what you currently have with RAID5.
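
One way to confirm that state once the conversion has started (read-only
checks, nothing more):

mdadm --detail /dev/md0 | grep -E 'Raid Level|Layout|State'
cat /proc/mdstat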

--
With respect,
Roman


Re: Growing 6 HDD RAID5 to 7 HDD RAID5

on 12.04.2011 23:15:54 by NeilBrown


On Wed, 13 Apr 2011 00:22:38 +0600 Roman Mamedov wrote:

> On Tue, 12 Apr 2011 18:21:13 +0100
> Mathias Burén wrote:
>
> > If I use --layout=preserve, what impact will that have?
> > If I preserve the layout, what is the final result of the array
> > compared to not preserving it?
>
> Neil wrote about this on his blog:
> "It is a very similar process that can now be used to convert a RAID5 to a
> RAID6. We first change the RAID5 to RAID6 with a non-standard layout that has
> the parity blocks distributed as normal, but the Q blocks all on the last
> device (a new device). So this is RAID6 using the RAID6 driver, but with a
> non-RAID6 layout. So we "simply" change the layout and the job is done."
> http://neil.brown.name/blog/20090817000931
>
> Admittedly it is not completely clear to me what are the long-term downsides of
> this layout. As I understand it does fully provide the RAID6-level redundancy.
> Perhaps just the performance will suffer a bit? Maybe someone can explain this
> more.

If you specify --layout=preserve, then all the 'Q' blocks will be on one disk.
As every write needs to update a Q block, every write will write to that disk.

With our current RAID6 implementation that probably isn't a big cost - for
any write, we need to either read from or write to each disk anyway.

Anyway: the only possible problem would be a performance problem, and I
really don't know what performance impact there is - if any.

>
> If anything, I think it is safe to use this layout for a while, e.g. in case
> you don't want to rebuild 'right now'. You can always change the layout to the
> traditional one later, by issuing "--grow --layout=normalise". Or perhaps if
> you plan to add another disk soon, you can normalise it on that occasion, and
> still gain the benefit of only one full reshape.

Note that doing a normalise by itself later will be much slower than not
doing a preserve now.
Doing the normalise later when growing the device again would be just as
fast as not doing the preserve now.

NeilBrown


>
> > Will the array have redundancy during the rebuild of the new drive?
>
> If you choose --layout=preserve, your array immediately becomes a RAID6 with
> one rebuilding drive. So this is the kind of redundancy you will have during
> that rebuild - tolerance of up to one more (among the "old" drives) failure,
> in other words, identical to what you currently have with RAID5.
>



Re: Growing 6 HDD RAID5 to 7 HDD RAID5

on 12.04.2011 23:53:44 by mathias.buren

On 12 April 2011 22:15, NeilBrown wrote:
> [...]
>
> Anyway: the only possible problem would be a performance problem, and I
> really don't know what performance impact there is - if any.
>
> [...]
>
> NeilBrown

Right, so using --preserve seems like a sane and good option. Thanks
for the info, I'll let you know what happens; the HDD should arrive in
the next few days.

// Mathias

Re: Growing 6 HDD RAID5 to 7 HDD RAID6

on 13.04.2011 13:44:48 by John Robinson

(Subject line amended by me :-)

On 12/04/2011 17:56, Mathias Burén wrote:
[...]
> I'm approaching over 6.5TB of data, and with an array this large I'd
> like to migrate to RAID6 for a bit more safety. I'm just checking if I
> understand this correctly, this is how to do it:
>
> * Add a HDD to the array as a hot spare:
> mdadm --manage /dev/md0 --add /dev/sdh1
>
> * Migrate the array to RAID6:
> mdadm --grow /dev/md0 --raid-devices 7 --level 6

You will need a --backup-file to do this, on another device. Since you
are keeping the same number of data discs before and after the reshape,
the backup file will be needed throughout the reshape, so the reshape
will take perhaps twice as long as a grow or shrink. If your backup-file
is on the same disc(s) as md0 is (e.g. on another partition or array
made up of other partitions on the same disc(s)), it will take way
longer (gazillions of seeks), so I'd recommend a separate drive or if
you have one a small SSD for the backup file.
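
For example, something along these lines (only a sketch; the mount point for
the separate drive or SSD is hypothetical):

mdadm --grow /dev/md0 --level 6 --raid-devices 7 \
      --backup-file=/mnt/spare-ssd/md0-grow-backup.bin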

Doing the above with --layout=preserve will save you doing the reshape
so you won't need the backup file, but there will still be an initial
sync of the Q parity, and the layout will be RAID4-alike with all the Q
parity on one drive so it's possible its performance will be RAID4-alike
too i.e. small writes never faster than the parity drive. Having said
that, streamed writes can still potentially go as fast as your 5 data
discs, as per your RAID5. In practice, I'd be surprised if it was faster
than about twice the speed of a single drive (the same as your current
RAID5), and as Neil Brown notes in his reply, RAID6 doesn't currently
have the read-modify-write optimisation for small writes so small write
performance is liable to be even poorer than your RAID5 in either layout.

You will never lose any redundancy in either of the above, but you won't
gain RAID6 double redundancy until the reshape (or Q-drive sync with
--layout=preserve) has completed - just the same as if you were
replacing a dead drive in an existing RAID6.

Hope the above helps!

Cheers,

John.


Re: Growing 6 HDD RAID5 to 7 HDD RAID6

on 22.04.2011 11:39:07 by mathias.buren

On 13 April 2011 12:44, John Robinson wrote:
> (Subject line amended by me :-)
> [...]
> You will need a --backup-file to do this, on another device. [...]
>
> Hope the above helps!
>
> Cheers,
>
> John.

Hi,

Thanks for the replies. Alright, here we go:

$ mdadm --grow /dev/md0 --bitmap=none
$ mdadm --manage /dev/md0 --add /dev/sde1
$ mdadm --grow /dev/md0 --verbose --layout=preserve --raid-devices 7
--level 6 --backup-file=/root/md-raid5-to-raid6-backupfile.bin
mdadm: level of /dev/md0 changed to raid6

$ cat /proc/mdstat

Fri Apr 22 10:37:44 2011

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sde1[7] sdg1[0] sdh1[6] sdf1[5] sdc1[3] sdd1[4] sdb1[1]
      9751756800 blocks super 1.2 level 6, 64k chunk, algorithm 18 [7/6] [UUUUUU_]
      [>....................]  reshape =  0.0% (224768/1950351360) finish=8358.5min speed=3888K/sec

unused devices: <none>

And in dmesg:


--- level:6 rd:7 wd:6
disk 0, o:1, dev:sdg1
disk 1, o:1, dev:sdb1
disk 2, o:1, dev:sdd1
disk 3, o:1, dev:sdc1
disk 4, o:1, dev:sdf1
disk 5, o:1, dev:sdh1
RAID conf printout:
--- level:6 rd:7 wd:6
disk 0, o:1, dev:sdg1
disk 1, o:1, dev:sdb1
disk 2, o:1, dev:sdd1
disk 3, o:1, dev:sdc1
disk 4, o:1, dev:sdf1
disk 5, o:1, dev:sdh1
disk 6, o:1, dev:sde1
md: reshape of RAID array md0
md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than
200000 KB/sec) for reshape.
md: using 128k window, over a total of 1950351360 blocks.

IIRC there's a way to speed up the migration, by using a larger cache
value somewhere, no?
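
(I assume the knob I'm half-remembering is the md stripe cache, i.e. something
like this, with the value only an example:)

echo 16384 > /sys/block/md0/md/stripe_cache_size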

Thanks,
Mathias

Re: Growing 6 HDD RAID5 to 7 HDD RAID6

on 22.04.2011 12:05:44 by mathias.buren

On 22 April 2011 10:39, Mathias Burén wrote:
> On 13 April 2011 12:44, John Robinson wrote:
> [...]
>
> IIRC there's a way to speed up the migration, by using a larger cache
> value somewhere, no?
>
> Thanks,
> Mathias

Increasing stripe cache on the md device from 1027 to 32k or 16k
didn't make a difference, still around 3800KB/s reshape. Oh well,
we'll see if it's still alive in 5.5 days!
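
(For completeness, the only other knobs I know of are the global md sync speed
limits; raising the minimum forces md to keep reshaping even when the array is
busy. The values below are only examples:)

cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
echo 50000 > /proc/sys/dev/raid/speed_limit_min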

Cheers,

Re: Growing 6 HDD RAID5 to 7 HDD RAID6

on 30.04.2011 00:45:14 by mathias.buren

On 22 April 2011 11:05, Mathias Burén wrote:
> On 22 April 2011 10:39, Mathias Burén wrote:
> [...]
>
> Increasing stripe cache on the md device from 1027 to 32k or 16k
> didn't make a difference, still around 3800KB/s reshape. Oh well,
> we'll see if it's still alive in 5.5 days!
>
> Cheers,

It's alive!

md: md0: reshape done.
RAID conf printout:
--- level:6 rd:7 wd:7
disk 0, o:1, dev:sdg1
disk 1, o:1, dev:sdb1
disk 2, o:1, dev:sdd1
disk 3, o:1, dev:sdc1
disk 4, o:1, dev:sdf1
disk 5, o:1, dev:sdh1
disk 6, o:1, dev:sde1

$ sudo mdadm -D /dev/md0
Password:
/dev/md0:
Version : 1.2
Creation Time : Tue Oct 19 08:58:41 2010
Raid Level : raid6
Array Size : 9751756800 (9300.00 GiB 9985.80 GB)
Used Dev Size : 1950351360 (1860.00 GiB 1997.16 GB)
Raid Devices : 7
Total Devices : 7
Persistence : Superblock is persistent

Update Time : Fri Apr 29 23:44:50 2011
State : clean
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Name : ion:0 (local to host ion)
UUID : e6595c64:b3ae90b3:f01133ac:3f402d20
Events : 6158702

Number Major Minor RaidDevice State
0 8 97 0 active sync /dev/sdg1
1 8 17 1 active sync /dev/sdb1
4 8 49 2 active sync /dev/sdd1
3 8 33 3 active sync /dev/sdc1
5 8 81 4 active sync /dev/sdf1
6 8 113 5 active sync /dev/sdh1
7 8 65 6 active sync /dev/sde1
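
(The internal bitmap I removed before the conversion still needs to go back;
as Roman noted, it can simply be re-added now, presumably with:)

mdadm --grow /dev/md0 --bitmap=internal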

Yay :) thanks for the great software! Cheers,

/ Mathias