Raid 5 to Raid 1 (half of the data not required)
on 24.08.2011 01:41:11 by Mike Viau
Hello,
I am trying to convert my currently running raid 5 array into a raid 1.
All the guides I can see online are for the reverse direction, in which
one is converting/migrating a raid 1 to raid 5. I have intentionally
allocated only exactly half of the total raid 5 size. I would like to
create the raid 1 over /dev/sdb1 and /dev/sdc1 with the data on the
raid 5 running with the same drives plus /dev/sde1. Is this possible? I
wish to have the data redundantly over two hard drives without the
parity which is present in raid 5.
Thanks for any help in advance :)
# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Dec 20 09:48:07 2010
     Raid Level : raid5
     Array Size : 1953517568 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 976758784 (931.51 GiB 1000.20 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Tue Aug 23 11:34:00 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : HOST:0  (local to host HOST)
           UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
         Events : 55750

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       65        2      active sync   /dev/sde1
-M
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Raid 5 to Raid 1 (half of the data not required)
on 24.08.2011 02:46:49 by NeilBrown
On Tue, 23 Aug 2011 19:41:11 -0400, Mike Viau wrote:
>
> Hello,
>
> I am trying to convert my currently running raid 5 array into a raid 1.
> All the guides I can see online are for the reverse direction, in which
> one is converting/migrating a raid 1 to raid 5. I have intentionally
> allocated only exactly half of the total raid 5 size. I would like to
> create the raid 1 over /dev/sdb1 and /dev/sdc1 with the data on the
> raid 5 running with the same drives plus /dev/sde1. Is this possible? I
> wish to have the data redundantly over two hard drives without the
> parity which is present in raid 5.
Yes this is possible, though you will need a fairly new kernel (late 30's at
least) and mdadm.
And you need to be running ext3 because I think it is the only one you can
shrink.

1/ umount filesystem
2/ resize2fs /dev/md0 490G
     This makes the array use definitely less than half the space. It is
     safest to leave a bit of slack for relocated metadata or something.
     If you don't make this small enough some later step will fail, and
     you can then revert back to here and try again.
3/ mdadm --grow --array-size=490G /dev/md0
     This makes the array appear smaller without actually destroying any
     data.
4/ fsck -f /dev/md0
     This makes sure the filesystem inside the shrunk array is still OK.
     If there is a problem you can "mdadm --grow" to a bigger size and check
     again.

Only if the above all looks ok, continue. You can remount the filesystem at
this stage if you want to.

5/ mdadm --grow /dev/md0 --raid-disks=2
     If you didn't make the array-size small enough, this will fail.
     If you did it will start a 'reshape' which shuffles all the data around
     so it fits (with parity) on just two devices.
6/ mdadm --wait /dev/md0
7/ mdadm --grow /dev/md0 --level=1
     This instantly converts a 2-device RAID5 to a 2-device RAID1.
8/ mdadm --grow /dev/md0 --array-size=max
9/ resize2fs /dev/md0
     This will grow the filesystem up to fill the available space.

All done.
Please report success or failure or any interesting observations.
NeilBrown
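Collected in one place, the nine steps above could look like the following transcript (the 490G figure and /dev/md0 are taken from this thread; anyone adapting it should confirm each command succeeds, and that the fsck in step 4 is clean, before running the next one, rather than executing this unattended):

```shell
# 1-2: unmount and shrink the filesystem to well under half the array.
umount /dev/md0
resize2fs /dev/md0 490G

# 3-4: shrink the array's visible size, then verify the filesystem fits.
mdadm --grow --array-size=490G /dev/md0
fsck -f /dev/md0                       # must report clean before continuing

# 5-6: reshape the RAID5 onto two devices and wait for it to finish.
mdadm --grow /dev/md0 --raid-disks=2
mdadm --wait /dev/md0

# 7: a 2-device RAID5 converts to a 2-device RAID1 instantly.
mdadm --grow /dev/md0 --level=1

# 8-9: restore the full array size and grow the filesystem into it.
mdadm --grow /dev/md0 --array-size=max
resize2fs /dev/md0
```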
Re: Raid 5 to Raid 1 (half of the data not required)
on 24.08.2011 04:39:38 by NeilBrown
On Tue, 23 Aug 2011 22:18:12 -0400 Mike Viau wrote:
>
> > Yes this is possible, though you will need a fairly new kernel (late 30's at
> > least) and mdadm.
> >
>
> In your opinion is Debian 2.6.32-35 going to cut it? Not very late 30's, with mdadm - v3.1.4 - 31st August 2010
Should be OK. The core functionality went in in 2.6.29. There have been a
few bug fixes since then but they are for corner cases that you probably
won't hit.
>
> > And you need to be running ext3 because I think it is the only one you can
> > shrink.
> >
> > 1/ umount filesystem
> > 2/ resize2fs /dev/md0 490G
> > This makes the array use definitely less than half the space. It is
> > safest to leave a bit of slack for relocated metadata or something.
> > If you don't make this small enough some later step will fail, and
> > you can then revert back to here and try again.
> >
>
>
> The file system used was ext4, which is mounted off of an LVM logical volume inside a virtual machine :P
Nice of you to keep it simple...
ext4 isn't a problem. LVM shouldn't be, but it adds an extra step. You
first shrink the fs, then the lv, then the pv, then the RAID...
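With the LVM layer in between, the single shrink of steps 1-3 becomes a chain. A hedged sketch of that ordering (the masterVG/backupLV names appear later in this thread; the staggered sizes are illustrative, and each layer must stay at least as large as the one inside it):

```shell
# Shrink from the inside out: filesystem, then LV, then PV, then the array.
umount /dev/masterVG/backupLV
resize2fs /dev/masterVG/backupLV 450G        # fs smaller than the LV will be
lvreduce -L 460G /dev/masterVG/backupLV      # LV smaller than the PV will be
pvresize --setphysicalvolumesize 470G /dev/md0p1
mdadm --grow --array-size=490G /dev/md0      # array still larger than the PV
```

Growing back afterwards runs the same chain in the opposite order.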
>
> I am still able to run the first two steps, but am concerned about data loss on the underlying ext4 filesystem if I shrink it too much; 490G may not be possible. Other than that, the following steps sound do-able if the resize works.
>
> > 3/ mdadm --grow --array-size=490G /dev/md0
> > This makes the array appear smaller without actually destroying any data.
> > 4/ fsck -f /dev/md0
> > This makes sure the filesystem inside the shrunk array is still OK.
> > If there is a problem you can "mdadm --grow" to a bigger size and check
> > again.
> >
> > Only if the above all looks ok, continue. You can remount the filesystem at
> > this stage if you want to.
> >
> > 5/ mdadm --grow /dev/md0 --raid-disks=2
> >
> > If you didn't make the array-size small enough, this will fail.
> > If you did it will start a 'reshape' which shuffles all the data around
> > so it fits (With parity) on just two devices.
> >
> > 6/ mdadm --wait /dev/md0
> > 7/ mdadm --grow /dev/md0 --level=1
> > This instantly converts a 2-device RAID5 to a 2-device RAID1.
> > 8/ mdadm --grow /dev/md0 --array-size=max
> > 9/ resize2fs /dev/md0
> > This will grow the filesystem up to fill the available space.
> >
> > All done.
> >
> > Please report success or failure or any interesting observations.
> >
>
> I am not sure how crack-pot of a solution this would be, but could I:
>
> 1/ mdadm -r /dev/md0 /dev/sde1
> Remove /dev/sde1 from the raid 5 array
Here you have lost your redundancy .... your choice I guess.
>
> 2/ dd if=/dev/zero of=/dev/sde1 bs=512 count=1
> This clears the msdos mbr and clears the partitions
>
> 3/ parted, fdisk or cfdisk to create a new 1TB (or less is possible as well) ext4 partition on /dev/sde
>
> 4/ mkfs.ext4 /dev/sde1
>
> 5/ cp -R {mounted location of degraded /dev/md0 partition} {mounted location of /dev/sde1 partition}
> Aka backup
>
> 6/ mdadm --zero-superblock on /dev/sdb1 and /dev/sdc1
> Prep the two drives for the new raid array
Probably want to stop the array (mdadm -S /dev/md0) before you do that.
>
> 7/ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
> Create new raid 1 array on drives
>
> 8/ create LVM (pv,vg, and lv)
>
> 9/ parted, fdisk or cfdisk to create a new 1TB ext4 partition on LVM
>
> 10/ mkfs.ext4 on LV on /dev/md0
>
> 11/ cp -R {mounted location of /dev/sde1 partition} {mounted location of new /dev/md0 partition}
>
> Any thought/suggestion/correction to this proposed idea?
Doing two copies seems a bit wasteful.
- fail/remove sdb1
- create a 1-device RAID1 on sdb1 (or a 2 device RAID1 with a missing device).
- do the lvm, mkfs
- copy from old filesystem to the new filesystem
- stop the old array.
- add sdc1 to the new RAID1.
- If you made it a 1-device RAID1, --grow it to 2 devices.
Only one copy operation needed.
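As a hedged sketch, that single-copy outline might translate to the commands below (the new array name /dev/md1, the VG/LV names, sizes, and mount points are invented for illustration; redundancy is lost from the first command until the final resync completes):

```shell
# Break one drive out of the RAID5 and seed the new RAID1 with it,
# using "missing" as a placeholder for the second member.
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm --zero-superblock /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing

# LVM and filesystem on the new array, then the one copy operation.
pvcreate /dev/md1
vgcreate newVG /dev/md1
lvcreate -L 700G -n backupLV newVG
mkfs.ext4 /dev/newVG/backupLV
mount /dev/newVG/backupLV /mnt/new
cp -a /mnt/old/. /mnt/new/

# Retire the degraded RAID5 and complete the mirror.
umount /mnt/old
mdadm -S /dev/md0
mdadm --zero-superblock /dev/sdc1
mdadm /dev/md1 --add /dev/sdc1
```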
NeilBrown
>
>
> Thanks again :)
Re: Raid 5 to Raid 1 (half of the data not required)
on 24.08.2011 09:34:54 by Robin Hill
On Wed Aug 24, 2011 at 12:39:38 +1000, NeilBrown wrote:
> On Tue, 23 Aug 2011 22:18:12 -0400, Mike Viau wrote:
>
> > I am not sure how crack-pot of a solution this would be, but could I:
> >
> > 1/ mdadm -r /dev/md0 /dev/sde1
> > Remove /dev/sde1 from the raid 5 array
>
> Here you have lost your redundancy .... your choice I guess.
>
> > 2/ dd if=/dev/zero of=/dev/sde1 bs=512 count=1
> > This clears the msdos mbr and clears the partitions
> >
> > 3/ parted, fdisk or cfdisk to create a new 1TB (or less is possible as well) ext4 partition on /dev/sde
> >
> > 4/ mkfs.ext4 /dev/sde1
> >
> > 5/ cp -R {mounted location of degraded /dev/md0 partition} {mounted location of /dev/sde1 partition}
> > Aka backup
> >
If you're wanting to backup, "cp -a" would be better than "cp -R",
otherwise you lose attributes & symlinks.
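The attribute difference is easy to see with timestamps alone; a quick demonstration on scratch paths under /tmp (nothing here comes from the thread):

```shell
# Set a known mtime on a file, copy it with both flags, and compare.
rm -rf /tmp/cpdemo && mkdir -p /tmp/cpdemo/src
echo data > /tmp/cpdemo/src/file
touch -d '2020-01-01 00:00:00 UTC' /tmp/cpdemo/src/file

cp -R /tmp/cpdemo/src /tmp/cpdemo/copyR   # plain recursive copy
cp -a /tmp/cpdemo/src /tmp/cpdemo/copyA   # archive mode: -R plus preserve attributes/links

stat -c %Y /tmp/cpdemo/copyA/file   # 1577836800 -- the original mtime survives
stat -c %Y /tmp/cpdemo/copyR/file   # the time of the copy, not the original
```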
Cheers,
Robin
--
 ___
( ' }     |   Robin Hill                 |
/ / )     | Little Jim says ....         |
// !!     |  "He fallen in de water !!"  |
Re: Raid 5 to Raid 1 (half of the data not required)
on 24.08.2011 10:03:55 by Gordon Henderson
On Tue, 23 Aug 2011, Mike Viau wrote:
>
> I am trying to convert my currently running raid 5 array into a raid 1. [...]
3-drive RAID5 -> 2-drive RAID1...
Neil's solution is interesting (and he should know :), but personally, I'd
probably take a much simpler approach without FS resizing, etc., so this
allows you to change filesystems or use something other than ext3...
So start with taking a backup :)
Then verify the existing RAID5 array (echo "check" > .. etc.) and wait...
Then "break" the array by failing /dev/sdb1
Create a single-drive RAID1 using /dev/sdb1 and "missing"
mkfs the filesystem of choice on this new MD drive and mount it.
use cp -a (or rsync) to copy data from the raid5 array to the new raid1
array.
stop the raid5
hot-add /dev/sdc1 into the new raid1
then fiddle with whatever boot options, etc. to make sure the new drive is
assembled at boot time, mounted, etc.
This isn't as "glamorous" as Neils method involving lots of mdadm
commands, shrinks and grows, but sometimes it's good to keep things at
a simpler level?
Well, it works for me, anyway!
Gordon
Re: Raid 5 to Raid 1 (half of the data not required)
on 24.08.2011 10:21:32 by Mikael Abrahamsson
On Wed, 24 Aug 2011, Gordon Henderson wrote:
> This isn't as "glamorous" as Neil's method involving lots of mdadm
> commands, shrinks and grows, but sometimes it's good to keep things at a
> simpler level?
Another way would be to add the new raid1 with a missing drive to the VG,
and pvmove all extents off of the existing raid5 md PV, then vgreduce away
from it, stop the raid5, zero-superblock, and add one drive to add
redundancy for the raid1.
But that has little to do with Linux raid, and everything to do with LVM. It
also means you can do everything online, since pvmove doesn't require taking
anything offline.
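A sketch of that ordering, assuming the allocated data fits on a single 1 TB member and reusing the PV name /dev/md0p1 and VG name masterVG that appear later in this thread (which drive seeds the new array is illustrative):

```shell
# Free one drive from the RAID5 and seed the new RAID1 with it.
mdadm /dev/md0 --fail /dev/sde1 --remove /dev/sde1
mdadm --zero-superblock /dev/sde1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde1 missing

# Bring the new array into the VG and migrate all extents online.
pvcreate /dev/md1
vgextend masterVG /dev/md1
pvmove /dev/md0p1            # runs while filesystems stay mounted
vgreduce masterVG /dev/md0p1

# Retire the RAID5 and restore redundancy on the RAID1.
mdadm -S /dev/md0
mdadm --zero-superblock /dev/sdb1 /dev/sdc1
mdadm /dev/md1 --add /dev/sdb1
```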
--
Mikael Abrahamsson email: swmike@swm.pp.se
Re: Raid 5 to Raid 1 (half of the data not required)
on 24.08.2011 10:42:35 by NeilBrown
On Wed, 24 Aug 2011 10:21:32 +0200 (CEST), Mikael Abrahamsson wrote:
> On Wed, 24 Aug 2011, Gordon Henderson wrote:
>
> > This isn't as "glamorous" as Neil's method involving lots of mdadm
> > commands, shrinks and grows, but sometimes it's good to keep things at a
> > simpler level?
>
> Another way would be to add the new raid1 with missing drive to the lv,
> and pvmove all extents off of the existing raid5 md pv, then vgreduce away
> from it, stop the raid5, zero-superblock, and add one drive to add
> redundancy for the raid1.
>
> But that has little to do with linux raid, and all to do with LVM. It also
> means you can do everything online since pvmove doesn't require to offline
> anything.
>
There are certainly lots of approaches. :-)
But every approach will require either copying or shrinking the filesystem,
and as extX doesn't support online shrinking, the filesystem will have to be
effectively off-line while that shrink happens.
(If you shrink by copying, then it could be technically on-line, but it had
better not be written to.)
NeilBrown
RE: Raid 5 to Raid 1 (half of the data not required)
on 26.08.2011 02:11:46 by Mike Viau
> On Wed, 24 Aug 2011 18:42:35 +1000, NeilBrown wrote:
>
> There are certainly lots of approaches. :-)
> But every approach will require either copying or shrinking the filesystem,
> and as extX doesn't support online shrinking, the filesystem will have to be
> effectively off-line while that shrink happens.
> (If you shrink by copying, then it could be technically on-line, but it had
> better not be written to.)
>
Wow! Thank you so much everyone for your feedback, I am truly very grateful :)
Before tackling this task I plan to delete some unnecessary files to have
less to back up, then make the all-so-important backup, and lastly attempt
the migration. I had to remember how I decided to build up the LVM on the
RAID 5 array :)
Model: Linux Software RAID Array (md)
Disk /dev/md0: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2000GB  2000GB  primary               lvm
OR
  --- Physical volume ---
  PV Name               /dev/md0p1
  VG Name               masterVG
  PV Size               1.82 TiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              476932
  Free PE               261031
  Allocated PE          215901
  PV UUID               xiS8is-RR6D-Swre-IHQN-yGY2-cNmJ-wGGBY7
AND
  --- Volume group ---
  VG Name               masterVG
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TiB
  PE Size               4.00 MiB
  Total PE              476932
  Alloc PE / Size       215901 / 843.36 GiB
  Free  PE / Size       261031 / 1019.65 GiB
  VG UUID               eoZgIp-50Wb-Lrhg-Sawt-rWDV-YIDy-Ez2Glr
So it looks like the entire RAID 5 array is one LVM physical volume and
then one volume group.
  --- Logical volume ---
  LV Name                /dev/masterVG/backupLV
  VG Name                masterVG
  LV UUID                wc61ER-uoNn-ynXI-2v64-wpa8-ON3g-im4fo8
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                700.00 GiB
  Current LE             179200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           254:9
The logical volume has a size of 700.00 GiB, so this is less than the 1 TB
size in which I plan the newly migrated RAID 1 mdadm array to be (with two
1 TB drives). I don't think I will therefore have any need to shrink the
ext4 filesystem, hopefully meaning I can complete the entire process over
some time while keeping the data available or online.
I remember that I had good reasons for using LVM, but I will have to get
reacquainted again with the commands of LVM like pv/vg/lv[move/reduce]...
Thanks again to everyone for their help :D