Raid 5 rebuild with only 2 spare devices
on 10.02.2011 19:03:30 by Thomas Heilberg
Hi!
Sorry for my bad English. I'm from Austria and this is also my first
"mailinglist-post".
I have a problem with my RAID5. The raid has only 1 active device out of 3; the other 2 devices are detected as spares.
This is what happens when I try to assemble the raid (I'm using loop devices because I'm working with backup files):
root@backup-server:/media# mdadm --assemble --force /dev/md2 /dev/loop0
/dev/loop1 /dev/loop2
mdadm: /dev/md2 assembled from 1 drive and 2 spares - not enough to
start the array.
root@backup-server:/media# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md2 : inactive loop1[0](S) loop2[4](S) loop0[3](S)
4390443648 blocks
unused devices: <none>
root@backup-server:/media# mdadm -R /dev/md2
mdadm: failed to run array /dev/md2: Input/output error
root@backup-server:/media# mdadm -D /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Thu Nov 19 21:09:37 2009
Raid Level : raid5
Used Dev Size : 1463481216 (1395.68 GiB 1498.60 GB)
Raid Devices : 3
Total Devices : 1
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Sun Nov 14 14:12:44 2010
State : active, FAILED, Not Started
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 9665c475:31f17aa2:83a3570a:c5b3b84e
Events : 0.3352467
Number Major Minor RaidDevice State
0 7 1 0 active sync /dev/loop1
1 0 0 1 removed
2 0 0 2 removed
root@backup-server:/media# mdadm /dev/md2 -a /dev/loop0
mdadm: re-added /dev/loop0
root@backup-server:/media# mdadm /dev/md2 -a /dev/loop2
mdadm: re-added /dev/loop2
root@backup-server:/media# mdadm -D /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Thu Nov 19 21:09:37 2009
Raid Level : raid5
Used Dev Size : 1463481216 (1395.68 GiB 1498.60 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Sun Nov 14 14:12:44 2010
State : active, FAILED, Not Started
Active Devices : 1
Working Devices : 3
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 64K
UUID : 9665c475:31f17aa2:83a3570a:c5b3b84e
Events : 0.3352467
Number Major Minor RaidDevice State
0 7 1 0 active sync /dev/loop1
1 0 0 1 removed
2 0 0 2 removed
3 7 0 - spare /dev/loop0
4 7 2 - spare /dev/loop2
I also tried to recreate the raid:
root@backup-server:/media# mdadm -Cv /dev/md2 -n3 -l5 /dev/loop0
/dev/loop1 /dev/loop2
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: /dev/loop0 appears to be part of a raid array:
level=raid5 devices=3 ctime=Thu Nov 19 21:09:37 2009
mdadm: layout defaults to left-symmetric
mdadm: /dev/loop1 appears to be part of a raid array:
level=raid5 devices=3 ctime=Thu Nov 19 21:09:37 2009
mdadm: layout defaults to left-symmetric
mdadm: /dev/loop2 appears to be part of a raid array:
level=raid5 devices=3 ctime=Thu Nov 19 21:09:37 2009
mdadm: size set to 1463479808K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
root@backup-server:/media# mdadm -D /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Fri Feb 4 17:05:18 2011
Raid Level : raid5
Array Size : 2926959616 (2791.37 GiB 2997.21 GB)
Used Dev Size : 1463479808 (1395.68 GiB 1498.60 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Fri Feb 4 17:05:18 2011
State : clean, degraded
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : backup-server:2 (local to host backup-server)
UUID : c37336d0:9811f9d1:294aa588:a85a5096
Events : 0
Number Major Minor RaidDevice State
0 7 0 0 active sync /dev/loop0
1 7 1 1 active sync /dev/loop1
2 0 0 2 removed
3 7 2 - spare /dev/loop2
root@backup-server:/media# mdadm /dev/md2 -r /dev/loop2
mdadm: hot removed /dev/loop2 from /dev/md2
root@backup-server:/media# mdadm /dev/md2 -a /dev/loop2
mdadm: re-added /dev/loop2
root@backup-server:/media# mdadm -D /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Fri Feb 4 17:05:18 2011
Raid Level : raid5
Array Size : 2926959616 (2791.37 GiB 2997.21 GB)
Used Dev Size : 1463479808 (1395.68 GiB 1498.60 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Fri Feb 4 17:15:25 2011
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Rebuild Status : 0% complete
Name : backup-server:2 (local to host backup-server)
UUID : c37336d0:9811f9d1:294aa588:a85a5096
Events : 6
Number Major Minor RaidDevice State
0 7 0 0 active sync /dev/loop0
1 7 1 1 active sync /dev/loop1
3 7 2 2 spare rebuilding /dev/loop2
root@backup-server:/media# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md2 : active raid5 loop2[3] loop0[0] loop1[1]
2926959616 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[=>...................] recovery = 5.0% (74496424/1463479808) finish=188.7min speed=122624K/sec
unused devices: <none>
When I do that I can't find the LVM that should be inside the raid, so I reloaded the backup and I'm back at the beginning.
I know that my data is more or less intact because I can find a few files with testdisk's photorec (after I rebuilt the raid with the command above).
Best regards,
Thomas
Re: Raid 5 rebuild with only 2 spare devices
on 10.02.2011 19:53:55 by Phil Turmel
Hi Thomas,
On 02/10/2011 01:03 PM, Thomas Heilberg wrote:
> Hi!
>
> Sorry for my bad English. I'm from Austria and this is also my first "mailinglist-post".
Welcome! (Your English looks fine to me--and I've had 40+ years of practice.)
>
> I have a problem with my RAID5. The raid has only 1 active devices out of 3. The other 2 devices are detected as spare.
> This is what happens when I try to assemble the raid(I'm using loop devices because I'm working with backup files):
Working from backups is a very good plan!
> root@backup-server:/media# mdadm --assemble --force /dev/md2 /dev/loop0 /dev/loop1 /dev/loop2
> mdadm: /dev/md2 assembled from 1 drive and 2 spares - not enough to start the array.
>
> root@backup-server:/media# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> md2 : inactive loop1[0](S) loop2[4](S) loop0[3](S)
> 4390443648 blocks
>
> unused devices:
>
> root@backup-server:/media# mdadm -R /dev/md2
> mdadm: failed to run array /dev/md2: Input/output error
>
> root@backup-server:/media# mdadm -D /dev/md2
> /dev/md2:
> Version : 0.90
> Creation Time : Thu Nov 19 21:09:37 2009
> Raid Level : raid5
> Used Dev Size : 1463481216 (1395.68 GiB 1498.60 GB)
> Raid Devices : 3
> Total Devices : 1
> Preferred Minor : 2
> Persistence : Superblock is persistent
>
> Update Time : Sun Nov 14 14:12:44 2010
> State : active, FAILED, Not Started
> Active Devices : 1
> Working Devices : 1
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> UUID : 9665c475:31f17aa2:83a3570a:c5b3b84e
> Events : 0.3352467
>
> Number Major Minor RaidDevice State
> 0 7 1 0 active sync /dev/loop1
> 1 0 0 1 removed
> 2 0 0 2 removed
Hmmm. Not enough info here, and further steps destroy it. Good thing you started over.
Please show "mdadm -E /dev/loop[0-2]" on fresh loop copies *before* trying any "create" or "add" operations.
> root@backup-server:/media# mdadm /dev/md2 -a /dev/loop0
> mdadm: re-added /dev/loop0
> root@backup-server:/media# mdadm /dev/md2 -a /dev/loop2
> mdadm: re-added /dev/loop2
> root@backup-server:/media# mdadm -D /dev/md2
> /dev/md2:
> Version : 0.90
> Creation Time : Thu Nov 19 21:09:37 2009
> Raid Level : raid5
> Used Dev Size : 1463481216 (1395.68 GiB 1498.60 GB)
> Raid Devices : 3
> Total Devices : 3
> Preferred Minor : 2
> Persistence : Superblock is persistent
>
> Update Time : Sun Nov 14 14:12:44 2010
> State : active, FAILED, Not Started
> Active Devices : 1
> Working Devices : 3
> Failed Devices : 0
> Spare Devices : 2
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> UUID : 9665c475:31f17aa2:83a3570a:c5b3b84e
> Events : 0.3352467
>
> Number Major Minor RaidDevice State
> 0 7 1 0 active sync /dev/loop1
> 1 0 0 1 removed
> 2 0 0 2 removed
>
> 3 7 0 - spare /dev/loop0
> 4 7 2 - spare /dev/loop2
>
> I also tried to recreate the raid:
>
> root@backup-server:/media# mdadm -Cv /dev/md2 -n3 -l5 /dev/loop0 /dev/loop1 /dev/loop2
> mdadm: layout defaults to left-symmetric
> mdadm: chunk size defaults to 512K
> mdadm: layout defaults to left-symmetric
> mdadm: layout defaults to left-symmetric
> mdadm: /dev/loop0 appears to be part of a raid array:
> level=raid5 devices=3 ctime=Thu Nov 19 21:09:37 2009
> mdadm: layout defaults to left-symmetric
> mdadm: /dev/loop1 appears to be part of a raid array:
> level=raid5 devices=3 ctime=Thu Nov 19 21:09:37 2009
> mdadm: layout defaults to left-symmetric
> mdadm: /dev/loop2 appears to be part of a raid array:
> level=raid5 devices=3 ctime=Thu Nov 19 21:09:37 2009
> mdadm: size set to 1463479808K
> Continue creating array? y
> mdadm: Defaulting to version 1.2 metadata
> mdadm: array /dev/md2 started.
Yeah, mdadm was trying to tell you not to do that. "--assume-clean" is really important when trying to recreate an array with existing data.
[trim /]
If the problem is just the event counts, "mdadm --assemble --force" is probably what you want, followed by "mdadm --readonly". If pvscan shows your LVM subsystem at that point, try an fsck to see how much trouble you are in.
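In shell terms, roughly this (untested; the volume group and LV names below are placeholders until pvscan tells you what is actually there):

mdadm --assemble --force /dev/md2 /dev/loop0 /dev/loop1 /dev/loop2
mdadm --readonly /dev/md2
pvscan
vgchange -ay yourvg
fsck -n /dev/yourvg/yourlv

The -n keeps fsck from writing anything while you assess the damage.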
Phil
Re: Raid 5 rebuild with only 2 spare devices
on 10.02.2011 20:40:48 by Thomas Heilberg
Hi,
thanks for the quick answer.
2011/2/10 Phil Turmel:
> Working from backups is a very good plan!
Well, not having a backup is what brought me into this problem, so I learned from my mistake.
> Hmmm. Not enough info here, and further steps destroy it. Good thing you started over.
>
> Please show "mdadm -E /dev/loop[0-2]" on fresh loop copies *before* trying any "create" or "add" operations.
Unfortunately I'm not that familiar with the -E option, so I don't really understand what all of that means. But I think it's interesting that there are sometimes 4 devices although the raid only has 3 in reality.
root@backup-server:/media# mdadm -E /dev/loop[0-2]
/dev/loop0:
Magic : a92b4efc
Version : 0.90.00
UUID : 9665c475:31f17aa2:83a3570a:c5b3b84e
Creation Time : Thu Nov 19 21:09:37 2009
Raid Level : raid5
Used Dev Size : 1463481216 (1395.68 GiB 1498.60 GB)
Array Size : 2926962432 (2791.37 GiB 2997.21 GB)
Raid Devices : 3
Total Devices : 2
Preferred Minor : 2
Update Time : Sun Nov 14 02:08:16 2010
State : active
Active Devices : 1
Working Devices : 2
Failed Devices : 2
Spare Devices : 1
Checksum : aa2d9609 - correct
Events : 3352465
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 3 8 35 3 spare
0 0 8 19 0 active sync
1 1 0 0 1 faulty removed
2 2 0 0 2 faulty removed
3 3 8 35 3 spare
/dev/loop1:
Magic : a92b4efc
Version : 0.90.00
UUID : 9665c475:31f17aa2:83a3570a:c5b3b84e
Creation Time : Thu Nov 19 21:09:37 2009
Raid Level : raid5
Used Dev Size : 1463481216 (1395.68 GiB 1498.60 GB)
Array Size : 2926962432 (2791.37 GiB 2997.21 GB)
Raid Devices : 3
Total Devices : 1
Preferred Minor : 2
Update Time : Sun Nov 14 14:12:44 2010
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 2
Spare Devices : 0
Checksum : aa2e3f94 - correct
Events : 3352467
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 19 0 active sync
0 0 8 19 0 active sync
1 1 0 0 1 faulty removed
2 2 0 0 2 faulty removed
/dev/loop2:
Magic : a92b4efc
Version : 0.90.00
UUID : 9665c475:31f17aa2:83a3570a:c5b3b84e
Creation Time : Thu Nov 19 21:09:37 2009
Raid Level : raid5
Used Dev Size : 1463481216 (1395.68 GiB 1498.60 GB)
Array Size : 2926962432 (2791.37 GiB 2997.21 GB)
Raid Devices : 3
Total Devices : 1
Preferred Minor : 2
Update Time : Sun Nov 14 01:41:40 2010
State : active
Active Devices : 1
Working Devices : 1
Failed Devices : 2
Spare Devices : 0
Checksum : aa2d8f80 - correct
Events : 3352459
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 4 8 3 -1 spare
0 0 8 19 0 active sync
1 1 0 0 1 faulty removed
2 2 0 0 2 faulty removed
> Yeah, mdadm was trying to tell you not to do that. "--assume-clean" is really important when trying to recreate an array with existing data.
>
> [trim /]
>
> If the problem is just the event counts, "mdadm --assemble --force" is probably what you want, followed by "mdadm --readonly". If pvscan shows your LVM subsystem at that point, try an fsck to see how much trouble you are in.
>
> Phil
>
I will try that at some point but not right now because the restore
from my backups takes about 15 hours.
Thomas
Re: Raid 5 rebuild with only 2 spare devices
on 10.02.2011 21:07:34 by John Robinson
On 10/02/2011 18:03, Thomas Heilberg wrote:
[...]
> root@backup-server:/media# mdadm -D /dev/md2
> /dev/md2:
> Version : 0.90
[...]
> Chunk Size : 64K
[...]
> I also tried to recreate the raid:
>
> root@backup-server:/media# mdadm -Cv /dev/md2 -n3 -l5 /dev/loop0
> /dev/loop1 /dev/loop2
> mdadm: layout defaults to left-symmetric
> mdadm: chunk size defaults to 512K
[...]
> mdadm: Defaulting to version 1.2 metadata
Those loop devices are now trashed since you didn't re-create the array
with exactly the parameters with which it was initially created. Your
settings make me think the array was created with an older version of
mdadm; the defaults for metadata version and chunk size changed a little
while ago. Anyway, if you're trying again, you should specify -e 0.90 -c
64. While you're at it, add --assume-clean to avoid any rebuild, which
in your case may in fact destroy good data (though the array's parity
would end up consistent). Or if as you noted in your other reply you're
going to have to wait 15 hours before trying anything, maybe wait until
The Boss[1] makes a more intelligent suggestion than I can; he usually
posts at times that appear to be overnight to me but are presumably
sensible times of day for him.
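Putting those options together, the command would be something like this (untested, on fresh copies of the images only, and with the loop devices in their original order):

mdadm -C /dev/md2 -e 0.90 -c 64 -l5 -n3 --assume-clean /dev/loop0 /dev/loop1 /dev/loop2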
Cheers,
John.
[1] Neil Brown, who lives in Sydney where it's revoltingly early in the
morning.
Re: Raid 5 rebuild with only 2 spare devices
on 12.02.2011 19:30:57 by Thomas Heilberg
2011/2/10 John Robinson:
> Those loop devices are now trashed since you didn't re-create the array with
> exactly the parameters with which it was initially created. Your settings
> make me think the array was created with an older version of mdadm; the
> defaults for metadata version and chunk size changed a little while ago.
> Anyway, if you're trying again, you should specify -e 0.90 -c 64. While
> you're at it, add --assume-clean to avoid any rebuild, which in your case
> may in fact destroy good data (though the array's parity would end up
> consistent). Or if as you noted in your other reply you're going to have to
> wait 15 hours before trying anything, maybe wait until The Boss[1] makes a
> more intelligent suggestion than I can; he usually posts at times that
> appear to be overnight to me but are presumably sensible times of day for
> him.
It worked! Although I am not quite sure why.
This is what I did:
root@backup-server:/media# mdadm -Cv /dev/md2 -e 0.90 -c 64
--assume-clean -n3 -l5 /dev/loop[012]
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: /dev/loop0 appears to be part of a raid array:
level=raid5 devices=3 ctime=Thu Nov 19 21:09:37 2009
mdadm: layout defaults to left-symmetric
mdadm: /dev/loop1 appears to be part of a raid array:
level=raid5 devices=3 ctime=Thu Nov 19 21:09:37 2009
mdadm: layout defaults to left-symmetric
mdadm: /dev/loop2 appears to be part of a raid array:
level=raid5 devices=3 ctime=Thu Nov 19 21:09:37 2009
mdadm: size set to 1463481216K
Continue creating array? y
mdadm: array /dev/md2 started.
root@backup-server:/media# mdadm -D /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Sat Feb 12 18:25:55 2011
Raid Level : raid5
Array Size : 2926962432 (2791.37 GiB 2997.21 GB)
Used Dev Size : 1463481216 (1395.68 GiB 1498.60 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Sat Feb 12 18:25:55 2011
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : b5a7fcfb:b98b8cb8:41761e78:ef14cd93 (local to host backup-server)
Events : 0.1
Number Major Minor RaidDevice State
0 7 0 0 active sync /dev/loop0
1 7 1 1 active sync /dev/loop1
2 7 2 2 active sync /dev/loop2
root@backup-server:/media# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md2 : active (auto-read-only) raid5 loop2[2] loop1[1] loop0[0]
2926962432 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
After that pvscan found my LVM:
root@backup-server:/media# pvscan
PV /dev/md2 VG server lvm2 [2,73 TiB / 86,37 GiB free]
root@backup-server:/media# vgscan
Reading all physical volumes. This may take a while...
Found volume group "server" using metadata type lvm2
root@backup-server:/media# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
daten server -wi--- 2,58t
gentoo server -wi--- 20,00g
home server -wi--- 20,00g
root server -wi--- 25,00g
root@backup-server:/media# vgchange -ay
4 logical volume(s) in volume group "server" now active
Then of course I checked all filesystems with e2fsck and it appears that all my data is OK. I'm so happy, thank you both for the help. :)
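In case it helps anyone else, a read-only pass per logical volume is enough to see the state before trusting the data, something like this (LV names as in the lvs output above; -n makes e2fsck report problems without repairing anything):

e2fsck -f -n /dev/server/root
e2fsck -f -n /dev/server/gentoo
e2fsck -f -n /dev/server/home
e2fsck -f -n /dev/server/daten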
But there is one thing I don't understand. I did that recreate procedure 2 times. The first time it didn't work and I had to restore the raid partitions from my backup (this time onto a btrfs partition in the hope that I could use its snapshot feature), and then it worked. Is it possible that the order of the devices is important for the recreate process?
I mean mdadm -C ... /dev/loop1 /dev/loop0 /dev/loop2 instead of the normal order? Because I did that by mistake (or to be more precise, I "mounted" the second image onto loop0).
This would be important to me because then I could just recreate my raid directly and wouldn't need to copy 2.2 TB over LAN.
Anyway thank you again for the help.
Thomas
Re: Raid 5 rebuild with only 2 spare devices
on 12.02.2011 19:48:54 by Phil Turmel
On 02/12/2011 01:30 PM, Thomas Heilberg wrote:
> 2011/2/10 John Robinson :
>
>> Those loop devices are now trashed since you didn't re-create the array with
>> exactly the parameters with which it was initially created. Your settings
>> make me think the array was created with an older version of mdadm; the
>> defaults for metadata version and chunk size changed a little while ago.
>> Anyway, if you're trying again, you should specify -e 0.90 -c 64. While
>> you're at it, add --assume-clean to avoid any rebuild, which in your case
>> may in fact destroy good data (though the array's parity would end up
>> consistent). Or if as you noted in your other reply you're going to have to
>> wait 15 hours before trying anything, maybe wait until The Boss[1] makes a
>> more intelligent suggestion than I can; he usually posts at times that
>> appear to be overnight to me but are presumably sensible times of day for
>> him.
>
> It worked! Although I am not quite sure why.
Wonderful!
[trim /]
> But there is one thing I don't understand. I did that recreating act 2
> times. The first time it didn't work and I had to restore the
> raid-partitions from my backup(this time onto a btrfs partition in the
> hope I could use its snapshot feature) and then it worked. Is it
> possible that the order of the devices is important for the recreate
> process?
> I mean mdadm -C... /dev/loop1 /dev/loop0 /dev/loop2 instead of the
> normal order? Because I did that by mistake(or to be more precise I
> "mounted" the second image into loop0)
Yes, the order of devices matters for "create". With the original images, "mdadm -E" on each should report which "slot" or raid device number they expect to be. That's used in normal assembly to keep everything in order (the kernel doesn't guarantee consistent block device names or load order).
So, if you are recreating an array (with --assume-clean), you need to specify them in the same order that "mdadm -E" sees.
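With 0.90 metadata you can read that slot straight off the examine output; a quick sketch (device names as in your mails):

for d in /dev/loop0 /dev/loop1 /dev/loop2; do
    echo "$d: $(mdadm -E $d | grep '^this')"
done

The RaidDevice column of the "this" line is the slot that member expects; put the slot-0 device first on the --create command line, slot 1 second, and so on.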
Phil