#1: mdadm: Invalid Argument ("cannot start dirty degraded array")

Posted on 2004-11-09 17:15:57 by David Wuertele

I have a Gentoo system (kernel 2.6.8-gentoo-r3) with a 7-drive RAID5
array. Recently that array went down, and I was advised by the list
to try mdadm. I was unsuccessful, but perhaps someone here can advise
me where I went wrong.

When I boot, I see "Starting up RAID devices: ... * Trying
md0... [ !!FAILED ]" and the system drops me to a shell. I type:

# cat /proc/mdstat
Personalities : [raid1] [raid5]
md0 : inactive hdm4[0] hde2[6] hdo2[5] hdh2[4] hdf2[3] hdg2[2]
1464789888 blocks
unused devices: <none>
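
(The bracketed numbers are the members' slot indices: slots 0 and 2
through 6 are present, but slot 1 is not.)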

OK, the array is missing partition hdp2. dmesg says it has an invalid
superblock:

# dmesg | grep hdp
ide7: BM-DMA at 0xd808-0xd80f, BIOS settings: hdo:DMA, hdp:DMA
hdp: WDC WD2500JB-00GVA0, ATA DISK drive
hdp: max request size: 1024KiB
hdp: 488397168 sectors (250059 MB) w/8192KiB Cache, CHS=30401/255/63, UDMA (100)
md: invalid raid superblock magic on hdp2
md: hdp2 has invalid sb, not importing!
Adding 64220k swap on /dev/hdp1. Priority:-2 extents:1
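
For what it's worth, mdadm can dump whatever superblock it finds on a
member directly, so a generic invocation like

# mdadm --examine /dev/hdp2

(output not shown here) should confirm whether the superblock on hdp2
is really unreadable.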

I didn't see any indication that there is anything wrong with the hdp
drive. Here is my /etc/mdadm.conf file:

# cat /etc/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=7 UUID=d312c423:e2eeeff5:3401806f:ab10e3c
   devices=/dev/ide/host2/bus0/target0/lun0/part2,/dev/ide/host2/bus0/target1/lun0/part2,/dev/ide/host2/bus1/target0/lun0/part2,/dev/ide/host2/bus1/target1/lun0/part2,/dev/ide/host6/bus0/target0/lun0/part4,/dev/ide/host6/bus1/target0/lun0/part2
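
A working member's superblock also records the array UUID, which should
match the UUID= line above; something like this would show it:

# mdadm --examine /dev/hdg2 | grep -i uuid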

Since /proc/mdstat reports that six of the seven drives are already
assembled, I tried running as-is:

# mdadm --run /dev/md0
mdadm: failed to run array /dev/md0: Invalid argument
# mdadm -v --run --force /dev/md0
mdadm: failed to run array /dev/md0: Invalid argument

Hmm... not very descriptive. I looked at the end of dmesg again for
more hints:

# dmesg | tail -18
md: pers->run() failed ...
raid5: device hdm4 operational as raid disk 0
raid5: device hde2 operational as raid disk 6
raid5: device hdo2 operational as raid disk 5
raid5: device hdh2 operational as raid disk 4
raid5: device hdf2 operational as raid disk 3
raid5: device hdg2 operational as raid disk 2
raid5: cannot start dirty degraded array for md0
RAID5 conf printout:
--- rd:7 wd:6 fd:1
disk 0, o:1, dev:hdm4
disk 2, o:1, dev:hdg2
disk 3, o:1, dev:hdf2
disk 4, o:1, dev:hdh2
disk 5, o:1, dev:hdo2
disk 6, o:1, dev:hde2
raid5: failed to run raid set md0
md: pers->run() failed ...
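
(If I read the conf printout right, rd:7 wd:6 fd:1 means 7 raid disks,
6 working, 1 failed: the array is degraded, and the "dirty" part means
it wasn't shut down cleanly, so md apparently refuses to start it on
its own because the parity may be inconsistent.)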

Any suggestions?
Thanks,
Dave


#2: RE: mdadm: Invalid Argument ("cannot start dirty degraded array")

Posted on 2004-11-09 17:31:57 by bugzilla

Someone had a similar problem a few days ago.

Try stopping the array, then starting it.
mdadm -S /dev/md0
mdadm -A /dev/md0 --scan
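
If it still refuses to start, mdadm --detail /dev/md0 may say a little
more about what state it thinks the array is in.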

Also, test the failed disk with this command:
dd if=/dev/hdp of=/dev/null bs=1024k
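
If dd hits a bad sector it will stop with an I/O error at the offending
offset. If you have smartmontools installed, the drive's own error log
is worth checking too:

smartctl -a /dev/hdp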

Guy


#3: Re: mdadm: Invalid Argument ("cannot start dirty degraded array")

Posted on 2004-11-09 22:41:18 by Mark Thompson

David Wuertele wrote:
> I have a Gentoo system (kernel 2.6.8-gentoo-r3) with a 7-drive RAID5
> array. Recently that array went down, and I was advised by the list
> to try mdadm. [...]
>
> raid5: cannot start dirty degraded array for md0
> [...]
> Any suggestions?

Hey there,

I had the exact same issue on the weekend; this is how I fixed it:
mdadm -S /dev/md0
mdadm -Af /dev/md0 /dev/hdm4 /dev/hde2 /dev/hdo2 /dev/hdh2 /dev/hdf2 /dev/hdg2

That -should- start the array without hdp2 (-A assembles; -f forces the
assembly even though the superblock is marked dirty). Once it's
started, add /dev/hdp2 back to the array and it should be all good:

mdadm -a /dev/md0 /dev/hdp2
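
Once hdp2 is back in, md will rebuild it from the other six members'
data and parity; you can watch the resync progress with something like:

watch cat /proc/mdstat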

Cheers,
Mark

#4: Re: mdadm: Invalid Argument ("cannot start dirty degraded array")

Posted on 2004-11-10 05:42:17 by David Wuertele

Mark> mdadm -S /dev/md0

OK, that worked.

Mark> mdadm -Af /dev/md0 /dev/hdm4 /dev/hde2 /dev/hdo2 /dev/hdh2 /dev/hdf2
Mark> /dev/hdg2

This gives me a "Segmentation fault"!!
