mdadm / RAID, a few questions
on 29.10.2010 16:18:09 by mathias.buren
Hi,
I've a few questions in relation to mdadm and performance. System
details follow below:
Intel Atom 330 @ 1.6 GHz (dual-core, HT), 4 GB RAM
05:00.0 SCSI storage controller: HighPoint Technologies, Inc.
RocketRAID 230x 4 Port SATA-II Controller (rev 02)
00:0b.0 SATA controller: nVidia Corporation MCP79 AHCI Controller (rev b1)
/dev/sda:
Model=Corsair CSSD-F60GB2, FwRev=1.1, SerialNo=10326505580009990027
/dev/sdb:
Model=WDC WD20EARS-00MVWB0, FwRev=51.0AB51, SerialNo=WD-WCAZA1022443
/dev/sdc:
Model=WDC WD20EARS-00MVWB0, FwRev=50.0AB50, SerialNo=WD-WMAZ20152590
/dev/sdd:
Model=WDC WD20EARS-00MVWB0, FwRev=50.0AB50, SerialNo=WD-WMAZ20188479
/dev/sde:
Model=SAMSUNG HD204UI, FwRev=1AQ10001, SerialNo=S2HGJ1RZ800964
/dev/sdf:
Model=WDC WD20EARS-00MVWB0, FwRev=51.0AB51, SerialNo=WD-WCAZA1000331
/dev/sdg:
Model=SAMSUNG HD204UI, FwRev=1AQ10001, SerialNo=S2HGJ1RZ800850
mdadm -D /dev/md0:
/dev/md0:
Version : 1.2
Creation Time : Tue Oct 19 08:58:41 2010
Raid Level : raid5
Array Size : 5851054080 (5580.00 GiB 5991.48 GB)
Used Dev Size : 1950351360 (1860.00 GiB 1997.16 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Fri Oct 29 15:42:42 2010
State : active, recovering
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 128K
Reshape Status : 57% complete
Delta Devices : 2, (4->6)
Name : ion:0 (local to host ion)
UUID : e6595c64:b3ae90b3:f01133ac:3f402d20
Events : 35293
Number Major Minor RaidDevice State
0 8 81 0 active sync /dev/sdf1
1 8 17 1 active sync /dev/sdb1
4 8 49 2 active sync /dev/sdd1
3 8 33 3 active sync /dev/sdc1
5 8 65 4 active sync /dev/sde1
6 8 97 5 active sync /dev/sdg1
Question 1: I saw that Linux 2.6.36 (perhaps earlier versions as
well) has the kernel config option CONFIG_MULTICORE_RAID456. I
tried enabling it, booted into 2.6.36 from 2.6.35, and the rebuild of
the array continued where it left off before the reboot. However, the
performance was abysmal: around 16 MB/s, compared to 70 MB/s with the
option turned off. Is this a bug, or is it because the Atom has no
grunt to speak of?
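(For anyone checking their own kernel: whether a build has the option can be read straight out of its config file. `check_opt` below is just a hypothetical one-line wrapper around grep; config file paths vary by distro.)

```shell
# check_opt NAME FILE -> prints "NAME=..." if the option is set in FILE.
check_opt() {
    grep "^$1=" "$2" 2>/dev/null
}

# Typical use against the running kernel's config, e.g.:
#   check_opt CONFIG_MULTICORE_RAID456 "/boot/config-$(uname -r)"
```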
Question 2: The array is now recovering since I've grown it to 6 from 4 devices:
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdf1[0] sde1[5] sdg1[6] sdc1[3] sdd1[4] sdb1[1]
5851054080 blocks super 1.2 level 5, 128k chunk, algorithm 2
[6/6] [UUUUUU]
[===========>.........] reshape = 57.7% (1126387328/1950351360)
finish=718.0min speed=19125K/sec
unused devices:
Is there a way to speed it up? /proc/sys/dev/raid/speed_limit_min is
100000 (100k), /proc/sys/dev/raid/speed_limit_max is 1000000 (1000k).
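The knobs themselves can be raised; a sketch (both values are in KiB/s and apply system-wide, and the numbers here are only examples, not a recommendation):

```shell
# Raise the md resync/reshape throughput floor and ceiling.
# Example values only.
echo 200000 > /proc/sys/dev/raid/speed_limit_min
echo 2000000 > /proc/sys/dev/raid/speed_limit_max

# Equivalent via sysctl:
sysctl -w dev.raid.speed_limit_min=200000
sysctl -w dev.raid.speed_limit_max=2000000
```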
Question 3: Before I created this RAID5 array I did a quick RAID0 test
array just for fun, using 2 full devices (not partitions). Now I have
this:
mdadm --examine --verbose --scan
ARRAY /dev/md/raid0-test level=raid0 metadata=1.2 num-devices=2
UUID=b84cc081:1ae27b49:d5ae466c:377ba300 name=ion:raid0-test
devices=/dev/sdf,/dev/sdb
ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=6
UUID=e6595c64:b3ae90b3:f01133ac:3f402d20 name=ion:0
devices=/dev/sdg1,/dev/sdf1,/dev/sde1,/dev/sdd1,/dev/sdc1,/dev/sdb1
Is it safe to erase the raid0-test superblocks on devices /dev/sdf and
/dev/sdb, or will it interfere with my RAID5 array (which is lying on
top of partitions)?
Many thanks,
// Mathias
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: mdadm / RAID, a few questions
on 29.10.2010 22:22:13 by John Robinson
On 29/10/2010 15:18, Mathias Burén wrote:
> Hi,
>
> I've a few questions in relation to mdadm and performance. System
> details follow below:
>
> Intel Atom 330 @ 1.6 GHz (dual-core, HT), 4 GB RAM
[...]
> Question 1: I saw that Linux 2.6.36 (perhaps earlier versions as
> well) has the kernel config option CONFIG_MULTICORE_RAID456. I
> tried enabling it, booted into 2.6.36 from 2.6.35, and the rebuild of
> the array continued where it left off before the reboot. However, the
> performance was abysmal: around 16 MB/s, compared to 70 MB/s with the
> option turned off. Is this a bug, or is it because the Atom has no
> grunt to speak of?
No, the performance of MULTICORE_RAID456 is abysmal on any CPU. It's an
experimental implementation that doesn't work terribly well. If you're
interested in developing, by all means help, but if not, turn it off.
> Question 2: The array is now recovering since I've grown it to 6 from
> 4 devices:
> $ cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : active raid5 sdf1[0] sde1[5] sdg1[6] sdc1[3] sdd1[4] sdb1[1]
>       5851054080 blocks super 1.2 level 5, 128k chunk, algorithm 2
> [6/6] [UUUUUU]
>       [===========>.........] reshape = 57.7% (1126387328/1950351360)
> finish=718.0min speed=19125K/sec
>
> unused devices:
>
> Is there a way to speed it up? /proc/sys/dev/raid/speed_limit_min is
> 100000 (100k), /proc/sys/dev/raid/speed_limit_max is 1000000 (1000k).
No, that's probably about right on something as weak as an Atom. Let it
run.
> Question 3: Before I created this RAID5 array I did a quick RAID0 test
> array just for fun, using 2 full devices (not partitions). Now I have
> this:
[...]
Sorry, I don't know the answer to this. I suspect it's to do with
superblock versions, but I don't know - I'm sorry that's not helpful.
Cheers,
John.
Re: mdadm / RAID, a few questions
on 29.10.2010 22:35:16 by mathias.buren
On 29 October 2010 21:22, John Robinson wrote:
> On 29/10/2010 15:18, Mathias Burén wrote:
>>
>> Hi,
>>
>> I've a few questions in relation to mdadm and performance. System
>> details follow below:
>>
>> Intel Atom 330 @ 1.6 GHz (dual-core, HT), 4 GB RAM
>
> [...]
>>
>> Question 1: I saw that Linux 2.6.36 (perhaps earlier versions as
>> well) has the kernel config option CONFIG_MULTICORE_RAID456. I
>> tried enabling it, booted into 2.6.36 from 2.6.35, and the rebuild of
>> the array continued where it left off before the reboot. However, the
>> performance was abysmal: around 16 MB/s, compared to 70 MB/s with the
>> option turned off. Is this a bug, or is it because the Atom has no
>> grunt to speak of?
>
> No, the performance of MULTICORE_RAID456 is abysmal on any CPU. It's an
> experimental implementation that doesn't work terribly well. If you're
> interested in developing, by all means help, but if not, turn it off.
>
>> Question 2: The array is now recovering since I've grown it to 6 from
>> 4 devices:
>> $ cat /proc/mdstat
>> Personalities : [raid6] [raid5] [raid4]
>> md0 : active raid5 sdf1[0] sde1[5] sdg1[6] sdc1[3] sdd1[4] sdb1[1]
>>       5851054080 blocks super 1.2 level 5, 128k chunk, algorithm 2
>> [6/6] [UUUUUU]
>>       [===========>.........] reshape = 57.7% (1126387328/1950351360)
>> finish=718.0min speed=19125K/sec
>>
>> unused devices:
>>
>> Is there a way to speed it up? /proc/sys/dev/raid/speed_limit_min is
>> 100000 (100k), /proc/sys/dev/raid/speed_limit_max is 1000000 (1000k).
>
> No, that's probably about right on something as weak as an Atom. Let it
> run.
>
>> Question 3: Before I created this RAID5 array I did a quick RAID0 test
>> array just for fun, using 2 full devices (not partitions). Now I have
>> this:
>
> [...]
>
> Sorry, I don't know the answer to this. I suspect it's to do with
> superblock versions, but I don't know - I'm sorry that's not helpful.
>
> Cheers,
>
> John.
>
Hi,
Thanks for the answers. I got the speed up to around 36 MB/s by
disabling NCQ and changing some cache values, so there's only about 36
minutes left now. I would love to help develop
MULTICORE_RAID456, but I'm not a coder. Patches welcome though!
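For the archive, tweaks of that kind usually look something like this (device names are taken from this thread; the exact values used aren't stated, so these are illustrative):

```shell
# Illustrative values only - the exact settings used are not in the thread.
# Enlarge the RAID5 stripe cache (the default is 256 pages):
echo 4096 > /sys/block/md0/md/stripe_cache_size
# Disable NCQ by dropping the queue depth to 1 on each member disk:
for d in sdb sdc sdd sde sdf sdg; do
    echo 1 > /sys/block/$d/device/queue_depth
done
```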
// Mathias
Re: mdadm / RAID, a few questions
on 29.10.2010 22:44:08 by NeilBrown
On Fri, 29 Oct 2010 15:18:09 +0100
Mathias Burén wrote:
> Question 3: Before I created this RAID5 array I did a quick RAID0 test
> array just for fun, using 2 full devices (not partitions). Now I have
> this:
>
> mdadm --examine --verbose --scan
> ARRAY /dev/md/raid0-test level=raid0 metadata=1.2 num-devices=2
> UUID=b84cc081:1ae27b49:d5ae466c:377ba300 name=ion:raid0-test
>    devices=/dev/sdf,/dev/sdb
> ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=6
> UUID=e6595c64:b3ae90b3:f01133ac:3f402d20 name=ion:0
>    devices=/dev/sdg1,/dev/sdf1,/dev/sde1,/dev/sdd1,/dev/sdc1,/dev/sdb1
>
> Is it safe to erase the raid0-test superblocks on devices /dev/sdf and
> /dev/sdb, or will it interfere with my RAID5 array (which is lying on
> top of partitions)?
It should be safe to
mdadm --zero-superblock /dev/sdf /dev/sdb
As you have 1.2 metadata, that info will be 4K from the start of the device.
Depending on how you partitioned the devices, that is either in dead space
between the partition table and the first partition, or it is in dead space
in the first partition just before the md metadata.
So the old metadata is still visible; creating the new arrays clearly didn't
over-write it, so they don't really care what is there...
That statement isn't 100% general. A block in the data area of the new array
could be unchanged by creating an array, yet changing it could still corrupt
parity. However you can be certain that the metadata for a whole-device
array does not lie in the data area for a partitioned array of the same
metadata type.
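A cautious way to apply this is to inspect before and after; running --examine on the whole device versus the partition distinguishes the two superblocks (a sketch, not a prescribed procedure):

```shell
mdadm --examine /dev/sdf    # whole device: shows the stale ion:raid0-test metadata
mdadm --zero-superblock /dev/sdf /dev/sdb
mdadm --examine /dev/sdf    # whole device: should now find no md superblock
mdadm --examine /dev/sdf1   # partition: the RAID5 member metadata must be untouched
```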
NeilBrown
Re: mdadm / RAID, a few questions
on 04.11.2010 21:02:10 by mathias.buren
Hi,
Thank you - it looks like it worked.
[root@ion ~]# mdadm --examine --verbose --scan
ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=6
UUID=e6595c64:b3ae90b3:f01133ac:3f402d20 name=ion:0
   devices=/dev/sdg1,/dev/sdf1,/dev/sde1,/dev/sdd1,/dev/sdc1,/dev/sdb1
Will run an fsck just in case.
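A read-only pass first is the careful order here (the filesystem type on /dev/md0 isn't stated in the thread, so this assumes one that fsck understands):

```shell
# -n: open the filesystem read-only and answer "no" to all repair prompts.
fsck -n /dev/md0
# Only after reviewing a clean (or understood) report, and with the
# filesystem unmounted, run fsck without -n to actually repair.
```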
Cheers,
// Mathias
On 29 October 2010 21:44, Neil Brown wrote:
> On Fri, 29 Oct 2010 15:18:09 +0100
> Mathias Burén wrote:
>
>> Question 3: Before I created this RAID5 array I did a quick RAID0 test
>> array just for fun, using 2 full devices (not partitions). Now I have
>> this:
>>
>> mdadm --examine --verbose --scan
>> ARRAY /dev/md/raid0-test level=raid0 metadata=1.2 num-devices=2
>> UUID=b84cc081:1ae27b49:d5ae466c:377ba300 name=ion:raid0-test
>>    devices=/dev/sdf,/dev/sdb
>> ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=6
>> UUID=e6595c64:b3ae90b3:f01133ac:3f402d20 name=ion:0
>>    devices=/dev/sdg1,/dev/sdf1,/dev/sde1,/dev/sdd1,/dev/sdc1,/dev/sdb1
>>
>> Is it safe to erase the raid0-test superblocks on devices /dev/sdf and
>> /dev/sdb, or will it interfere with my RAID5 array (which is lying on
>> top of partitions)?
>
> It should be safe to
>   mdadm --zero-superblock /dev/sdf /dev/sdb
>
> As you have 1.2 metadata, that info will be 4K from the start of the device.
> Depending on how you partitioned the devices, that is either in dead space
> between the partition table and the first partition, or it is in dead space
> in the first partition just before the md metadata.
>
> So the old metadata is still visible; creating the new arrays clearly didn't
> over-write it, so they don't really care what is there...
>
> That statement isn't 100% general. A block in the data area of the new array
> could be unchanged by creating an array, yet changing it could still corrupt
> parity. However you can be certain that the metadata for a whole-device
> array does not lie in the data area for a partitioned array of the same
> metadata type.
>
> NeilBrown
>