Raid 5 Array
on 02.04.2011 20:51:58 by marcus
I have a raid array; this is the second time an upgrade seems to have
corrupted it.
I get the following messages from dmesg when trying to mount the array:
[ 372.822199] RAID5 conf printout:
[ 372.822202] --- rd:3 wd:3
[ 372.822208] disk 0, o:1, dev:md0
[ 372.822212] disk 1, o:1, dev:sdb1
[ 372.822216] disk 2, o:1, dev:sdc1
[ 372.822305] md2: detected capacity change from 0 to 1000210300928
[ 372.823206] md2: p1
[ 410.783871] EXT4-fs (md2): Couldn't mount because of unsupported
optional features (3d1fc20)
[ 412.401534] EXT4-fs (md2): Couldn't mount because of unsupported
optional features (3d1fc20)
I originally had a raid0 md0 with two 160GB drives; a raid0 md1 made of
a 250GB drive and md0; and a raid 5 made of a 1.0TB drive, a 500GB
drive, and md1.
I swapped out md1 for a new 1TB drive, which worked. Then I dropped
the 500GB drive and combined it with the 250GB drive to make a 750GB drive.
The error seems to come when you reintroduce drives that were
previously in a raid array into a new raid array. This is the second
time I have ended up with the same problem.
Any suggestions on how to recover from this or is my only option to
reformat everything and start again?
Re: Raid 5 Array
on 02.04.2011 21:01:13 by Simon McNair
Hi,
I'm sure you've tried this, but do you use --zero-superblock before
moving disks over?
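For reference, something like this is what I mean (an untested sketch;
the device names are examples, adjust to your setup):

  # Stop the old array, then wipe the md superblock from each former
  # member so stale metadata is not auto-assembled into odd arrays later.
  mdadm --stop /dev/md1
  mdadm --zero-superblock /dev/sdb1 /dev/sdc1
  # Verify: this should now report "No md superblock detected".
  mdadm --examine /dev/sdb1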
Simon
On 02/04/2011 19:51, Marcus wrote:
> [...]
Re: Raid 5 Array
on 02.04.2011 22:09:48 by Simon McNair
cc'd the list back in as I'm not an md guru.
I did a search for mdadm raid 50 and this looked the most appropriate.
http://books.google.co.uk/books?id=DkonSDG8jUMC&pg=PT116&lpg=PT116&dq=mdadm+raid+50&source=bl&ots=Ekw6NCiXqR&sig=edBYg9Gtd5RXyuUU0PeSpHvS7pM&hl=en&ei=9YGXTYyeBcGFhQe90ojpCA&sa=X&oi=book_result&ct=result&resnum=5&ved=0CEIQ6AEwBA#v=onepage&q=mdadm%20raid%2050&f=false
Simon
On 02/04/2011 20:38, Marcus wrote:
> Yes, I used --zero-superblock this time. I think that was my problem
> last time: it kept detecting the drives at random and creating odd
> arrays. This time I am not sure what my problem is. I got two drives
> back up, so I have my data back, but I have tried twice so far to make
> the two raid0 drives part of the raid5, and each time fdisk -l shows
> the wrong sizes for the arrays once they are combined. The first time
> it showed the small array as 1TB, which is the size of the big array;
> the second time it showed the big array as 750GB, which is the size of
> the small array. Somehow joining the two arrays is corrupting the
> headers and reporting wrong information.
>
> Is there a proper procedure for creating a raid0 to put into a raid5?
> Last time I created my raid0 and added a partition to the arrays, and
> it automatically dropped the partition, showing md0 and md1 in the
> array instead of md0p1 and md1p1, which were the partitions I added.
> I have tried adding the partition into the array and I have also tried
> adding just the array into the array; neither method seems to be
> working this time.
>
> On Sat, Apr 2, 2011 at 12:01 PM, Simon McNair wrote:
>> [...]
Re: Raid 5 Array
on 02.04.2011 23:27:37 by Simon McNair
Marcus,
Please reply-all and keep the list in cc.
Please also post the commands you used to create the arrays, and the
fdisk output.
One other thing of note: when you have multiple arrays, I believe the
recommendation is to use mdadm.conf as a 'hint' file so that this
doesn't happen. Something like the sketch below is what I mean.
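A minimal sketch (the UUIDs below are placeholders; take the real ones
from mdadm --detail --scan):

  # /etc/mdadm/mdadm.conf -- pin each array to its members' UUID so
  # auto-assembly does not have to guess which device belongs where.
  # The ARRAY lines can be generated with:
  #   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  DEVICE partitions
  ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
  ARRAY /dev/md2 UUID=eeeeeeee:ffffffff:00000000:11111111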
On 2 Apr 2011, at 21:20, Marcus wrote:
> My raid is the opposite of that: I am putting raid0s into a raid5
> rather than raid5s into a raid0. But from the looks of what you have
> sent me, I am not supposed to add a partition to the array that is
> going into the main array?
>
> I guess I will play with it some more and hopefully I won't lose
> everything. I just don't like waiting 4 hours for it to rebuild the
> drive only to find out it doesn't work.
>
> Thanks for the help.
>
> On Sat, Apr 2, 2011 at 1:09 PM, Simon McNair wrote:
>> [...]
Re: Raid 5 Array
on 02.04.2011 23:45:58 by Simon McNair
One last thing... I've never heard of anyone using a raid 05. Why
wouldn't you use a RAID50? Can you dish the dirt on what benefit there
is? (I would have thought a raid50 would have been better, with no
disadvantages.) I thought that raid10 and raid50 were the main ones in
use in 'the industry'.
Please forgive me if I'm showing my ignorance.
Simon
On 2 Apr 2011, at 21:09, Simon McNair wrote:
> [...]
Re: Raid 5 Array
on 03.04.2011 00:01:13 by Roman Mamedov
On Sat, 2 Apr 2011 22:45:58 +0100
Simon Mcnair wrote:
> [...]
RAID5/6 with some RAID0 (or JBOD) members is what you use when you want to
include differently-sized devices into the array:
http://louwrentius.com/blog/2008/08/building-a-raid-6-array-of-mixed-drives/
--
With respect,
Roman
Re: Raid 5 Array
on 03.04.2011 00:04:31 by Roberto Spadim
Why use raid 5/6? Isn't raid1 more secure?
2011/4/2 Roman Mamedov :
> [...]
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
Re: Raid 5 Array
on 03.04.2011 01:06:58 by marcus
I am running only a raid 5. The raid 0s are there to make a number of
smaller drives act as one larger drive, because a raid 5 limits every
member to the size of its smallest member.
The original raid started off as a 26GB raid 5 with a 13GB, a 40GB, and
a 160GB drive, and I have grown it from there to its current size,
which is 1TB.
I bought another 1TB drive yesterday and am trying to combine a 500GB
and a 250GB drive to make a 750GB drive so I can push the raid up
again, this time to 1.5TB.
The last configuration was: raid0 md0, 320GB (160GB, 160GB); raid0 md1,
570GB (md0, 250GB); raid 5, 1TB (md1, 500GB, 1.0TB). It had been
extremely stable for the last 3 months but ran out of space.
The configuration I am trying to achieve is: raid0 md0, 750GB (250GB,
500GB); raid 5 md2, 1.5TB (md0, 1.0TB, 1.0TB). The arithmetic behind
this layout is spelled out below.
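To spell out the capacity arithmetic (as I understand md's raid 5
sizing, usable space is (members - 1) x smallest member):

  Without the raid0:  raid5(1.0TB, 500GB, 250GB)  = 2 x 250GB = 500GB usable
  With the raid0:     raid0(500GB, 250GB)         = 750GB
                      raid5(1.0TB, 1.0TB, 750GB)  = 2 x 750GB = 1.5TB usable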
This started out as an experiment to see if I could do a raid 5
system. It was originally built with drives I had lying around the
house. Now it is big enough that I have started buying drives for it.
I have gone through many configurations of extra drives to get it
where it is now. I have had one catastrophic failure since I started,
and that was the last time I made it bigger: I was running on two
drives, one of them being the md0-inside-md1 configuration, and mdadm
got confused and couldn't put md0 and md1 back together to give me two
working drives. I probably could have corrected the problem if I had
known what I know now, but as this is an experimental raid it is a
learning process.
The current problem I am having is that every time I try to add the
750GB raid device to the raid 5, it corrupts the headers and one of
the arrays reports the wrong size, which causes it not to mount. The
only way to correct the problem seems to be to unplug the two drives
that make up md0, reboot onto two drives, and start the process again.
I am currently working on my third attempt to integrate the 750GB raid
device (roughly the steps sketched below). Each attempt takes 4 hours
to rebuild, so it has been a long process. I haven't lost the data yet,
though, so I guess I will keep trying. Hopefully it won't be too
corrupt when I am done.
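For the record, the failing sequence looks roughly like this (a sketch;
the device names are examples from my setup):

  # Build the 750GB raid0 from the 250GB and 500GB drives
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
  # Partition it and add the partition to the degraded raid5; md then
  # rebuilds onto it, which is the ~4 hour wait mentioned above
  fdisk /dev/md0        # create md0p1 spanning the device
  mdadm --manage /dev/md2 --add /dev/md0p1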
On Sat, Apr 2, 2011 at 3:04 PM, Roberto Spadim wrote:
> [...]
Re: Raid 5 Array
on 03.04.2011 02:22:49 by marcus
Okay, it seems to work now. I destroyed md0 and recreated it, and then
just added it to the md2 array without doing any partitioning like I
did all the times before (roughly the commands below). Even when I
created my old array I partitioned it, but mdadm dropped the partition
automatically.
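For the archive, roughly what worked this time (a sketch; device names
are examples):

  mdadm --stop /dev/md0
  mdadm --zero-superblock /dev/sdb1 /dev/sdc1
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
  # Add the bare md0 device to md2 this time -- no partition table on it
  mdadm --manage /dev/md2 --add /dev/md0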
On Sat, Apr 2, 2011 at 4:06 PM, Marcus wrote:
> [...]
Re: Raid 5 Array
on 03.04.2011 08:41:50 by marcus
Okay, I have my raid extended to 1500.3GB; however, I can't seem to
grow the partition past 1TB. It will let me create a new partition, but
it won't let me make the current partition any bigger. Does anyone know
how to fix this?
On Sat, Apr 2, 2011 at 5:22 PM, Marcus wrote:
> [...]
Re: Raid 5 Array
on 03.04.2011 09:49:00 by NeilBrown
On Sat, 2 Apr 2011 23:41:50 -0700 Marcus wrote:
> [...]
Best to show exactly the command you use, exactly the results, and details
about the component devices (particularly size).
When using any mdadm command, add "-vv" to make it as verbose as possible.
Include kernel log messages (e.g. dmesg | tail -100)
Prefer to send too much info rather than not enough.
And just place it in-line in the email, no attachments, not 'pastebin' links.
NeilBrown
Re: Raid 5 Array
on 03.04.2011 10:02:40 by marcus
The file system is ext4. The current raid device is 1.5TB; the old
size was 1TB. I can create a new partition on the device, it just won't
let me resize the existing partition to a larger size. It seems to be
maxed out at 1TB for some reason.
mdstat shows 1465159552 blocks which is the new size.
fdisk -l shows: Disk /dev/md2: 1500.3 GB, 1500323381248 bytes, 2 heads,
4 sectors/track, 366289888 cylinders.
Current partition: /dev/md2p1  17  244191968  976767808  83  Linux
resize2fs -p /dev/md2 returns: nothing to do
Nothing is failing; it just seems to be at a maximum size. I also tried
resizing with parted, and it thinks 244191968 is the maximum, like
resize2fs does.
On Sun, Apr 3, 2011 at 12:49 AM, NeilBrown wrote:
> [...]
Re: Raid 5 Array
on 03.04.2011 13:01:38 by NeilBrown
On Sun, 3 Apr 2011 01:02:40 -0700 Marcus wrote:
> The file system is ext4. The current raid device is 1.5TB; the old
> size was 1TB. I can create a new partition on the device, it just
> won't let me resize the existing partition to a larger size. It seems
> to be maxed out at 1TB for some reason.
What is "it"? What command do you run? What output does it generate?
>
> mdstat shows 1465159552 blocks which is the new size.
Why didn't you just include the complete "cat /proc/mdstat"?
That would have been much more informative.
>
> fdisk -l shows: Disk /dev/md2: 1500.3 GB, 1500323381248 bytes, 2 heads,
> 4 sectors/track, 366289888 cylinders.
>
> Current partition: /dev/md2p1  17  244191968  976767808  83  Linux
>
> resize2fs -p /dev/md2 returns: nothing to do
>
Is this "it"?? Do you realise that you need to resize the device "/dev/md2"
before you can resize the filesystem that is stored in "/dev/md2".
> Nothing is failing it just seems to be at a max size. I also tried
> resizing with parted and it seems to think 244191968 is max like
> resize2fs does.
>
As you provided so little concrete details - despite me asking for lots -
I'll have to guess.
I guess that if you
mdadm -S /dev/md2
mdadm -A /dev/md2 --update=devicesize /dev/...list.of.devices
mdadm -G /dev/md2 --size=max
resize2fs /dev/md2
then it might work. Or maybe it'll corrupt everything. I cannot really be
sure because I am being forced to guess.
Commands like:
mdadm --examine /dev/*
mdadm --detail /dev/md*
cat /proc/partitions
cat /proc/mdstat
dmesg | tail -100
are the sort of things that are useful - not "I tried something and it didn't
work"...
NeilBrown
(sorry, but I get grumpy when people provide so little information).
>
> On Sun, Apr 3, 2011 at 12:49 AM, NeilBrown wrote:
> > [...]
Re: Raid 5 Array
on 03.04.2011 19:46:23 by marcus
I provided you all the relevant information. If you had paid attention
to the sizes, and to the fact that I stated I can add a new partition
to the device, you would have realized that I have already grown the
raid.
1465159552 raid size
976767808 partition size
See how partition is smaller than raid by about 500GB?
nexuslite@ubuntu:~$ resize2fs -p /dev/md2p1
resize2fs 1.41.11 (14-Mar-2010)
The filesystem is already 244191952 blocks long. Nothing to do!
That is the exact message resize2fs is returning. 244191968 is the
current end block of the partition; parted also shows 244191968 as the
maximum size for a partition. There are no related dmesg messages,
because it is not an error, just an undesired result.
mdstat
md0 : active raid0 sdb1[1] sdc1[0]
732579840 blocks 64k chunks
md2 : active raid5 md0[0] sde1[2] sdd1[1]
1465159552 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
I have grown this raid array before; it isn't like I am a newbie. I
just don't understand why the partition is stuck at 1TB. I keep reading
about 2TB limits but can't find anything relevant to the 1TB limit I am
experiencing.
On Sun, Apr 3, 2011 at 4:01 AM, NeilBrown wrote:
> [...]
Re: Raid 5 Array
on 03.04.2011 19:50:49 by Roman Mamedov
On Sun, 3 Apr 2011 10:46:23 -0700
Marcus wrote:
> I provided you all the relevant information. If you had paid attention
> to the sizes, and to the fact that I stated I can add a new partition
> to the device, you would have realized that I have already grown the
> raid.
>
> 1465159552 raid size
> 976767808 partition size
>
> See how partition is smaller than raid by about 500GB?
Then why not run "cfdisk /dev/md2" (I recommend the version from "GNU
fdisk"), notice that you have a 900GB partition there and 500GB of free
space, then resize the partition?
> nexuslite@ubuntu:~$ resize2fs -p /dev/md2p1
> resize2fs 1.41.11 (14-Mar-2010)
> The filesystem is already 244191952 blocks long. Nothing to do!
>
> That is the exact message resize2fs is returning. 244191968 is the
> current end block of the partition; parted also shows 244191968 as the
> maximum size for a partition. There are no related dmesg messages,
> because it is not an error, just an undesired result.
You don't seem to understand the difference between /dev/md2 and /dev/md2p1.
And also that resize2fs will not resize md2p1, it will only amend the
ext* filesystem so that it takes all of md2p1.
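In other words, the usual sequence after growing the array is something
like this (a sketch; keep the partition's start sector identical when
re-creating it, or the filesystem will be destroyed):

  # 1. Enlarge the partition entry: in fdisk, delete md2p1 and re-create
  #    it with the SAME start sector but a larger end, then write.
  fdisk /dev/md2
  # 2. Make the kernel re-read the partition table.
  partprobe /dev/md2
  # 3. Grow the filesystem -- on the partition device, not the array.
  resize2fs -p /dev/md2p1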
--
With respect,
Roman
Re: Raid 5 Array
on 03.04.2011 21:57:08 by Roman Mamedov
On Sun, 3 Apr 2011 11:56:29 -0700
Marcus wrote:
> Thanks! Resizing the partition first worked. I had to use fdisk
> instead of cfdisk, though, because my partition didn't start at the
> beginning of the device; it started 17 blocks in.
Use "Reply to all" properly in your client, don't just drop all "CC:" at
will, people who tried to help you might be interested to know that the
problem is solved and what was the solution.
>
> On Sun, Apr 3, 2011 at 10:50 AM, Roman Mamedov wrote:
> > [...]
--
With respect,
Roman