Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
on 18.02.2011 21:55:05 by Larry Schwerzler
I have a few questions about my raid array that I haven't been able to
find definitive answers for, so I thought I would ask here.
My setup:
* 8x 1TB drives in an external enclosure connected to my server via 2
esata cables.
* Currently all 8 drives are included in a raid 6 array.
* I use the array to serve files (mostly large .mkv/iso (several GB)
and .flac/.mp3 (5-50MB) files) over my network via NFS, and to perform
offsite backup of another server via rsync over ssh.
* This is a system in my home, so prolonged downtime, while annoying,
is not the end of the world.
* If it matters, Ubuntu 10.04 64-bit server is my distro.
I'm considering, and will likely move forward with, moving my data and
rebuilding the array as raid10. Just a few questions before I make the
switch.
Questions:
1. In my research of raid10 I very seldom hear of drive configurations
with more drives than 4; are there special considerations with having
an 8 drive raid10 array? I understand that I'll be losing 2TB of
space from my current setup, but I'm not too worried about that.
2. One problem I'm having with my current setup is that the esata cables
have been knocked loose, which effectively drops 4 of my drives. I'd
really like to be able to survive this type of sudden drive loss. If
my drives are /dev/sd[abcdefgh] and abcd are on one esata channel
while efgh are on the other, what drive order should I create the
array with? I'd guess /dev/sd[aebfcgdh]; would that give me
survivability if one of my esata channels went dark?
3. One of the concerns I have with raid10 is expandability, and I'm
glad to see reshaping raid10 as an item on the 2011 roadmap :) However,
it will likely be a while before I see that ability in my distro. I did
find a guide on expanding raid size when using lvm: you increase the
size of each drive and create two partitions, one the size of the
original drive and one with the remainder of the new space. Once you
have done this for all drives, you create a new raid10 array on the
2nd partitions of all the drives and add it to the lvm volume group;
effectively you have two raid10 arrays, one on the first half of the
drives and one on the 2nd half, with the space pooled together (a rough
command sketch is below, after question 4). I'm sure many of you are
familiar with this scenario, but I'm wondering if it could be
problematic: is having two raid10 arrays on one drive an issue?
4. Part of the reason I'm wanting to switch is because of information
I read on the "BAARF" site pointing out some of the issues in the
parity raids that people sometimes don't think about.
(site: http://www.miracleas.com/BAARF/BAARF2.html) A lot of the
information on the site is a few years old now, and given how fast
things can change and the fact that I have not found many people
complaining about the parity raids, I'm wondering if some/all of the
gotchas they list are less of an issue now? Maybe my reasons for
moving to raid10 are no longer relevant?
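Regarding question 3, the guide's procedure boils down to roughly this
(from memory and untested; the device names, array names and the volume
group / logical volume names are just placeholders):
  # swap in a bigger drive, partition it so sdX1 matches the old size
  # and sdX2 holds the remainder, then let the existing array rebuild
  mdadm /dev/md0 --add /dev/sda1       # repeat per drive, waiting for each resync
  # once all 8 drives are done, build a second raid10 on the new partitions
  mdadm --create /dev/md1 --level=10 --layout=n2 --raid-devices=8 /dev/sd[a-h]2
  # pool the two arrays with lvm (assumes the filesystem already sits on lvm)
  pvcreate /dev/md1
  vgextend vg_media /dev/md1
  lvextend -l +100%FREE /dev/vg_media/lv_media
  resize2fs /dev/vg_media/lv_media     # or xfs_growfs if it's xfs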
Thank you in advance for any/all information given. And a big thank
you to Neil and the other developers of linux-raid for their efforts
on this great tool.
Larry Schwerzler
Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
on 19.02.2011 00:44:24 by Stan Hoeppner
Larry Schwerzler put forth on 2/18/2011 2:55 PM:
> 1. In my research of raid10 I very seldom hear of drive configurations
> with more drives than 4; are there special considerations with having
> an 8 drive raid10 array? I understand that I'll be losing 2TB of
> space from my current setup, but I'm not too worried about that.
This is because Linux mdraid is most popular with the hobby crowd, not
business, and most folks in this segment aren't running more than 4
drives in a RAID 10. For business solutions using embedded Linux and
mdraid, mdraid is typically hidden from the user who isn't going to be
writing posts on the net about mdraid. He calls his vendor for support.
In a nutshell, that's why you see little or no posts about mdraid 10
arrays larger than 4 drives.
> 2. One problem I'm having with my current setup is the esata cables
> have been knocked loose which effectively drops 4 of my drives. I'd
> really like to be able to survive this type of sudden drive loss. if
Solve the problem then--quit kicking the cables, or secure them in a
manner that they can't be kicked loose. Or buy a new chassis that can
hold all drives internally. Software cannot solve or work around this
problem. This is actually quite silly to ask. Similarly, would you ask
your car manufacturer to build a car that floats and has a propeller,
because you keep driving off the road into ponds?
> my drives are /dev/sd[abcdefgh] and abcd are on one esata channel
> while efgh are on the other, what drive order should I create the
> array with? I'd guess /dev/sd[aebfcgdh]; would that give me
> survivability if one of my esata channels went dark?
On a cheap SATA PCIe card, if one channel goes, they both typically go,
as it's a single chip solution and the PHYs are built into the chip.
However, given your penchant for kicking cables out of their ports, you
might physically damage the connector. So you might want to create the
layout so your mirror pairs are on opposite ports.
> 3. One of the concerns I have with raid10 is expandability, and I'm
> glad to see reshaping raid10 as an item on the 2011 roadmap :) However,
> it will likely be a while before I see that ability in my distro. I did
> find a guide on expanding raid size when using lvm: you increase the
> size of each drive and create two partitions, one the size of the
> original drive and one with the remainder of the new space. Once you
> have done this for all drives, you create a new raid10 array on the
> 2nd partitions of all the drives and add it to the lvm volume group;
> effectively you have two raid10 arrays, one on the first half of the
> drives and one on the 2nd half, with the space pooled together. I'm
> sure many of you are familiar with this scenario, but I'm wondering
> if it could be problematic: is having two raid10 arrays on one drive
> an issue?
Reshaping requires that you have a full, good backup for when it all goes
wrong. Most home users don't keep backups. If you kick the cable
during a reshape you may hose everything and have to start over from
scratch. If you don't, won't, or can't keep a regular full backup, then
don't do a reshape. Simply add new drives, create a new mdraid if you
like, make a filesystem, and mount it somewhere. Others will likely
give different advice. If you need to share it via samba or nfs, create
another share. For those who like everything in one "tree", you can
simply create a new directory "inside" your current array filesystem and
mount the new one there. Unix is great like this. Many Linux newbies
forget this capability, or never learned it.
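Something along these lines (just a sketch; the device names, mount
point and filesystem type are examples, adjust to taste):
  mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[i-l]
  mkfs.xfs /dev/md1                  # or mkfs.ext4, whatever you already use
  mkdir /srv/array/more              # a directory inside the existing array's fs
  mount /dev/md1 /srv/array/more
  # add a matching line to /etc/fstab so it survives a reboot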
> 4. Part of the reason I'm wanting to switch is because of information
> I read on the "BAARF" site pointing out some of the issues in the
> parity raids that people sometimes don't think about.
> (site: http://www.miracleas.com/BAARF/BAARF2.html) A lot of the
> information on the site is a few years old now, and given how fast
> things can change and the fact that I have not found many people
> complaining about the parity raids, I'm wondering if some/all of the
> gotchas they list are less of an issue now? Maybe my reasons for
> moving to raid10 are no longer relevant?
You need to worry far more about your cabling situation. Kicking a
cable out is what can/will cause data loss. At this point that is far
more detrimental to you than the RAID 5/6 invisible data loss issue.
Always fix the big problems first. The RAID level you use is the least
of your problems right now.
--
Stan
Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
on 19.02.2011 01:54:39 by Keld Simonsen
On Fri, Feb 18, 2011 at 05:44:24PM -0600, Stan Hoeppner wrote:
> Larry Schwerzler put forth on 2/18/2011 2:55 PM:
>
> > 1. In my research of raid10 I very seldom hear of drive configurations
> > with more drives than 4; are there special considerations with having
> > an 8 drive raid10 array? I understand that I'll be losing 2TB of
> > space from my current setup, but I'm not too worried about that.
>
> This is because Linux mdraid is most popular with the hobby crowd, not
> business, and most folks in this segment aren't running more than 4
> drives in a RAID 10. For business solutions using embedded Linux and
> mdraid, mdraid is typically hidden from the user who isn't going to be
> writing posts on the net about mdraid. He calls his vendor for support.
> In a nutshell, that's why you see little or no posts about mdraid 10
> arrays larger than 4 drives.
well on https://raid.wiki.kernel.org/index.php/Performance
there are several performance reports with 6 or 10 spindles, so there...
For an 8 drive Linux MD raid10 maybe you should consider a motherboard
with 8 sata ports.
Best regards
keld
Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
on 19.02.2011 02:12:44 by Joe Landman
On 02/18/2011 03:55 PM, Larry Schwerzler wrote:
[...]
> Questions:
>
> 1. In my research of raid10 I very seldom hear of drive configurations
> with more drives than 4; are there special considerations with having
> an 8 drive raid10 array? I understand that I'll be losing 2TB of
> space from my current setup, but I'm not too worried about that.
If you are going to set this up, I'd suggest a few things.
1st: try to use a PCI HBA with enough ports, not the motherboard ports.
2nd: eSATA is probably not a good idea (see your issue below).
3rd: I'd suggest getting 10 drives and using 2 as hot spares. Again,
not using eSATA. Use an internal PCIe card that provides a reasonable
chip. If you can't house the drives internal to your machine, get a x4
or x8 JBOD/RAID canister connected by one (or possibly two) SAS cables.
But seriously, lose the eSATA setup.
>
> 2. One problem I'm having with my current setup is that the esata cables
> have been knocked loose, which effectively drops 4 of my drives. I'd
> really like to be able to survive this type of sudden drive loss. If
> my drives are /dev/sd[abcdefgh] and abcd are on one esata channel
> while efgh are on the other, what drive order should I create the
> array with? I'd guess /dev/sd[aebfcgdh]; would that give me
> survivability if one of my esata channels went dark?
Usually the on-board eSATA chips are very low cost, low bandwidth units.
Spend another $150-200 on a dual external SAS HBA, and get the JBOD
container.
>
> 3. One of the concerns I have with raid10 is expandability, and I'm
> glad to see reshaping raid10 as an item on the 2011 roadmap :) However,
> it will likely be a while before I see that ability in my distro. I did
> find a guide on expanding raid size when using lvm: you increase the
> size of each drive and create two partitions, one the size of the
> original drive and one with the remainder of the new space. Once you
> have done this for all drives, you create a new raid10 array on the
> 2nd partitions of all the drives and add it to the lvm volume group;
> effectively you have two raid10 arrays, one on the first half of the
> drives and one on the 2nd half, with the space pooled together. I'm
> sure many of you are familiar with this scenario, but I'm wondering
> if it could be problematic: is having two raid10 arrays on one drive
> an issue?
We'd recommend against this. Too much seeking.
>
> 4. Part of the reason I'm wanting to switch is because of information
> I read on the "BAARF" site pointing out some of the issues in the
> parity raids that people sometimes don't think about.
> (site: http://www.miracleas.com/BAARF/BAARF2.html) A lot of the
> information on the site is a few years old now, and given how fast
> things can change and the fact that I have not found many people
> complaining about the parity raids, I'm wondering if some/all of the
> gotchas they list are less of an issue now? Maybe my reasons for
> moving to raid10 are no longer relevant?
Things have gotten worse. The BERs are improving a bit (most reasonable
SATA drives now report 1E-15 as their rate, compared with 1E-14
previously). Remember, 2TB = 1.6E13 bits. So 10x 2TB drives together is
1.6E14 bits. 8 scans or rebuilds will get you to a statistical near
certainty of hitting an unrecoverable error.
RAID6 buys you a little more time than RAID5, but you still have worries
due to the time-correlated second drive failure. Google found a peak at
1000s after the first drive failure (which likely corresponds to an
error on rebuild). With RAID5, that second error is the end of your
data. With RAID6, you still have a fighting chance at recovery.
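Back-of-envelope, assuming independent bit errors (my own arithmetic,
not taken from the BAARF site):
  at 1E-14: expected UREs per full pass over 1.6E14 bits
            = 1E-14 * 1.6E14 = 1.6, so P(at least one) = 1 - e^-1.6, about 80%;
            over 8 passes that's ~12.8 expected errors, essentially certain
  at 1E-15: ~0.16 expected per pass (roughly a 15% chance each time),
            but still ~72% over 8 passes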
> Thank you in advance for any/all information given. And a big thank
> you to Neil and the other developers of linux-raid for their efforts
> on this great tool.
Despite the occasional protestations to the contrary, MD raid is a
robust and useful RAID layer, and not a "hobby" layer. We use it
extensively, as do many others.
--
Joe Landman
landman@scalableinformatics.com
Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
on 19.02.2011 02:33:24 by Larry Schwerzler
Joe, thanks for the info, response/questions inline.
On Fri, Feb 18, 2011 at 5:12 PM, Joe Landman wrote:
> On 02/18/2011 03:55 PM, Larry Schwerzler wrote:
>
> [...]
>
>> Questions:
>>
>> 1. In my research of raid10 I very seldom hear of drive configurations
>> with more drives than 4; are there special considerations with having
>> an 8 drive raid10 array? I understand that I'll be losing 2TB of
>> space from my current setup, but I'm not too worried about that.
>
> If you are going to set this up, I'd suggest a few things.
>
> 1st: try to use a PCI HBA with enough ports, not the motherboard ports.
I use the SANS DIGITAL HA-DAT-4ESPCIE PCI-Express x8 SATA II card with
the SANS DIGITAL TR8M-B 8 Bay SATA to eSATA (Port Multiplier) JBOD
Enclosure, so I'm most of the way there, just esata instead of sas. I
didn't realize that the esata connections had issues like this, else I
would have avoided it, though at the time the extra cost of a sas card
that could expand to a total of 16 external hard drives would have
been prohibitive.
>
> 2nd: eSATA is probably not a good idea (see your issue below).
>
> 3rd: I'd suggest getting 10 drives and using 2 as hot spares. Again,
> not using eSATA. Use an internal PCIe card that provides a reasonable
> chip. If you can't house the drives internal to your machine, get a x4
> or x8 JBOD/RAID canister connected by one (or possibly two) SAS cables.
> But seriously, lose the eSATA setup.
I may see about getting an extra drive or two to act as hot spares.
>
>>
>> 2. One problem I'm having with my current setup is that the esata cables
>> have been knocked loose, which effectively drops 4 of my drives. I'd
>> really like to be able to survive this type of sudden drive loss. If
>> my drives are /dev/sd[abcdefgh] and abcd are on one esata channel
>> while efgh are on the other, what drive order should I create the
>> array with? I'd guess /dev/sd[aebfcgdh]; would that give me
>> survivability if one of my esata channels went dark?
>
> Usually the on-board eSATA chips are very low cost, low bandwidth units.
> Spend another $150-200 on a dual external SAS HBA, and get the JBOD
> container.
I'd be interested in any specific recommendations anyone might have
for a $200 or so card and jbod enclosure that could house at least 8
drives. Off list is fine, so as to not spam the list.
I have zero experience with SAS, does it not experience the issues
that my esata setup runs into?
>
>>
>> 3. One of the concerns I have with raid10 is expandability, and I'm
>> glad to see reshaping raid10 as an item on the 2011 roadmap :) However,
>> it will likely be a while before I see that ability in my distro. I did
>> find a guide on expanding raid size when using lvm: you increase the
>> size of each drive and create two partitions, one the size of the
>> original drive and one with the remainder of the new space. Once you
>> have done this for all drives, you create a new raid10 array on the
>> 2nd partitions of all the drives and add it to the lvm volume group;
>> effectively you have two raid10 arrays, one on the first half of the
>> drives and one on the 2nd half, with the space pooled together. I'm
>> sure many of you are familiar with this scenario, but I'm wondering
>> if it could be problematic: is having two raid10 arrays on one drive
>> an issue?
>
> We'd recommend against this. Too much seeking.
So the raid10 expansion solution is again to wait for raid10 reshaping
in the mdraid tools, or start from scratch.
I thought that maybe with LVM, since it wouldn't be striping the data
across the arrays, it would mostly be accessing the info from one
array at a time. I don't know enough about the way that lvm stores the
data to know for sure, though.
>
>>
>> 4. Part of the reason I'm wanting to switch is because of information
>> I read on the "BAARF" site pointing out some of the issues in the
>> parity raids that people sometimes don't think about.
>> (site: http://www.miracleas.com/BAARF/BAARF2.html) A lot of the
>> information on the site is a few years old now, and given how fast
>> things can change and the fact that I have not found many people
>> complaining about the parity raids, I'm wondering if some/all of the
>> gotchas they list are less of an issue now? Maybe my reasons for
>> moving to raid10 are no longer relevant?
>
> Things have gotten worse. The BERs are improving a bit (most reasonable
> SATA drives now report 1E-15 as their rate, compared with 1E-14
> previously). Remember, 2TB = 1.6E13 bits. So 10x 2TB drives together is
> 1.6E14 bits. 8 scans or rebuilds will get you to a statistical near
> certainty of hitting an unrecoverable error.
>
> RAID6 buys you a little more time than RAID5, but you still have worries
> due to the time-correlated second drive failure. Google found a peak at
> 1000s after the first drive failure (which likely corresponds to an error
> on rebuild). With RAID5, that second error is the end of your data. With
> RAID6, you still have a fighting chance at recovery.
>
This is what really scares me; it seems like a false sense of security
as your drive size increases. Hoping for a better chance with raid10.
>
>> Thank you in advance for any/all information given. And a big thank
>> you to Neil and the other developers of linux-raid for their efforts
>> on this great tool.
>
> Despite the occasional protestations to the contrary, MD raid is a robust
> and useful RAID layer, and not a "hobby" layer. We use it extensively, as
> do many others.
>
>
> --
> Joe Landman
> landman@scalableinformatics.com
>
Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
on 19.02.2011 02:50:47 by Larry Schwerzler
On Fri, Feb 18, 2011 at 3:44 PM, Stan Hoeppner wrote:
> Larry Schwerzler put forth on 2/18/2011 2:55 PM:
>
>> 1. In my research of raid10 I very seldom hear of drive configurations
>> with more drives than 4; are there special considerations with having
>> an 8 drive raid10 array? I understand that I'll be losing 2TB of
>> space from my current setup, but I'm not too worried about that.
>
> This is because Linux mdraid is most popular with the hobby crowd, not
> business, and most folks in this segment aren't running more than 4
> drives in a RAID 10. For business solutions using embedded Linux and
> mdraid, mdraid is typically hidden from the user who isn't going to be
> writing posts on the net about mdraid. He calls his vendor for support.
> In a nutshell, that's why you see little or no posts about mdraid 10
> arrays larger than 4 drives.
>
Gotcha, so no specific issues. Thanks.
>> 2. One problem I'm having with my current setup is the esata cables
>> have been knocked loose which effectively drops 4 of my drives. I'd
>> really like to be able to survive this type of sudden drive loss. if
>
> Solve the problem then--quit kicking the cables, or secure them in a
> manner that they can't be kicked loose. Or buy a new chassis that can
> hold all drives internally. Software cannot solve or work around this
> problem. This is actually quite silly to ask. Similarly, would you ask
> your car manufacturer to build a car that floats and has a propeller,
> because you keep driving off the road into ponds?
I'm working on securing the cables, but sometimes there are things
beyond your control, and I'd like to protect against a possible issue
rather than just throw up my hands and say, well, this won't work, I
obviously need a whole new setup. If I can get some of the protection
from mdraid, awesome; if not, well, at least I'll know.
Your example is a bit off; it would be more like asking my car
manufacturer if the big button that says "float" could be used for
when I occasionally drive into ponds.
I'm not asking anyone to change the code just to protect me from my
poor buying choices, just wondering if the tool has the ability to
help me.
>
>> my drives are /dev/sd[abcdefgh] and abcd are on one esata channel
>> while efgh are on the other, what drive order should I create the
>> array with? I'd guess /dev/sd[aebfcgdh]; would that give me
>> survivability if one of my esata channels went dark?
>
> On a cheap SATA PCIe card, if one channel goes, they both typically go,
> as it's a single chip solution and the PHYs are built into the chip.
> However, given your penchant for kicking cables out of their ports, you
> might physically damage the connector. So you might want to create the
> layout so your mirror pairs are on opposite ports.
>
Not sure if I have a cheap esata card (SANS DIGITAL HA-DAT-4ESPCIE
PCI-Express x8 SATA II) but when one of the cables has come out the
drives on the other cable work fine, so I'd guess my chipset doesn't
fall into that scenario.
I for sure want to create the pairs on opposite ports, but I was
unclear what drive order during the create procedure would actually
do that given an f2 layout.
>> 3. One of the concerns I have with raid10 is expandability, and I'm
>> glad to see reshaping raid10 as an item on the 2011 roadmap :) However,
>> it will likely be a while before I see that ability in my distro. I did
>> find a guide on expanding raid size when using lvm: you increase the
>> size of each drive and create two partitions, one the size of the
>> original drive and one with the remainder of the new space. Once you
>> have done this for all drives, you create a new raid10 array on the
>> 2nd partitions of all the drives and add it to the lvm volume group;
>> effectively you have two raid10 arrays, one on the first half of the
>> drives and one on the 2nd half, with the space pooled together. I'm
>> sure many of you are familiar with this scenario, but I'm wondering
>> if it could be problematic: is having two raid10 arrays on one drive
>> an issue?
>
> Reshaping requires that you have a full, good backup for when it all goes
> wrong. Most home users don't keep backups. If you kick the cable
> during a reshape you may hose everything and have to start over from
> scratch. If you don't, won't, or can't keep a regular full backup, then
> don't do a reshape. Simply add new drives, create a new mdraid if you
> like, make a filesystem, and mount it somewhere. Others will likely
> give different advice. If you need to share it via samba or nfs, create
> another share. For those who like everything in one "tree", you can
> simply create a new directory "inside" your current array filesystem and
> mount the new one there. Unix is great like this. Many Linux newbies
> forget this capability, or never learned it.
>
I understand reshaping is tricky, and I do keep backups of the
critical data. But much of my data are movies that I own and use to
play over the network for my home media server. I don't back these up
because if I lose them all I just get to spend a lot of evenings
re-ripping the movies, which sucks but isn't as bad as losing the
photos etc.
Without the LVM raid expansion solution, the expansion for me looks
like this: buy another jbod raid enclosure that holds 8 drives (or get
another computer case that holds 8 drives + system HD + dvd drive and
another mobo that can support 10 sata devices), set up the 8 new drives,
copy the data from the old drives, retire the old drives, sell the
extra jbod enclosure.
I was hoping to have the same effect without buying the extra jbod
enclosure, but raid10 can't reshape yet.
>> 4. Part of the reason I'm wanting to switch is because of information
>> I read on the "BAARF" site pointing out some of the issues in the
>> parity raids that people sometimes don't think about.
>> (site: http://www.miracleas.com/BAARF/BAARF2.html) A lot of the
>> information on the site is a few years old now, and given how fast
>> things can change and the fact that I have not found many people
>> complaining about the parity raids, I'm wondering if some/all of the
>> gotchas they list are less of an issue now? Maybe my reasons for
>> moving to raid10 are no longer relevant?
>
> You need to worry far more about your cabling situation. Kicking a
> cable out is what can/will cause data loss. At this point that is far
> more detrimental to you than the RAID 5/6 invisible data loss issue.
>
> Always fix the big problems first. The RAID level you use is the least
> of your problems right now.
>
> --
> Stan
>
Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
on 19.02.2011 02:53:06 by Larry Schwerzler
2011/2/18 Keld Jørn Simonsen:
> On Fri, Feb 18, 2011 at 05:44:24PM -0600, Stan Hoeppner wrote:
>> Larry Schwerzler put forth on 2/18/2011 2:55 PM:
>>
>> > 1. In my research of raid10 I very seldom hear of drive configurations
>> > with more drives than 4; are there special considerations with having
>> > an 8 drive raid10 array? I understand that I'll be losing 2TB of
>> > space from my current setup, but I'm not too worried about that.
>>
>> This is because Linux mdraid is most popular with the hobby crowd, not
>> business, and most folks in this segment aren't running more than 4
>> drives in a RAID 10. For business solutions using embedded Linux and
>> mdraid, mdraid is typically hidden from the user who isn't going to be
>> writing posts on the net about mdraid. He calls his vendor for support.
>> In a nutshell, that's why you see little or no posts about mdraid 10
>> arrays larger than 4 drives.
>
> well on https://raid.wiki.kernel.org/index.php/Performance
> there are several performance reports with 6 or 10 spindles, so there...
>
> For an 8 drive Linux MD raid10 maybe you should consider a motherboard
> with 8 sata ports.
>
While I have considered getting a new case that can hold 8 drives +
system drive + cd rom, I always had trouble finding them. There are no
doubt better setups than mine, but I'm trying to not buy new hardware
if I can get away with it.
> Best regards
> keld
>
Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
on 19.02.2011 04:59:53 by NeilBrown
On Fri, 18 Feb 2011 12:55:05 -0800 Larry Schwerzler wrote:
> 2. One problem I'm having with my current setup is that the esata cables
> have been knocked loose, which effectively drops 4 of my drives. I'd
> really like to be able to survive this type of sudden drive loss. If
> my drives are /dev/sd[abcdefgh] and abcd are on one esata channel
> while efgh are on the other, what drive order should I create the
> array with? I'd guess /dev/sd[aebfcgdh]; would that give me
> survivability if one of my esata channels went dark?
Yes, in md/raid10 the multiple copies are on 'adjacent' devices (in the
sequence given to --create).
Of course, you wouldn't actually use the string /dev/sd[aebfcgdh],
as that expands its matches in alphabetical order:
$ echo /dev/sd[aebfcgdh]
/dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
Instead use this:
$ echo /dev/sd{a,e,b,f,c,g,d,h}
/dev/sda /dev/sde /dev/sdb /dev/sdf /dev/sdc /dev/sdg /dev/sdd /dev/sdh
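So the create command would look something like this (a syntax sketch
only; pick your own chunk size, and use --layout=f2 instead of n2 if you
want the far layout; verify the resulting copy placement on a scratch
array before trusting it with data):
$ mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=8 \
      /dev/sd{a,e,b,f,c,g,d,h}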
NeilBrown
Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
on 19.02.2011 05:33:39 by Stan Hoeppner
Larry Schwerzler put forth on 2/18/2011 7:53 PM:
> While I have considered getting a new case that can hold 8 drives +
> system drive + cd rom, I always had trouble finding them. There are no
> doubt better setups than mine, but I'm trying to not buy new hardware
> if I can get away with it.
Are you mechanically inclined in the slightest? You can fix the "cable
kick" problem for less than $5 with these:
http://www.lowes.com/ProductDisplay?partNumber=292685-1781-45-1MBUVL&langId=-1&storeId=10151&productId=3128405&catalogId=10051&cmRelshp=rel&rel=nofollow&cId=PDIO1
and these:
http://www.lowes.com/pd_220871-1781-45-311UVL_0__?productId=3128261&Ntt=cable+tie&pl=1&currentURL=%2Fpl__0__s%3FVa%3Dtrue%26Ntt%3Dcable%2Btie
and have most of them left over for other uses. You'll get strain
relief and kick protection, especially if you use two on each chassis.
Though, if you are actually kicking or tripping over the cable, you'll
simply end up jerking your equipment off the table and damaging it,
instead of just having the eSATA plug pop out.
I'm really curious to understand why/how your cables are exposed to
"kicking" or other detachment due to accidental contact.
--
Stan
Re: Raid selection questions (10 vs 6, n2 vs f2) on an 8 drive array
on 20.02.2011 10:57:26 by Simon McNair
Sorry, I can't help responding to this. I love any post that goes
back to cable ties. Get as techy as you like, behind the scenes there
WILL be cable ties (or posh Velcro ties, my personal favourite)
somewhere holding the whole kit and caboodle together ;-)
Just an off topic attempt at humour :-)
Simon
2011/2/19 Stan Hoeppner:
> Larry Schwerzler put forth on 2/18/2011 7:53 PM:
>
>> While I have considered getting a new case that can hold 8 drives +
>> system drive + cd rom, I always had trouble finding them. There are no
>> doubt better setups than mine, but I'm trying to not buy new hardware
>> if I can get away with it.
>
> Are you mechanically inclined in the slightest? You can fix the "cable
> kick" problem for less than $5 with these:
>
> http://www.lowes.com/ProductDisplay?partNumber=292685-1781-45-1MBUVL&langId=-1&storeId=10151&productId=3128405&catalogId=10051&cmRelshp=rel&rel=nofollow&cId=PDIO1
>
> and these:
>
> http://www.lowes.com/pd_220871-1781-45-311UVL_0__?productId=3128261&Ntt=cable+tie&pl=1&currentURL=%2Fpl__0__s%3FVa%3Dtrue%26Ntt%3Dcable%2Btie
>
> and have most of them left over for other uses. You'll get strain
> relief and kick protection, especially if you use two on each chassis.
> Though, if you are actually kicking or tripping over the cable, you'll
> simply end up jerking your equipment off the table and damaging it,
> instead of just having the eSATA plug pop out.
>
> I'm really curious to understand why/how your cables are exposed to
> "kicking" or other detachment due to accidental contact.
>
> --
> Stan