Which Disks can fail?
on 21.06.2011 12:24:20 by Jonathan Tripathy
Hi Everyone,
Using md's "single process" RAID10 with the standard near layout (which is
apparently the same as RAID1+0 in industry), which 2 drives could fail
without losing the array?
This is what I have:
Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
1 8 21 1 active sync /dev/sdb5
2 8 37 2 active sync /dev/sdc5
3 8 53 3 active sync /dev/sdd5
Thanks
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Which Disks can fail?
on 21.06.2011 12:45:09 by NeilBrown
On Tue, 21 Jun 2011 11:24:20 +0100 Jonathan Tripathy
wrote:
> Hi Everyone,
>
> Using md's "single process" RAID10 with the standard near layout (which is
> apparently the same as RAID1+0 in industry), which 2 drives could fail
> without losing the array?
>
> This is what I have:
>
> Number Major Minor RaidDevice State
> 0 8 5 0 active sync /dev/sda5
> 1 8 21 1 active sync /dev/sdb5
> 2 8 37 2 active sync /dev/sdc5
> 3 8 53 3 active sync /dev/sdd5
>
> Thanks
Run
man 4 md
search for "RAID10"
read what you find, and if it doesn't make sense, ask again.
If it does make sense, post your answer and feel free to ask for
confirmation.
NeilBrown
Re: Which Disks can fail?
on 21.06.2011 12:56:41 by Jonathan Tripathy
On 21/06/2011 11:45, NeilBrown wrote:
> On Tue, 21 Jun 2011 11:24:20 +0100 Jonathan Tripathy
> wrote:
>
>> Hi Everyone,
>>
>> Using md's "single process" RAID10 with the standard near layout (which is
>> apparently the same as RAID1+0 in industry), which 2 drives could fail
>> without losing the array?
>>
>> This is what I have:
>>
>> Number Major Minor RaidDevice State
>> 0 8 5 0 active sync /dev/sda5
>> 1 8 21 1 active sync /dev/sdb5
>> 2 8 37 2 active sync /dev/sdc5
>> 3 8 53 3 active sync /dev/sdd5
>>
>> Thanks
> Run
>
> man 4 md
>
> search for "RAID10"
>
> read what you find, and if it doesn't make sense, ask again.
> If it does make sense, post your answer and feel free to ask for
> confirmation.
>
>
> NeilBrown
Sorry, it still doesn't make much sense to me I'm afraid.
In fact, it's confused me more - since I'm using "near", does that mean
that the "copy" (I'm using near=2) of a given chunk may lie on the same
disk, leading to *no redundancy*??
Thanks
Re: Which Disks can fail?
on 21.06.2011 13:42:25 by NeilBrown
On Tue, 21 Jun 2011 11:56:41 +0100 Jonathan Tripathy
wrote:
>
> On 21/06/2011 11:45, NeilBrown wrote:
> > On Tue, 21 Jun 2011 11:24:20 +0100 Jonathan Tripathy
> > wrote:
> >
> >> Hi Everyone,
> >>
> >> Using md's "single process" RAID10 with the standard near layout (which is
> >> apparently the same as RAID1+0 in industry), which 2 drives could fail
> >> without losing the array?
> >>
> >> This is what I have:
> >>
> >> Number Major Minor RaidDevice State
> >> 0 8 5 0 active sync /dev/sda5
> >> 1 8 21 1 active sync /dev/sdb5
> >> 2 8 37 2 active sync /dev/sdc5
> >> 3 8 53 3 active sync /dev/sdd5
> >>
> >> Thanks
> > Run
> >
> > man 4 md
> >
> > search for "RAID10"
> >
> > read what you find, and if it doesn't make sense, ask again.
> > If it does make sense, post your answer and feel free to ask for
> > confirmation.
> >
> >
> > NeilBrown
> Sorry, it still doesn't make much sense to me I'm afraid.
>
> In fact, it's confused me more - since I'm using "near", does that mean
> that the "copy" (I'm using near=2) of a given chunk may lie on the same
> disk, leading to *no redundancy*??
Clearly I need to improve the man page... (suggestions welcome).
How do you read it that the copies of a given chunk may lie on the same disk?
I read:
When 'near' replicas are chosen, the multiple copies of a given chunk
are laid out consecutively across the stripes of the array, so the two
copies of a datablock will likely be at the same offset on two adjacent
devices.
"laid out consecutively across the stripes of the array" might be a bit
obscure.. A stripe is one chunk on each device, so when chunks a laid out
consecutively across a stripe, they would be one chunk per device.
Then "likely be at the same offset on two adjacent devices" should make this
clearer. It is only "likely" because if you have an odd number of devices,
then the 2 copies of one chunk could be
a/ at offset X on the last device
b/ at offset X+chunk on the first device
but in general, they are on "adjacent devices"
So in answer to your original question, sda5 and sdb5 will have the same
data, and sdc5 and sdd5 will also have the same data.
NeilBrown
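[Editor's sketch: the "near" placement Neil describes can be modelled in a few lines of Python. This is a simplified illustration, not md's actual code, and the function name `near_layout` is invented for this example.]

```python
# Simplified model of md RAID10 "near" placement (an illustration,
# not md's real implementation). Copies of each chunk are laid out
# consecutively across the stripe: one chunk per device, wrapping
# to the next offset when we run off the end of the device list.

def near_layout(n_devices, n_copies, n_chunks):
    """Return {chunk: [(device, offset), ...]} for a near layout."""
    placement = {}
    slot = 0  # running position in the device/offset grid
    for chunk in range(n_chunks):
        copies = []
        for _ in range(n_copies):
            copies.append((slot % n_devices, slot // n_devices))
            slot += 1
        placement[chunk] = copies
    return placement

# 4 devices, near=2: copies land on adjacent device pairs,
# i.e. devices (0, 1) mirror each other, as do (2, 3).
print(near_layout(4, 2, 2))

# 3 devices, near=2: chunk 1 wraps, so its copies sit at offset X
# on the last device and offset X+1 on the first device - the
# "odd number of devices" case described above.
print(near_layout(3, 2, 2))
```

With four devices this reproduces the conclusion above: RaidDevices 0 and 1 (sda5/sdb5) hold the same data, as do 2 and 3 (sdc5/sdd5).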
Re: Which Disks can fail?
on 21.06.2011 13:57:22 by Jonathan Tripathy
On 21/06/2011 12:42, NeilBrown wrote:
> On Tue, 21 Jun 2011 11:56:41 +0100 Jonathan Tripathy
> wrote:
>
>> On 21/06/2011 11:45, NeilBrown wrote:
>>> On Tue, 21 Jun 2011 11:24:20 +0100 Jonathan Tripathy
>>> wrote:
>>>
>>>> Hi Everyone,
>>>>
>>>> Using md's "single process" RAID10 with the standard near layout (which is
>>>> apparently the same as RAID1+0 in industry), which 2 drives could fail
>>>> without losing the array?
>>>>
>>>> This is what I have:
>>>>
>>>> Number Major Minor RaidDevice State
>>>> 0 8 5 0 active sync /dev/sda5
>>>> 1 8 21 1 active sync /dev/sdb5
>>>> 2 8 37 2 active sync /dev/sdc5
>>>> 3 8 53 3 active sync /dev/sdd5
>>>>
>>>> Thanks
>>> Run
>>>
>>> man 4 md
>>>
>>> search for "RAID10"
>>>
>>> read what you find, and if it doesn't make sense, ask again.
>>> If it does make sense, post your answer and feel free to ask for
>>> confirmation.
>>>
>>> NeilBrown
>> Sorry, it still doesn't make much sense to me I'm afraid.
>>
>> In fact, it's confused me more - since I'm using "near", does that mean
>> that the "copy" (I'm using near=2) of a given chunk may lie on the same
>> disk, leading to *no redundancy*??
> Clearly I need to improve the man page... (suggestions welcome).
>
> How do you read it that the copies of a given chunk may lie on the same disk?
> I read:
>
>     When 'near' replicas are chosen, the multiple copies of a given chunk
>     are laid out consecutively across the stripes of the array, so the two
>     copies of a datablock will likely be at the same offset on two adjacent
>     devices.
>
> "laid out consecutively across the stripes of the array" might be a bit
> obscure.. A stripe is one chunk on each device, so when chunks are laid out
> consecutively across a stripe, they would be one chunk per device.
>
> Then "likely be at the same offset on two adjacent devices" should make this
> clearer. It is only "likely" because if you have an odd number of devices,
> then the 2 copies of one chunk could be
> a/ at offset X on the last device
> b/ at offset X+chunk on the first device
>
> but in general, they are on "adjacent devices"
>
> So in answer to your original question, sda5 and sdb5 will have the same
> data, and sdc5 and sdd5 will also have the same data.
>
> NeilBrown
>
Hi Neil,
It was the lines in the "far" section that made me have my doubts:
"When 'far' replicas are chosen, the multiple copies of a
given chunk are laid out quite distant from each other. The
first copy of all data blocks will be striped across the early
part of all drives in RAID0 fashion, and then the next copy
of all blocks will be striped across a later section of all
drives, *always ensuring that all copies of any given block are
on different drives.*"
The highlighted part made me think that there would be a chance that
chunks would be on the same drive in near.
Thanks
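[Editor's sketch: the "far" layout quoted above can be modelled the same way. Again this is a simplified illustration, not md's code; `far2_layout` and its `far_offset` parameter are invented for this example.]

```python
# Simplified model of md RAID10 "far=2" placement (an illustration,
# not md's real implementation). The first copy is striped across
# the drives in RAID0 fashion; the second copy is striped across a
# later section of the drives, rotated by one device, so the two
# copies of any block always land on different drives.

def far2_layout(n_devices, n_chunks, far_offset):
    """Return {chunk: [(device, offset), (device, offset)]}."""
    placement = {}
    for chunk in range(n_chunks):
        stripe, dev = divmod(chunk, n_devices)
        first = (dev, stripe)                              # early section
        second = ((dev + 1) % n_devices, far_offset + stripe)  # later section
        placement[chunk] = [first, second]
    return placement

layout = far2_layout(4, 8, far_offset=100)
# Every chunk's two copies are on different drives, matching the
# "always ensuring that all copies ... are on different drives" text.
assert all(a[0] != b[0] for a, b in layout.values())
```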
Re: Which Disks can fail?
on 21.06.2011 17:31:49 by Jonathan Tripathy
On 21/06/2011 12:42, NeilBrown wrote:
> On Tue, 21 Jun 2011 11:56:41 +0100 Jonathan Tripathy
> wrote:
>
>> On 21/06/2011 11:45, NeilBrown wrote:
>>> On Tue, 21 Jun 2011 11:24:20 +0100 Jonathan Tripathy
>>> wrote:
>>>
>>>> Hi Everyone,
>>>>
>>>> Using md's "single process" RAID10 with the standard near layout (which is
>>>> apparently the same as RAID1+0 in industry), which 2 drives could fail
>>>> without losing the array?
>>>>
>>>> This is what I have:
>>>>
>>>> Number Major Minor RaidDevice State
>>>> 0 8 5 0 active sync /dev/sda5
>>>> 1 8 21 1 active sync /dev/sdb5
>>>> 2 8 37 2 active sync /dev/sdc5
>>>> 3 8 53 3 active sync /dev/sdd5
>>>>
>>>> Thanks
>>> Run
>>>
>>> man 4 md
>>>
>>> search for "RAID10"
>>>
>>> read what you find, and if it doesn't make sense, ask again.
>>> If it does make sense, post your answer and feel free to ask for
>>> confirmation.
>>>
>>>
>>> NeilBrown
>> Sorry, it still doesn't make much sense to me I'm afraid.
>>
>> In fact, it's confused me more - since I'm using "near", does that mean
>> that the "copy" (I'm using near=2) of a given chunk may lie on the same
>> disk, leading to *no redundancy*??
> Clearly I need to improve the man page... (suggestions welcome).
>
> How do you read it that the copies of a given chunk may lie on the same disk?
> I read:
>
> When 'near' replicas are chosen, the multiple copies of a given chunk
> are laid out consecutively across the stripes of the array, so the two
> copies of a datablock will likely be at the same offset on two adjacent
> devices.
>
> "laid out consecutively across the stripes of the array" might be a bit
> obscure.. A stripe is one chunk on each device, so when chunks are laid out
> consecutively across a stripe, they would be one chunk per device.
>
> Then "likely be at the same offset on two adjacent devices" should make this
> clearer. It is only "likely" because if you have an odd number of devices,
> then the 2 copies of one chunk could be
> a/ at offset X on the last device
> b/ at offset X+chunk on the first device
>
> but in general, they are on "adjacent devices"
>
> So in answer to your original question, sda5 and sdb5 will have the same
> data, and sdc5 and sdd5 will also have the same data.
>
> NeilBrown
>
Thanks for your help, Neil :)
So, just to confirm: 2 drives could fail in my array, as long as the two
drives weren't sda5 and sdb5, or sdc5 and sdd5. Is that correct?
Thanks
Re: Which Disks can fail?
on 23.06.2011 09:02:19 by NeilBrown
On Tue, 21 Jun 2011 16:31:49 +0100 Jonathan Tripathy
wrote:
> So, just to confirm: 2 drives could fail in my array, as long as the two
> drives weren't sda5 and sdb5, or sdc5 and sdd5. Is that correct?
Correct.
NeilBrown
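[Editor's sketch: a quick enumeration makes the same point, assuming the mirror pairs established earlier in the thread (sda5/sdb5 and sdc5/sdd5). This is an illustrative check, not output from md.]

```python
from itertools import combinations

# Mirror pairs in this 4-device near=2 array, per the thread above.
mirror_pairs = [{"sda5", "sdb5"}, {"sdc5", "sdd5"}]
disks = ["sda5", "sdb5", "sdc5", "sdd5"]

# The array is lost only when a two-disk failure takes out both
# members of one mirror pair; the other four pairings survive.
for failed in combinations(disks, 2):
    lost = set(failed) in mirror_pairs
    print(sorted(failed), "ARRAY LOST" if lost else "array survives")
```

Of the six possible two-disk failures, four leave a working array and two (each complete mirror pair) destroy it.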
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html