advice to low cost hardware raid (with mdadm)
on 15.09.2010 22:07:27 by Pol Hallen
Hello all :-)
I'm thinking about a low-cost RAID 6 setup (6 disks):

On the motherboard, 3 PCI controllers (sil3114
http://www.siliconimage.com/products/product.aspx?pid=28), each costing
about 10-15 euro,

and 2 disks per controller.

So I'd have 6 disks (RAID 6 with mdadm), and if a controller breaks the
RAID 6 array should still be intact.

Is this an acceptable setup, or am I overlooking something unexpected?

PS: my LAN doesn't need high performance.
thanks
Pol
Re: advice to low cost hardware raid (with mdadm)
on 15.09.2010 22:41:01 by Stan Hoeppner
Pol Hallen put forth on 9/15/2010 3:07 PM:
> Hello all :-)
>
> I'm thinking about a low-cost RAID 6 setup (6 disks):
>
> On the motherboard, 3 PCI controllers (sil3114
> http://www.siliconimage.com/products/product.aspx?pid=28), each costing
> about 10-15 euro,
>
> and 2 disks per controller.
>
> So I'd have 6 disks (RAID 6 with mdadm), and if a controller breaks the
> RAID 6 array should still be intact.
>
> Is this an acceptable setup, or am I overlooking something unexpected?
Is your goal strictly to build a RAID6 setup, or is this a means to an
end? If you're merely excited by the concept of RAID6, then this
hardware setup should be fine. With modern SATA drives, keep in mind
that any one of those six disks can nearly saturate the PCI bus. So
with 6 disks the whole array shares the bus's ~133 MB/s maximum data
rate, leaving each drive only about 1/6th of its potential throughput.
Most mid range mobos come with 4-6 SATA ports these days. You'd be
better off overall, performance wise and money spent, if you used 4 mobo
SATA ports connected to the same SATA chip (some come with multiple SATA
chips--you want all drives connected to the same chip) and RAID5 instead
of 6. You'd save the cost of 2 drives and 3 PCI SATA cards, which would
be enough to pay for the new mobo/CPU/RAM. You'd have far better
performance for the same money. With four SATA drives on a new mobo
with an AHCI chip you'd see over 400 MB/s, about 4 times that of the PCI
6 drive solution. You'd have one drive less worth of capacity.
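As a rough check of what you'd be leaving on the table (device names
below are just examples), hdparm measures a drive's buffered sequential
reads; on a shared PCI bus the per-drive numbers collapse once several
drives work at once:

  hdparm -t /dev/sdb   # buffered sequential read timing, one drive
  hdparm -t /dev/sdc   # repeat per drive; run two in parallel to see
                       # the shared-bus ceiling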
If I were you, I'd actually go with RAID 10 (1+0) over the 4 drives.
You only end up with 2 disks worth of capacity, but you'll get _much_
better performance, especially with writes. Additionally, in the event
of a disk failure, rebuilding a 6x1TB RAID5/6 array will take forever
and a day. With RAID 10 drive rebuilds are typically many many times
faster.
Get yourself a new AHCI mobo with 4 SATA ports on one chip, 4 x 1TB or
2TB 7.2k WD Blue drives, and configure them as a md RAID10. You'll get
great performance, fast rebuild times, 1 or 2 TB of capacity, and the
ability to sustain up to two drive failures, as long as they are not
members of the same mirror set.
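Something like this would do it (a sketch only, assuming the four drives
appear as /dev/sdb through /dev/sde; the mdadm.conf path varies by
distro):

  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mdadm --detail --scan >> /etc/mdadm.conf   # record the array
  cat /proc/mdstat                           # watch the initial sync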
--
Stan
Re: advice to low cost hardware raid (with mdadm)
on 15.09.2010 23:40:14 by Keld Simonsen
On Wed, Sep 15, 2010 at 03:41:01PM -0500, Stan Hoeppner wrote:
> Pol Hallen put forth on 9/15/2010 3:07 PM:
> > Hello all :-)
> >
> > I'm thinking about a low-cost RAID 6 setup (6 disks):
> >
> > On the motherboard, 3 PCI controllers (sil3114
> > http://www.siliconimage.com/products/product.aspx?pid=28), each costing
> > about 10-15 euro,
> >
> > and 2 disks per controller.
> >
> > So I'd have 6 disks (RAID 6 with mdadm), and if a controller breaks the
> > RAID 6 array should still be intact.
> >
> > Is this an acceptable setup, or am I overlooking something unexpected?
>
> Is your goal strictly to build a RAID6 setup, or is this a means to an
> end? If you're merely excited by the concept of RAID6, then this
> hardware setup should be fine. With modern SATA drives, keep in mind
> that any one of those six disks can nearly saturate the PCI bus. So
> with 6 disks the whole array shares the bus's ~133 MB/s maximum data
> rate, leaving each drive only about 1/6th of its potential throughput.
>
> Most mid range mobos come with 4-6 SATA ports these days. You'd be
> better off overall, performance wise and money spent, if you used 4 mobo
> SATA ports connected to the same SATA chip (some come with multiple SATA
> chips--you want all drives connected to the same chip) and RAID5 instead
> of 6. You'd save the cost of 2 drives and 3 PCI SATA cards, which would
> be enough to pay for the new mobo/CPU/RAM. You'd have far better
> performance for the same money. With four SATA drives on a new mobo
> with an AHCI chip you'd see over 400 MB/s, about 4 times that of the PCI
> 6 drive solution. You'd have one drive less worth of capacity.
>
> If I were you, I'd actually go with RAID 10 (1+0) over the 4 drives.
> You only end up with 2 disks worth of capacity, but you'll get _much_
> better performance, especially with writes. Additionally, in the event
> of a disk failure, rebuilding a 6x1TB RAID5/6 array will take forever
> and a day. With RAID 10 drive rebuilds are typically many many times
> faster.
>
> Get yourself a new AHCI mobo with 4 SATA ports on one chip, 4 x 1TB or
> 2TB 7.2k WD Blue drives, and configure them as a md RAID10. You'll get
> great performance, fast rebuild times, 1 or 2 TB of capacity, and the
> ability to sustain up to two drive failures, as long as they are not
> members of the same mirror set.
I concur with much of what Stan writes. If at all possible, use the
SATA ports on the motherboard. Or buy a new motherboard; some come with
8 SATA ports for not a big extra cost. These ports are connected to the
south bridge, often with 20 Tbit/s or more, while a controller on a
32-bit PCI bus only delivers 1 Tbit.
For the RAID type: raid5 and raid6 do have good performance for
sequential read and write, while random access is mediocre. raid10 in
the Linux sense (not raid1+0) gives good performance: almost raid0
sequential read performance with the raid10,f2 layout.
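For example, creating the far-layout variant looks like this (a sketch
with hypothetical device names; f2 means the "far" layout with 2 copies):

  mdadm --create /dev/md0 --level=10 --layout=f2 \
        --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde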
best regards
keld
Re: advice to low cost hardware raid (with mdadm)
on 16.09.2010 00:03:11 by Pol Hallen
First of all: sorry for my English :-P
> With four SATA drives on a new mobo
> with an AHCI chip you'd see over 400 MB/s, about 4 times that of the PCI
> 6 drive solution. You'd have one drive less worth of capacity.
Is 400 MB/s possible because the integrated controller on the mobo
reaches that speed? So is that hardware RAID (no need for mdadm)? What
happens if the controller breaks?
> additionally, in the event
> of a disk failure, rebuilding a 6x1TB RAID5/6 array will take forever
> and a day.
a nightmare...
Many thanks for your reasoning... I don't have enough experience with
RAID and friends!
Pol
Re: advice to low cost hardware raid (with mdadm)
on 16.09.2010 00:25:02 by Stan Hoeppner
Keld Jørn Simonsen put forth on 9/15/2010 4:40 PM:
> These ports are connected to the
> south bridge, often with 20 Tbit/s or more, while a controller on a
> 32-bit PCI bus only delivers 1 Tbit.
Tbit/s for 32/33 PCI? I think you're a little high... by a factor of
1000, Keld. :)

The chipset-to-processor interface on modern systems is typically around
8-16 GB/s, or 64-128 Gb/s. That's still a factor of 10x lower than 1
Tbit/s, so you're high by a factor of 20x.
Why, may I ask, are you quoting serial data rates for parallel buses?
I've only seen inter-chip bandwidth quoted in serial rates on
communications gear. Everyone else quotes parallel data rates for
board-level communication paths -- Bytes/s not bits/sec. You must work
for Ericsson. ;)
Regardless of bit rate values, we agree on the important part, for the
most part. :)
--
Stan
Re: advice to low cost hardware raid (with mdadm)
on 16.09.2010 01:56:29 by Stan Hoeppner
Pol Hallen put forth on 9/15/2010 5:03 PM:
> First of all: sorry for my English :-P
>
>> With four SATA drives on a new mobo
>> with an AHCI chip you'd see over 400 MB/s, about 4 times that of the PCI
>> 6 drive solution. You'd have one drive less worth of capacity.
>
> Is 400 MB/s possible because the integrated controller on the mobo
> reaches that speed?
And more. 4 x 7.2k RPM SATA drives will do ~400 MB/s. The Intel H55
mobo chipset has 6 (integrated) SATA2 ports for a total of 1800 MB/s.
The limitation in your initial example is the standard (32 bit/33 MHz)
PCI bus, which can only do 132 MB/s, and all PCI slots in the system
share that bandwidth. The more cards you add, the less bandwidth each
card gets. In your example, your 3 PCI SATA cards would only have 44
MB/s each, or 22 MB/s per drive. Each drive can do about 100 MB/s, so
you're strangling them to about 1/5th of their potential. If you ever had
to do a rebuild of a RAID5/6 array with 6 1TB drives, it would take
_days_ to complete. Heck, the initial md array build would take days.
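While a resync runs you can watch and throttle it through the standard
md interfaces (the numbers below are illustrative):

  cat /proc/mdstat                            # progress and current speed
  sysctl dev.raid.speed_limit_min             # floor, KB/s per device
  sysctl -w dev.raid.speed_limit_max=200000   # raise the ceiling on an
                                              # otherwise idle box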
PCI Express x1 v1 cards can do 250 MB/s PER SLOT, x4 cards 1000 MB/s PER
SLOT, x8 cards 2000 MB/s PER SLOT, x16 cards 4000 MB/s. If you already
have two PCI Express x1 slots on your current mobo, you should simply
get two of these cards and connect two drives to each, and build a RAID10
or RAID5. This method produces no bottleneck, as these cards can do 250
MB/s each, or 125 MB/s per drive:
http://www.sybausa.com/productInfo.php?iid=878
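To verify what link a card actually negotiated, lspci shows the PCIe
link capability and status (the 02:00.0 address is just an example):

  lspci | grep -i sata              # find the controller's bus address
  lspci -vv -s 02:00.0 | grep Lnk   # LnkCap/LnkSta: lane width and speed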
> So is that hardware RAID (no need for mdadm)?

For real hardware RAID you will need to spend a minimum of about USD
$300 on a PCIe card with 128MB of RAM and a RAID chip. Motherboards do
NOT come with real hardware RAID. They come with FakeRAID, which you do
NOT want to use. Use Linux mdraid instead. For someone strictly
mirroring a drive on a workstation to protect against drive failure,
FakeRAID may be an OK solution. Don't use it for anything else.
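If you want to check whether a disk already carries FakeRAID (BIOS RAID)
or md metadata before handing it to md, something like this works (a
sketch; dmraid may not be installed by default, and the device name is
an example):

  dmraid -r                  # list disks with BIOS RAID metadata, if any
  mdadm --examine /dev/sdb   # show any md superblock on the disk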
> What happens if the controller
> breaks?

For this to occur, the south bridge chip on your mobo would have to fail.
If it fails, your whole mobo has failed. It can happen, but how often?
Buy a decent quality mobo--Intel, SuperMicro, Asus, ECS, GigaByte,
Biostar, etc--and you don't have to worry about it.
>> additionally, in the event
>> of a disk failure, rebuilding a 6x1TB RAID5/6 array will take forever
>> and a day.
>
> a nightmare...
Yes, indeed. Again, if you use 3 regular PCI cards, it will take
_FOREVER_ to rebuild the array. If you use a new mobo with SATA ports
or PCIe x1 cards, the rebuild will be much much faster. Don't get me
wrong, rebuilding an mdraid array of 6x1TB disks will still take a
while, but it will take at least 5-6 times longer using regular PCI SATA
cards.
> Many thanks for your reasoning... I don't have enough experience with
> RAID and friends!
You're very welcome. Glad I was able to help a bit.
--
Stan
Re: advice to low cost hardware raid (with mdadm)
on 16.09.2010 14:05:14 by Keld Simonsen
On Wed, Sep 15, 2010 at 05:25:02PM -0500, Stan Hoeppner wrote:
> Keld Jørn Simonsen put forth on 9/15/2010 4:40 PM:
> > These ports are connected to the
> > south bridge, often with 20 Tbit/s or more, while a controller on a
> > 32-bit PCI bus only delivers 1 Tbit.
>
> Tbit/s for 32/33 PCI? I think you're a little high... by a factor of
> 1000, Keld. :)
Yes, you are right. I meant Gbit/s.
> Why, may I ask, are you quoting serial data rates for parallel buses?
> I've only seen inter-chip bandwidth quoted in serial rates on
> communications gear. Everyone else quotes parallel data rates for
> board-level communication paths -- Bytes/s not bits/sec. You must work
> for Ericsson. ;)
Because I have seen specs given in bit/s, e.g. southbridge speeds and
speeds of PCIe (2.5 Gbit/s per lane). I agree that people probably want
speeds in MB/s.
> Regardless of bit rate values, we agree on the important part, for the
> most part. :)
I think so, too.
keld
Re: advice to low cost hardware raid (with mdadm)
am 17.09.2010 00:41:29 von Michal Soltys
On 10-09-16 00:03, Pol Hallen wrote:
>
>> additionally, in the event
>> of a disk failure, rebuilding a 6x1TB RAID5/6 array will take forever
>> and a day.
>
> a nightmare...
>
> Many thanks for your reasoning... I don't have enough experience with
> RAID and friends!
>
> Pol
One remark: write-intent bitmaps will make that perfectly manageable
(orders of magnitude faster resyncs). I'm not sure how the feature
looks from a performance point of view these days, though.
Re: advice to low cost hardware raid (with mdadm)
on 17.09.2010 02:42:08 by John Robinson
On 16/09/2010 23:41, Michal Soltys wrote:
> On 10-09-16 00:03, Pol Hallen wrote:
>>
>>> additionally, in the event
>>> of a disk failure, rebuilding a 6x1TB RAID5/6 array will take forever
>>> and a day.
>>
>> a nightmare...
>>
>> Many thanks for your reasoning... I don't have enough experience with
>> RAID and friends!
>>
>> Pol
>
> One remark: write-intent bitmaps will make that perfectly manageable
> (orders of magnitude faster resyncs). I'm not sure how the feature
> looks from a performance point of view these days, though.
It should be configured with a sensible block size; the default block
size is usually too small and spoils performance. I chose a 16MB write
intent bitmap block size after experimenting a while ago (have a look
for posts on the subject from me), on the basis that larger sizes gave
diminishing returns and sizes nearer the default impacted badly on
write performance. Others have gone as big as 128MB (again, see the
archives). The default, while it depends on the array size and metadata
version, often damages write performance for only a small benefit in
recovery time.

In short, the default write intent bitmap block size works but is liable
to be suboptimal, so you should consider tuning it rather than taking
the default.
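For an existing array, the chunk can only be changed by removing and
re-adding the bitmap; a sketch (device name is an example; mdadm takes
the chunk in kilobytes, so 16384 = 16MB):

  mdadm --grow /dev/md0 --bitmap=none            # drop the old bitmap
  mdadm --grow /dev/md0 --bitmap=internal --bitmap-chunk=16384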
Cheers,
John.
Re: advice to low cost hardware raid (with mdadm)
on 17.09.2010 06:38:59 by Stan Hoeppner
Michal Soltys put forth on 9/16/2010 5:41 PM:
> On 10-09-16 00:03, Pol Hallen wrote:
>>
>>> additionally, in the event
>>> of a disk failure, rebuilding a 6x1TB RAID5/6 array will take forever
>>> and a day.
>>
>> a nightmare...
>>
>> Many thanks for your reasoning... I don't have enough experience with
>> RAID and friends!
>>
>> Pol
>
> One remark: write-intent bitmaps will make that perfectly manageable
> (orders of magnitude faster resyncs). I'm not sure how the feature
> looks from a performance point of view these days, though.
My "forever and a day" RAID6 build/rebuild comment may have been taken
out of context. The original post was inquiring about running a
six-disk SATA md RAID6 array on a standard 132 MB/s PCI bus, using
three 2-port PCI SATA cards on the single shared PCI bus. My comments
were focused on
the PCI bottleneck in such a setup causing dreadful array performance.
--
Stan