Green drives and RAID arrays with parity
on 25.09.2011 13:02:09 by lists
Hi guys.
I was wondering if it's safe to use so called green drives in RAID 5 or
RAID 6 with md?
Drives such as the Seagate Barracuda® Green 2TB ST2000DL003 or Western
Digital Caviar® Green™ 2TB WD20EARX have TLER off, and reading misc
forums I understand that it would be a good idea to buy drives with that
option available.
On the other hand some people claim that Linux/Solaris/BSD based
software RAID may be able to work just fine with these drives.
--
Marcin M. Jessa
Re: Green drives and RAID arrays with parity
on 25.09.2011 13:12:55 by mathias.buren
On 25 September 2011 12:02, Marcin M. Jessa wrote:
> Hi guys.
>
> I was wondering if it's safe to use so called green drives in RAID 5 or
> RAID 6 with md?
> Drives such as the Seagate Barracuda® Green 2TB ST2000DL003 or Western
> Digital Caviar® Green™ 2TB WD20EARX have TLER off, and reading misc
> forums I understand that it would be a good idea to buy drives with that
> option available.
> On the other hand some people claim that Linux/Solaris/BSD based
> software RAID may be able to work just fine with these drives.
>
> --
> Marcin M. Jessa
My 5x WD20EARS and 2x HD204UI (I think they're called? Samsung 2TB)
are "green drives" and they work fine in a RAID6 setup.
/M
Re: Green drives and RAID arrays with parity
on 25.09.2011 13:43:36 by Stan Hoeppner
On 9/25/2011 6:02 AM, Marcin M. Jessa wrote:
> Hi guys.
>
> I was wondering if it's safe to use so called green drives in RAID 5 or
> RAID 6 with md?
> Drives such as the Seagate Barracuda® Green 2TB ST2000DL003 or Western
> Digital Caviar® Green™ 2TB WD20EARX have TLER off, and reading misc
> forums I understand that it would be a good idea to buy drives with that
> option available.
> On the other hand some people claim that Linux/Solaris/BSD based
> software RAID may be able to work just fine with these drives.
That won't make a lick of difference if you attach them to a crappy
HBA/RAID card or mobo down chip, and/or with a buggy driver, or plug
them into a crappy backplane. Note the recent discussion of the
SuperMicro/Marvell HBA, mvsas driver problems.
When you have a problem such as yours, and you ask for help on this, or
any other Linux kernel list, it's a really good idea to post all of the
relevant information up front. Why? Because most often when drives
drop out of arrays it is not because a disk failed or the disk has
buggy firmware. It's most often because of problems elsewhere in the
storage stack, either hardware or software.
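For example, something like the following usually covers the basics (an
illustrative sketch; /dev/md0 and /dev/sdX are placeholders for your own
devices):
  cat /proc/mdstat                    # array state and which members dropped
  mdadm --detail /dev/md0             # event counts, failed/spare devices
  smartctl -a /dev/sdX                # SMART health of each member drive
  dmesg | grep -iE 'ata|md'           # recent SATA/md kernel messages
  uname -a                            # kernel version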
Crappy HBAs and/or drivers, loose or dislodged cable connectors, and
crappy active/passive backplanes are the primary movers when it comes
to good drives dropping out of arrays.
That said, assuming you have a good SAS/SATA ASIC/driver combo and
stable backplane, etc, I'd say buy the WD RE4 or Seagate Constellation
ES as they have 5 year warranties, TLER, all the good stuff for RAID
use. Which is why they are sold as "enterprise" drives and cost more
than consumer cheapos like the various Green "drives". Both the RE4 2TB
and Constellation ES 2TB are $200 each at Newegg. Unless you actually
*need* that much total space, I'd go with the 1TB models, paying ~half
the cost of the 2TB drives. The 1TB Constellation ES is $110. So with
5 drives you'll save almost $500 on drives, with 4TB usable space with
RAID5.
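Spelling out the arithmetic behind that (a quick sketch, using the
prices quoted above):
  # RAID5 usable space = (N - 1) * drive size; savings = N * price difference
  echo $(( (5 - 1) * 1 ))        # 4 TB usable from five 1TB drives
  echo $(( 5 * (200 - 110) ))    # $450 saved buying 1TB instead of 2TB models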
Again, I implore you to investigate all other portions of your storage
stack before blowing money on drives, which may not be the cause of
your problem.
--
Stan
Re: Green drives and RAID arrays with parity
on 25.09.2011 16:28:16 by lists
On 9/25/11 1:43 PM, Stan Hoeppner wrote:
[...]
>
> When you have a problem such as yours, and you ask for help on this, or
> any other Linux kernel list, it's a really good idea to post all of the
> relevant information up front. Why? Because most often when drives drop
> out of arrays it is not because a disk failed or the disk has buggy
> firmware. It's most often because of problems elsewhere in the storage
> stack, either hardware or software.
I wasn't sure which information I should attach and I did not want to
spam the list. I was hoping someone would tell me if some of the
relevant information was missing so I could send it when needed.
Could you please tell me what kind of data was missing?
> Crappy HBAs and/or drivers, loose or dislodged cable connectors, and
> crappy active/passive backplanes are the primary movers when it comes to
> good drives dropping out of arrays.
In my case I don't use any HW RAID.
My motherboard is a MSI 870A-G54 -
http://www.msi.com/product/mb/870A-G54.html and I only use SATA and
software RAID.
> Again, I implore you to investigate all other portions of your storage
> stack before blowing money on drives, which may not be the cause of your
> problem.
It's really hard to find the source of the failure.
My first assumption was the drives, since I have 5 more (different) HDs
connected to the board, two in RAID 1 (ATA drives) and they all work
flawlessly.
Reading feedback from all the people complaining about the same issue
with the Seagate drives I have, I automatically assumed there was a
problem with these particular disks.
--
Marcin M. Jessa
Re: Green drives and RAID arrays with parity
on 25.09.2011 16:41:02 by Joe Landman
On 09/25/2011 07:02 AM, Marcin M. Jessa wrote:
> Hi guys.
>
> I was wondering if it's safe to use so called green drives in RAID 5 or
> RAID 6 with md?
You might need to turn off TLER and their power spin-down features, but
in general, they *should* work ok. Modulo any firmware issues (staring
intently at WD). We don't use Western Digital drives anymore due to ...
er ... profoundly high failure rates we've observed in the field. These
may or may not be connected to "greenness". They are definitely
connected to buggy firmware (this is on enterprise and consumer drives,
used across a fairly wide swath of use cases).
Your mileage may vary (e.g. you might have different experiences). We
wouldn't recommend them.
> Drives such as the Seagate Barracuda® Green 2TB ST2000DL003 or Western
> Digital Caviar® Green™ 2TB WD20EARX have TLER off, and reading misc
> forums I understand that it would be a good idea to buy drives with that
> option available.
TLER and the power spindown option are good things to be able to turn
off. Otherwise you need to have a way for the MD device to deal with a
spun-down set of drives. I am not sure support is there for this right
now ... I could be wrong, I've just not seen it.
This said, spinning up and down drives is actually more stressful for
the electronics and motor. Better to leave them in one state whenever
possible. I can't remember the study I saw on this, but about a year
ago I saw a quietly published correlation between drive failures and
number of spinup/spindown cycles. My memory might be off on it, so feel
free to look for yourself (and don't rely upon my likely faulty memory).
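If you want to see how much cycling your own drives have racked up,
SMART keeps counters for it (a minimal sketch; attribute names vary by
vendor and /dev/sdX is a placeholder):
  smartctl -A /dev/sdX | grep -iE 'start_stop|power_cycle|load_cycle'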
> On the other hand some people claim that Linux/Solaris/BSD based
> software RAID may be able to work just fine with these drives.
If the drives don't try to be "smart" and do "smart things" (powerdown,
unlimited Herculean error recovery, massive sector remapping that takes
it out of service for more than 60 seconds), yeah, they should "just
work". If you have a barrel full of these drives and need to use them,
by all means, just turn off the "smart" features.
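For drives that expose these knobs, something like the following can
query and adjust them (a sketch only; /dev/sdX is a placeholder, not
every green drive honors these commands, and note that the usual
md-friendly setting is a short, capped error recovery timeout rather
than an unlimited one):
  smartctl -l scterc /dev/sdX          # query SCT ERC (TLER-style timeout)
  smartctl -l scterc,70,70 /dev/sdX    # cap read/write recovery at 7s (0.1s units)
  hdparm -S 0 /dev/sdX                 # disable the standby/spin-down timer
  hdparm -B 255 /dev/sdX               # disable APM where supported (255 = off)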
OTOH, if you are buying new drives, steer clear of them.
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: landman@scalableinformatics.com
web : http://scalableinformatics.com
http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615
Re: Green drives and RAID arrays with parity
on 25.09.2011 16:54:28 by Stan Hoeppner
On 9/25/2011 9:28 AM, Marcin M. Jessa wrote:
> On 9/25/11 1:43 PM, Stan Hoeppner wrote:
>
> [...]
>
>>
>> When you have a problem such as yours, and you ask for help on this, or
>> any other Linux kernel list, it's a really good idea to post all of the
>> relevant information up front. Why? Because most often when drives drop
>> out of arrays it is not because a disk failed or the disk has buggy
>> firmware. It's most often because of problems elsewhere in the storage
>> stack, either hardware or software.
>
> I wasn't sure which information I should attach and I did not want to
> spam the list. I was hoping someone would tell me if some of the
> relevant information was missing so I could send it when needed.
> Could you please tell me what kind of data was missing?
>
>> Crappy HBAs and/or drivers, loose or dislodged cable connectors, and
>> crappy active/passive backplanes are the primary movers when it comes to
>> good drives dropping out of arrays.
>
> In my case I don't use any HW RAID.
> My motherboard is a MSI 870A-G54 -
> http://www.msi.com/product/mb/870A-G54.html and I only use SATA and
> software RAID.
>
>> Again, I implore you to investigate all other portions of your storage
>> stack before blowing money on drives, which may not be the cause of your
>> problem.
>
> It's really hard to find the source of the failure.
> My first assumption was the drives, since I have 5 more (different) HDs
> connected to the board, two in RAID 1 (ATA drives) and they all work
> flawlessly.
> Reading feedback from all the people complaining about the same issue
> with the Seagate drives I have, I automatically assumed there was a
> problem with these particular disks.
Are the drives screwed into the case's internal drive cage? Directly
connected to the motherboard SATA ports with cables? Or, do you have
the drives mounted in any kind of SATA hot/cold swap cage? The cheap
ones of these are notorious for causing exactly the kind of drop outs
you've experienced. Post a link to your case and any drive related
peripherals.
Did you suffer a power event? I.e. a sag, brown out? Is the system
connected to a good quality working UPS?
Something else you should always mention: How long did it all "just
work" before having problems? A few hours? Days? Weeks? Months? Had
you made any hardware changes to the system recently before the failure
event? If so what? Did you upgrade your kernel/drivers recently, or
any software in the storage stack? Is the PSU flaky? How old is it? A
flaky PSU can drop drives out of arrays like hot potatoes when there is
heavy access and thus heavy current draw.
--
Stan
Re: Green drives and RAID arrays with parity
on 26.09.2011 00:21:10 by lists
On 9/25/11 4:54 PM, Stan Hoeppner wrote:
> Are the drives screwed into the case's internal drive cage?
Yes.
> Directly
> connected to the motherboard SATA ports with cables?
Yes. I've 6 SATA3 ports on the motherboard and the drives are connected
directly.
> Or, do you have the
> drives mounted in any kind of SATA hot/cold swap cage? The cheap ones of
> these are notorious for causing exactly the kind of drop outs you've
> experienced. Post a link to your case and any drive related peripherals.
I don't have a hot/cold swap cage. This is my case:
http://www.fractal-design.com/?view=product&category=2&prod=54
> Did you suffer a power event? I.e. a sag, brown out?
No, nothing like that.
> Is the system
> connected to a good quality working UPS?
It is connected to a UPS, but not an expensive one.
> Something else you should always mention: How long did it all "just
> work" before having problems? A few hours? Days? Weeks? Months?
Two of the drives were falling out of the array pretty often.
My motherboard has a built in RAID controller which I do not use.
To begin with, the BIOS was set to recognize the drives as IDE, with the
result that the two drives connected to the SATA 1 and SATA 2 ports kept
failing and dropping off the array.
They would show as UDMA/100 drives whereas the other drives were showing as:
  SATA link up 6.0 Gbps (SStatus 133 SControl 300)
  ATA-8: ST2000DL003-9VT166, CC32, max UDMA/133
I changed this BIOS setting and all the drives were then recognized the
same way, at the same speed.
I also bought new SATA cables for the failing drives, specifically for
SATA 3. That did not help and the drives kept on failing (maybe once a
week?). These 2 drives always failed at about the same time.
Shortly after, a 3rd drive failed, leaving me with a broken RAID array.
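For what it's worth, the negotiated speed of each port can be checked
against the kernel log like this (a sketch grepping the standard libata
messages):
  dmesg | grep -i 'SATA link up'      # negotiated speed per port (1.5/3.0/6.0 Gbps)
  dmesg | grep -i 'ATA-8'             # how each drive identified itself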
> Had you
> made any hardware changes to the system recently before the failure
> event?
No, there were no changes.
> If so what? Did you upgrade your kernel/drivers recently, or any
> software in the storage stack? Is the PSU flaky? How old is it? A flaky
> PSU can drop drives out of arrays like hot potatoes when there is heavy
> access and thus heavy current draw.
The PSU should be fine. I pulled it off a working server which had been
stable for a long time.
--
Marcin M. Jessa
Re: Green drives and RAID arrays with parity
on 26.09.2011 02:15:10 by Stan Hoeppner
On 9/25/2011 5:21 PM, Marcin M. Jessa wrote:
> My motherboard has a built in RAID controller which I do not use.
What make/model is the mobo?
--
Stan
Re: Green drives and RAID arrays with parity
on 26.09.2011 02:18:47 by lists
On 9/26/11 2:15 AM, Stan Hoeppner wrote:
> On 9/25/2011 5:21 PM, Marcin M. Jessa wrote:
>
>> My motherboard has a built in RAID controller which I do not use.
>
> What make/model is the mobo?
My motherboard is a MSI 870A-G54 -
http://www.msi.com/product/mb/870A-G54.html
--
Marcin M. Jessa
Re: Green drives and RAID arrays with parity
on 26.09.2011 15:03:04 by Stan Hoeppner
On 9/25/2011 7:18 PM, Marcin M. Jessa wrote:
> On 9/26/11 2:15 AM, Stan Hoeppner wrote:
>> On 9/25/2011 5:21 PM, Marcin M. Jessa wrote:
>>
>>> My motherboard has a built in RAID controller which I do not use.
>>
>> What make/model is the mobo?
>
> My motherboard is a MSI 870A-G54 -
> http://www.msi.com/product/mb/870A-G54.html
The AMD SB850 is a relatively new Southbridge chip, SATA3 AHCI at that.
Are you using AHCI mode? I'd take a deep look in dmesg output and in
/var/log/kern.log for any errors relating to SATA, AHCI, mdraid, etc.
Quite often when drives are kicked offline it is a result of timeouts
and/or other errors on the SATA channels. The log information may help
narrow down the cause, whether a Southbridge problem, driver problem,
mode setting problem, drive firmware problem, etc.
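Something like this pulls the usual libata suspects out of the logs (a
sketch; exact message wording varies by kernel version):
  dmesg | grep -iE 'failed command|hard resetting link|exception Emask'
  grep -iE 'ata[0-9]|ahci|md/raid' /var/log/kern.log | tail -n 100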
For instance, the SB850 has 6 SATA3 ports. If the drives are SATA2, and
something is buggy in firmware on either end, there could be a link
speed negotiation issue causing the drop-outs. Manually forcing the link
speed to SATA2 may help in such a situation. You may also want to try
using native SATA/IDE mode instead of AHCI if you're currently using
AHCI. If not, you may want to try AHCI mode.
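Forcing the link speed can be done with a libata boot parameter (a
sketch; the leading port number is a placeholder for whichever ata port
the flaky drives sit on):
  # On the kernel command line (e.g. in your grub config):
  libata.force=1:3.0Gbps       # limit just ata1 to 3.0 Gbps
  libata.force=3.0Gbps         # or limit every port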
Regardless, find the errors in your logs and post them to the linux-ide
list for help.
--
Stan