Use of WD20EARS with MDADM

Use of WD20EARS with MDADM

am 25.03.2010 17:20:17 von andrew dunn

Recently Dell has been selling the WD20EARS (2TB) for $90.

As this is one of the best PPG (price per gigabyte) deals I have seen,
I am considering buying eight of them for an mdadm array.

Has anyone had experience with these drives?

Note, they have 4K sectors.

Thank you for your kind responses.

Re: Use of WD20EARS with MDADM

am 25.03.2010 18:01:55 von Asdo

Andrew Dunn wrote:
> [...]

It might not have ERC / TLER settable; see the entry for the other WD
drive (WD360GD) here:
http://forums.storagereview.com/index.php/topic/28333-tler-cctl/
However, only one WD drive (the WD360GD) is listed, and there is no
information there specifically on the WD20EARS.

Good luck

Update that list if you buy it :-)

Re: Use of WD20EARS with MDADM

am 25.03.2010 18:10:39 von David Lethe

On Mar 25, 2010, at 11:20 AM, Andrew Dunn wrote:

> [...]

This is a low-cost consumer disk that is rated for a whole 2400 hours
of use per year. You get what you pay for.

Re: Use of WD20EARS with MDADM

am 25.03.2010 18:45:17 von Asdo

David Lethe wrote:
> [...]
> This is a low-cost consumer disk that is rated for a whole 2400 hours
> use in a year. You get what you pay for.

I think the "I" in RAID stands for inexpensive :-P
Jokes apart, don't you think frequent scrubbing (e.g. weekly for a
raid-6 or 2/week for raid-1/10) is enough to compensate for this?
I would be way more concerned with ERC / TLER if it proves to be
nonsettable, but I might be wrong

Re: Use of WD20EARS with MDADM

am 25.03.2010 18:58:21 von Mark Knecht

On Thu, Mar 25, 2010 at 10:01 AM, Asdo wrote:
> [...]
> Might not have ERC / TLER settable, see the other WD drive (WD360GD) here:
> http://forums.storagereview.com/index.php/topic/28333-tler-cctl/
> [...]
> Update that list if you buy it :-)

I'm wondering where the guy in the link you provided got the command:

smartctl -l scterc d:

Assuming the Linux equivalent is

smartctl -l scterc /dev/sda

I get an error message:

gandalf ~ # smartctl -l scterc /dev/sda
smartctl 5.39.1 2010-01-28 r3054 [x86_64-pc-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=======> INVALID ARGUMENT TO -l: scterc
=======> VALID ARGUMENTS ARE: error, selftest, selective,
directory[,g|s], background, scttemp[sts|hist], sasphy[,reset],
sataphy[,reset], gplog,N[,RANGE], smartlog,N[,RANGE],
xerror[,N][,error], xselftest[,N][,selftest] <=======

Use smartctl -h to get a usage summary

gandalf ~ #

Google has a few more links of folks using that command, but less than
one page.


I have 6 WD10EARS drives that I'm going to try out, possibly this
weekend, for the first time. We'll see how it goes. I'm a newbie with
RAID and LVM. My requirements are, for the most part, very modest in
terms of speed and lifetime. The RAID is just for basic backup
protection of other stuff.

Cheers,
Mark

Re: Use of WD20EARS with MDADM

am 25.03.2010 21:23:27 von John Robinson

On 25/03/2010 17:58, Mark Knecht wrote:
[...]
> I'm wondering where the guy in the link you provided got the command:
>
> smartctl -l scterc d:

In smartmontools SVN.

Cheers,

John.

Re: Use of WD20EARS with MDADM

am 26.03.2010 11:45:29 von Asdo

Mark Knecht wrote:
> I'm wondering where the guy in the link you provided got the command:
>
> smartctl -l scterc d:
>
> Assuming a Linux shift to
>
> smartctl -l scterc /dev/sda
>
> I get an error message:
>
The feature will be added in smartctl version 5.40; or get it from SVN.
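
For reference, once you have a smartctl that knows about scterc, the
usage should look roughly like this (a sketch based on the smartmontools
documentation, not something I have verified on these particular drives;
the values are in tenths of a second, so 70 = 7.0 s):

smartctl -l scterc /dev/sda          # report the current read/write ERC timers
smartctl -l scterc,70,70 /dev/sda    # set both timers to 7 seconds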

Re: Use of WD20EARS with MDADM

am 26.03.2010 21:27:17 von Peter Kieser

On 3/25/2010 9:20 AM, Andrew Dunn wrote:
> [...]
Hello,

The 4096-byte-sector drives work fine with mdadm. The main problem you
are going to run into with the WDC Green drives is their 8-second "idle"
setting: after 8 seconds, by default, the drive parks its heads. This
can lead to an amazingly high Load Cycle Count (LCC) after just a month
of operation, because typical disk access arrives at roughly that
interval, so the drive parks and unparks in repeated cycles.

To fix this, find a utility called wdidle3 (I have it, if you are unable
to locate it) and set the idle timeout on the drives to 300 seconds.
These drives do not support TLER; there is no longer any way to enable
it via firmware - WDC removed that ability sometime last year.
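
A quick way to keep an eye on this across a set of drives (just a
sketch; any smartmontools version can read the attribute):

for d in /dev/sd?; do
    printf '%s ' "$d"
    smartctl -A "$d" | awk '/Load_Cycle_Count/ {print $NF}'
done

If the raw value climbs by hundreds per day on a mostly idle box, the
8-second timer is almost certainly the cause.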

-Peter

Re: Use of WD20EARS with MDADM

am 26.03.2010 21:59:27 von Mark Knecht

On Fri, Mar 26, 2010 at 1:27 PM, Peter Kieser wrote:

> To fix this, find a utility called wdidle3 (I have it, if you are
> unable to locate it) and set the idle timeout on the drives to 300
> seconds. [...]

This seems to be a Windows program? I don't see a Linux version in
Gentoo portage.

I could run Windows once to set it, if the settings are then
maintained - or do you have a Linux solution?

- Mark

Re: Use of WD20EARS with MDADM

am 26.03.2010 22:01:22 von Peter Kieser

On 3/26/2010 1:59 PM, Mark Knecht wrote:
> This seems to be a Windows program? I don't see a Linux version in
> Gentoo portage.
>
> I could run Windows once to set it, if the settings are then
> maintained - or do you have a Linux solution?

It's a proprietary DOS application written by WDC that changes the
settings in the hard drive's firmware. The changes are permanent. You
can boot off a USB key or floppy with DOS on it to run the application.

-Peter

Re: Use of WD20EARS with MDADM

am 26.03.2010 22:06:08 von Mark Knecht

On Fri, Mar 26, 2010 at 2:01 PM, Peter Kieser wrote:
> It's a proprietary DOS application written by WDC that changes the
> settings in the hard drive's firmware. The changes are permanent. You
> can boot off a USB key or floppy with DOS on it to run the application.
>
> -Peter

OK, so possibly FreeDOS will work?

Thanks,
Mark

Re: Use of WD20EARS with MDADM

am 26.03.2010 22:16:33 von Mark Knecht

On Fri, Mar 26, 2010 at 2:06 PM, Mark Knecht wrote:
> [...]
> OK, so possibly FreeDOS will work?

Here's some data from a WD10EARS. How do I read the LCC_COUNT?


SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail Always       -       0
  3 Spin_Up_Time            0x0027   129   128   021    Pre-fail Always       -       6525
  4 Start_Stop_Count        0x0032   100   100   000    Old_age  Always       -       19
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age  Always       -       0
  9 Power_On_Hours          0x0032   099   099   000    Old_age  Always       -       1053
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age  Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age  Always       -       18
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age  Always       -       5
193 Load_Cycle_Count        0x0032   191   191   000    Old_age  Always       -       27557
194 Temperature_Celsius     0x0022   121   116   000    Old_age  Always       -       26
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age  Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age  Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age  Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age  Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age  Offline      -       0


Is it currently at 191 with a fail level of 27557?

This drive is in a desktop machine - non-RAID - that's pretty much
always turned on and is maybe 4-6 weeks old at this point.

Thanks,
Mark

Re: Use of WD20EARS with MDADM

am 26.03.2010 22:19:39 von Richard Scobie

Mark Knecht wrote:

> 193 Load_Cycle_Count        0x0032   191   191   000    Old_age  Always       -       27557
>
> Is it currently at 191 with a fail level of 27557?

No, it's attribute 193, Load_Cycle_Count (LCC), and the raw value is
27557; the 191 is the normalized VALUE, which counts down from 200
towards the threshold.

Regards,

Richard

Re: Use of WD20EARS with MDADM

am 26.03.2010 22:45:43 von Matt Garman

On Fri, Mar 26, 2010 at 01:27:17PM -0700, Peter Kieser wrote:
> The 4096-byte sector drives work fine with mdadm. The main problem
> you are going to run into with the WDC Green drives is their 8
> second "idle" setting. After 8 seconds, by default, the drive
> parks its heads. This can lead to an amazingly high Load Cycle
> Count (LLC) after just a month of operation due to the fact that
> most disk access happens around that time causing the drive to
> park and unpark in repeated cycles.

For what it's worth... that head parking feature actually saves a
measurable amount of power. On my system, with four drives, there
is about a 10 watt (AC) difference between all four heads parked and
all four heads non-parked. This finding is consistent with the
power numbers suggested by SilentPCReview's reviews of these drives.

> To fix this, find a utility called wdidle3 (I have it, if you are
> unable to locate it) and set the idle timeout on the drives to 300
> seconds. These drives do not support TLER, there is no ability to
> set it via firmware anymore - WDC removed this ability sometime
> last year.

This will prevent the rapidly increasing Load Cycle Count SMART
counter. However, in my opinion, it also removes a useful
power-saving feature of these drives. In other words, my system is
mostly idle; I want the heads to be parked the majority of the time.
Instead, without the wdidle3 hack, they constantly park/unpark
despite an otherwise idle system.

Here's the problem I've been unable to solve: if my system is truly
idle for, say 10 minutes, then why don't my heads stay parked for 10
minutes? It appears that the heads will park, then five minutes
later, *something mysterious* will cause them to unpark.

I did some experimenting with this several months ago. See the
list archives for August 20, 2009, subject "linux disk access when
idle"[1].

As far as I can tell, I disabled every single daemon on my system,
but still could not get the heads to park for more than five
minutes.

My point in all this is: I'd rather tune my software (Linux) to work
better with my hardware than remove what I consider to be a useful
power-saving feature. I haven't revisited this in a while, but the last
time I tried, I couldn't find the guilty daemon or kernel setting
responsible for the constant head un-parking.
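
One more thing worth trying that I haven't gone back and done yet: the
kernel's block_dump knob logs which process touches which block device,
which might finally name the culprit. A rough sketch (assumes a kernel
with /proc/sys/vm/block_dump; stop syslog first, otherwise writing the
log messages to disk feeds the very activity you are trying to trace):

/etc/init.d/syslog stop        # or whatever your distro's syslog service is called
echo 1 > /proc/sys/vm/block_dump
sleep 600
dmesg | grep -E 'READ|WRITE|dirtied'
echo 0 > /proc/sys/vm/block_dump
/etc/init.d/syslog start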

[1] http://marc.info/?l=linux-raid&m=125078611926294&w=2

-Matt


Re: Use of WD20EARS with MDADM

am 26.03.2010 23:38:14 von Mark Knecht

On Fri, Mar 26, 2010 at 2:19 PM, Richard Scobie wrote:
> No, it's attribute 193, Load_Cycle_Count (LCC), and the raw value is
> 27557.

Thanks. I don't know if that's high or low. To me it's just a number.

Following along from Matt Garman's reply, I went back and looked at
when I built this machine for my dad. (How many here have 85-year-old
fathers who have run Linux for 7 years instead of Windows? Give him a
cheer!) ;-) I agree with Matt about trying to save power.

This machine is 40 days old today, at least counting since he first
booted it at home. It's been up and running since then.

40 days is 3,456,000 seconds.

3,456,000 / 27557 is one count every 125 seconds on average. So about 2
minutes.

I wonder if I'm seeing something like Matt is seeing, with the drive
continually being woken up. I know most of the time it just sits idle
from a user's perspective, but I've done nothing to stop daemons or
anything like that. It runs screen savers, gets used every day for an
hour or two, but is always powered up so that I can administer it from
350 miles away.

Interesting. I'll watch it every so often and see if it's increasing
in a linear fashion, as I might guess Matt's data would suggest, or
doing something different. Since I sent the data a while ago it has
increased to 27594 (+37). That was roughly an hour ago and my dad
hasn't been logged on during this time.
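
In case anyone wants the same figure per drive without doing the
arithmetic by hand, something like this should work (just a sketch; it
divides the SMART power-on time by the load-cycle raw value, so it only
gives a lifetime average, not the current rate):

smartctl -A /dev/sda | awk '
    /Power_On_Hours/   {hours = $NF}
    /Load_Cycle_Count/ {lcc   = $NF}
    END {if (lcc) printf "%.0f seconds per load cycle\n", hours * 3600 / lcc}'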

- Mark

Re: Use of WD20EARS with MDADM

am 26.03.2010 23:47:59 von Peter Kieser

On 3/26/2010 3:38 PM, Mark Knecht wrote:
> I wonder if I'm seeing something like Matt is seeing with the drive
> continually being woken up. I know most of the time it just sits idle
> from a user's perspective, but I've done nothing to stop daemons or
> anything like that. It runs screen savers, gets used every day for an
> hour or two, but is always powered up so that I can administer it from
> 350 miles away.
>

I originally noticed the problem after having the drives for four
weeks. The LCC was at around 27,000 on each drive until I changed the
timeout on the drives to 300 seconds.

-Peter

Re: Use of WD20EARS with MDADM

am 26.03.2010 23:50:18 von Matt Garman

On Fri, Mar 26, 2010 at 5:38 PM, Mark Knecht wrote:
> I wonder if I'm seeing something like Matt is seeing with the drive
> continually being woken up. I know most of the time it just sits idle
> from a user's perspective, but I've done nothing to stop daemons or
> anything like that. It runs screen savers, gets used every day for an
> hour or two, but is always powered up so that I can administer it from
> 350 miles away.

In my case, I want to emphasize the following: my WD Green drives are
strictly a data store. The system runs from a compact flash card.
Though I disabled them anyway, many daemons, such as syslog, sshd,
cron, fetchmail, etc, should only affect the *system* drive.

However, daemons like nfs and smbd can obviously affect the data
store. Even so, I wouldn't expect them to cause a disk access unless
a request is made.

The point is, in my opinion, a non-system partition should be that
much easier to make "truly" idle... still, I can't figure out how to
do it.

-Matt

Re: Use of WD20EARS with MDADM

am 26.03.2010 23:51:54 von Peter Kieser

On 3/26/2010 3:50 PM, Matt Garman wrote:
> The point is, in my opinion, a non-system partition should be that
> much easier to make "truly" idle... still, I can't figure out how to
> do it.
>
> -Matt
>

I must also note that I was seeing the drives stop and start even when
there *was* activity on the disks.

-Peter

Re: Use of WD20EARS with MDADM

am 27.03.2010 00:01:30 von David Rees

On Fri, Mar 26, 2010 at 3:38 PM, Mark Knecht wrote:
> [...]
> I wonder if I'm seeing something like Matt is seeing, with the drive
> continually being woken up. I know most of the time it just sits idle
> from a user's perspective, but I've done nothing to stop daemons or
> anything like that. It runs screen savers, gets used every day for an
> hour or two, but is always powered up so that I can administer it from
> 350 miles away.

Why leave the thing on 24 hours a day if it's only used for 1-2 of
them? Save wear on your drive and the rest of the machine, save your
dad a few bucks on his electricity bill, and shut the thing down when
it's not used. Then use WoL to wake it up if you need to admin it, or
heck, just use the BIOS/ACPI wakeup feature to put the thing to sleep
at night and wake it up automatically in the morning - at least then
you'll cut its run-time in half.

-Dave

Re: Use of WD20EARS with MDADM

am 28.03.2010 00:31:42 von Mark Knecht

On Fri, Mar 26, 2010 at 4:01 PM, David Rees wrote:
> Why leave the thing on 24 hours a day if it's only used for 1-2 of
> them? Save wear on your drive and the rest of the machine, save your
> dad a few bucks on his electricity bill, and shut the thing down when
> it's not used. Then use WoL to wake it up if you need to admin it [...]
>
> -Dave

That's really up to him. He doesn't hear well and doesn't seem to care
about his utility bill, so it's easy for me, or has been in the past.
His last machine ran 5 1/2 years powered up all the time. I think the
alternate view is that keeping the drive at a relatively constant
temperature is a good thing also. Who knows?

As for WoL, I don't know much about it, but his machine is a prime
target for login attempts through his router, and script kiddies are
forever trying to log in with name after name after name. If it could
be made to respond to my IP only, that would be great, but my IP moves
around now and then, so how do I make that work?
(NOT a topic for this list...)

Cheers,
Mark

Re: Use of WD20EARS with MDADM

am 28.03.2010 18:19:17 von st0ff

On 25.03.2010 18:45, Asdo wrote:
> [...]
> I would be far more concerned about ERC/TLER if it proves not to be
> settable, but I might be wrong.

Just for those who like to read: ERC is a feature defined in the ATA
specifications, which is why the "quick hack" for smartctl was so easy.
But the specs also note that the change made by an SCT ERC command will
not survive a power cycle.

So y'all will probably need some udev-triggered daemon which runs when
a disk is connected, checks whether the disk conforms to the ATA spec,
and then finds out whether it is part of a RAID array. If all answers
yield "yes", it should set SCT ERC to a configurable value, and
probably issue some other "configure the drive" commands (for tuning
the idle timeout of the WDxxEARS).

I'll have to think about that as a project again ...
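
A rough sketch of what such a hook could look like with nothing more
than udev, mdadm and smartctl (the rule file name, the script path and
the 7-second value are only illustrative, and the mdadm --examine test
only catches whole-disk members; partitioned members would need the
same check per partition):

# /etc/udev/rules.d/60-set-scterc.rules
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/usr/local/sbin/set-scterc.sh /dev/%k"

# /usr/local/sbin/set-scterc.sh
#!/bin/sh
dev="$1"
# only touch drives that carry an md superblock
if mdadm --examine "$dev" >/dev/null 2>&1; then
    # 70 = 7.0 seconds for both read and write ERC; needs smartctl >= 5.40/SVN
    smartctl -l scterc,70,70 "$dev" >/dev/null 2>&1
fi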

All the best,
Stefan


WD20EARS data

am 29.03.2010 18:59:36 von unknown

Hi @all,

I tested the WD20EARS at work today. They correctly report NO SCT ERC
capability (unlike the WDxxEADS from November 2009 on). Trying the
commands nevertheless results in "command aborted".

So my recommendation would be: if you need a fast, large array and want
to use WD, keep enough redundancy and a cold spare. (Not least because I
also returned two drives to Western Digital today; they had more than
100 reallocated and pending sectors and had already dropped out of
RAIDs.)

Or use the Hitachi Deskstar HDS722020ALA330 - these can set ERC timeouts
and they cost about the same. We have sold about 10 times as many
Hitachis as WDs, but have had about the same number of defective drives.

All the best,
Stefan

Re: WD20EARS data

am 29.03.2010 19:13:27 von unknown

On 29.03.2010 18:59, Stefan /*St0fF*/ Hübner wrote:
> I tested the WD20EARS at work today. They correctly report NO SCT ERC
> capability (unlike the WDxxEADS from November 2009 on). [...]
> We have sold about 10 times as many Hitachis as WDs, but have had
> about the same number of defective drives.

I'm sorry, the comparison is with the older WD20EADS! Sorry...


Re: Use of WD20EARS with MDADM

am 14.04.2010 21:53:43 von Bill Davidsen

Stefan *St0fF* Huebner wrote:
> [...]
> So y'all will probably need some udev-triggered daemon which runs when
> a disk is connected, checks whether the disk conforms to the ATA spec,
> and then finds out whether it is part of a RAID array. If all answers
> yield "yes", it should set SCT ERC to a configurable value, and
> probably issue some other "configure the drive" commands (for tuning
> the idle timeout of the WDxxEARS).
>
> I'll have to think about that as a project again ...

I would think it easier to just add a small section to rc.local, unless
you anticipate lots of errors during boot and after udev gets going.
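
Something along these lines, say (only a sketch; it assumes a smartctl
new enough to know scterc, and it simply reports drives that reject the
command):

# /etc/rc.local (excerpt)
for d in /dev/sd[a-z]; do
    smartctl -l scterc,70,70 "$d" >/dev/null 2>&1 || echo "no SCT ERC on $d"
done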

--
Bill Davidsen
"We can't solve today's problems by using the same thinking we
used in creating them." - Einstein


Re: Use of WD20EARS with MDADM

am 19.04.2010 16:17:15 von Phillip Susi

On 4/14/2010 3:53 PM, Bill Davidsen wrote:
> [...]

I believe these disks only come in the "green" variety. I recently
picked up a 1.5 TB version for testing and cheap bulk storage, and I
would not suggest using them in a RAID array, because the green drives'
firmware automatically parks the heads after 8 seconds of inactivity
and reduces the RPM of the disk. The constant parking can quickly wear
out the heads under heavy use, and there is no way to disable this
"feature".

Re: Use of WD20EARS with MDADM

am 21.04.2010 15:20:25 von Bill Davidsen

Phillip Susi wrote:
> I believe these disks only come in the "green" variety. I recently
> picked up a 1.5 TB version for testing and cheap bulk storage, and I
> would not suggest using them in a RAID array, because the green
> drives' firmware automatically parks the heads after 8 seconds of
> inactivity and reduces the RPM of the disk. The constant parking can
> quickly wear out the heads under heavy use, and there is no way to
> disable this "feature".

I hear this said, but I don't have any data to back it up. Drive vendors
aren't stupid, so if the parking feature were likely to cause premature
failures under warranty, I would expect that the feature would not be
there, or that the drive would be made more robust. Maybe I have too
much faith in greed as a design goal, but I have to wonder whether load
cycles are as destructive as is commonly assumed.

I'd love to find some real data; anecdotal stories about older drives
are not overly helpful. Clearly there is a trade-off between energy
saving, response, and durability - I just don't have any data from a
large population of new (green) drives.

--
Bill Davidsen
"We can't solve today's problems by using the same thinking we
used in creating them." - Einstein


Re: Use of WD20EARS with MDADM

am 21.04.2010 15:45:38 von Tim Small

Bill Davidsen wrote:
>> [...]
>
> I hear this said, but I don't have any data to back it up. Drive
> vendors aren't stupid, so if the parking feature is likely to cause
> premature failures under warranty, I would expect that the feature
> would not be there, or that the drive would be made more robust. Maybe
> I have too much faith in greed as a design goal, but I have to wonder
> if load cycles are as destructive as seems to be the assumption.

I've used the 500 GB and 2 TB WD consumer green drives in md arrays -
not many, about 10 in total. The older 500 GB drives (like many WD 2.5"
drives) did frequent head unloads/reloads under Linux, due, I believe,
to interactions with the default timings of the Linux block layer. This
can be fixed using WD's wdidle3.exe under DOS. You can monitor it using
smartctl - look at the raw value for unloads - but it shouldn't be an
issue with newer drives.

Both the 2 TB and 500 GB drives seem to lock up and need resetting by
Linux (this happens automatically) if you poll the SMART status, e.g.
if you run smartd, or munin+smartctl, etc. (The 2 TB drives that I've
had also seem to occasionally lock up under other workloads, but you
might be able to live with this.) The SMART issue might be a firmware
bug, but WDC say "contact your Linux vendor" - they only provide
support for Windows - so I must get around to polling one of them for
SMART under Windows XP for comparison.

If you're going to try to use a load of them in a RAID, you need to be
careful about vibration damping - they may not cope that well with
vibration (check the SMART high-fly-writes and hardware-ECC-recovered
raw values, if available). They are also not rated for 24/7 operation,
I believe.

Tim.

--
South East Open Source Solutions Limited
Registered in England and Wales with company number 06134732.
Registered Office: 2 Powell Gardens, Redhill, Surrey, RH1 1TQ
VAT number: 900 6633 53 http://seoss.co.uk/ +44-(0)1273-808309


Re: Use of WD20EARS with MDADM

am 21.04.2010 16:32:44 von Mikael Abrahamsson

On Wed, 21 Apr 2010, Bill Davidsen wrote:

> I hear this said, but I don't have any data to back it up. Drive vendors
> aren't stupid, so if the parking feature is likely to cause premature
> failures under warranty, I would expect that the feature would not be there,
> or that the drive would be made more robust. Maybe I have too much faith in
> greed as a design goal, but I have to wonder if load cycles are as
> destructive as seems to be the assumption.

What I think people are worried about is that a drive might have X
load/unload cycles in its data sheet (300k or 600k seem to be normal
figures), and reaching that in 1-2 years of "normal" use (normal
according to the user who is running it 24/7) might be worrying - and
understandably so.

On the other hand, these drives seem to be designed for 8-hour-per-day
desktop use, so running them as a 24/7 fileserver under Linux is not
what they were designed for. I have no idea what will happen when the
load/unload cycle count goes over the data sheet number, but my guess
is that it was put there for a reason.

> I'd love to find some real data, anecdotal stories about older drives are not
> overly helpful. Clearly there is a trade-off between energy saving, response,
> and durability, I just don't have any data from a large population of new
> (green) drives.

My personal experience with the WD20EADS drives is that around 40% of
them failed within the first year of operation. That is not from a
large population of drives, though, and it wasn't due to load/unload
cycles. I had no problem getting them replaced under warranty, but I'm
running RAID-6 nowadays :P

--
Mikael Abrahamsson email: swmike@swm.pp.se

Re: Use of WD20EARS with MDADM

am 21.04.2010 17:14:04 von Phillip Susi

On 4/21/2010 9:20 AM, Bill Davidsen wrote:
> I hear this said, but I don't have any data to back it up. Drive vendors
> aren't stupid, so if the parking feature is likely to cause premature
> failures under warranty, I would expect that the feature would not be
> there, or that the drive would be made more robust. Maybe I have too
> much faith in greed as a design goal, but I have to wonder if load
> cycles are as destructive as seems to be the assumption.

Indeed, I think you have too much faith in people doing sensible
things, especially when their average customer isn't placing the drive
in a high-use environment, they know it, and they suggest against doing
so.

> I'd love to find some real data, anecdotal stories about older drives
> are not overly helpful. Clearly there is a trade-off between energy
> saving, response, and durability, I just don't have any data from a
> large population of new (green) drives.

I've not seen any anecdotal stories, but I have seen plenty of reports
with real data showing a large number of head unloads from the SMART
data after a relatively short period of use. Personally mine has a few
hundred so far and I have not even used it for real storage yet, only
testing. The specifications say it's good for 300,000 cycles, so do the
math... getting 5 unloads per minute would lead to probable failure
after 41 days. Granted that is about worst case, but still something to
watch out for. In order to make it the entire 3 year warranty period,
you need to stay under 11.4 unloads per hour. If you have very little
IO activity, or VERY MUCH, then this is entirely possible, but more
moderate loads in the middle have been observed to cause hundreds of
unloads per hour.

Given that, and the fact that WD themselves have stated that you should
not use these drives in a raid array, I'd either stay away, or watch out
for this problem and try to take action to avoid and monitor it.


Re: Use of WD20EARS with MDADM

am 21.04.2010 17:52:17 von Simon Matthews

On Wed, Apr 21, 2010 at 6:20 AM, Bill Davidsen wrote:
> [...]
> I'd love to find some real data; anecdotal stories about older drives
> are not overly helpful. Clearly there is a trade-off between energy
> saving, response, and durability - I just don't have any data from a
> large population of new (green) drives.


It's not hard data, but there was discussion of a similar issue four
years ago related to load/unload cycles on laptops under Ubuntu:
https://bugs.launchpad.net/ubuntu/+source/acpi-support/+bug/59695

Simon

Re: Use of WD20EARS with MDADM

am 21.04.2010 18:42:04 von Bill Davidsen

Phillip Susi wrote:
> [...]
> The specifications say it's good for 300,000 cycles, so do the math...
> getting 5 unloads per minute would lead to probable failure after 41
> days. Granted that is about worst case, but still something to watch
> out for. In order to make it the entire 3 year warranty period, you
> need to stay under 11.4 unloads per hour. [...]

Part of this is my feeling that no one really knows whether the drive
fails after N loads, because even if WD could set the unload time down,
the cycles take time to happen, so I would bet that they are making an
educated guess. The other part is that there are lots of clerical tasks
which would hit the drive, under Windows, single drive, 3-5 times a
minute. Data entry comes to mind, customer support, print servers, etc.
Granted, these are probably 7x5 hours a week, but I'm thinking 2/min,
7 hr/day, 200 days/yr... 168k/yr, and that's not the worst case.

Having run lots of drives (some TB of 73GB 15k rpm LVD320), MTTF is
interesting, because the curve has spikes at the front from infant
mortality and at the end from old age, but it was damn quiet in the
middle. I'd love to see the data on these, not because I'm going to run
them but just to keep current, so when someone calls me and says they
got a great deal on green drives, I'll know what to tell them.

--
Bill Davidsen
"We can't solve today's problems by using the same thinking we
used in creating them." - Einstein


Re: Use of WD20EARS with MDADM

am 21.04.2010 19:36:07 von Mark Knecht

On Wed, Apr 21, 2010 at 8:14 AM, Phillip Susi wrote:

> The specifications say it's good for 300,000 cycles, so do the math...
> getting 5 unloads per minute would lead to probable failure after 41
> days. [...]
>
> Given that, and the fact that WD themselves have stated that you
> should not use these drives in a raid array, I'd either stay away, or
> watch out for this problem and try to take action to avoid and monitor
> it.

I think I reported this earlier, but here is a WD10EARS drive in a
standard Gentoo Linux desktop machine. The drive has 1661 hours powered
up and 43508 load cycles. That's 26/hour, which works out to about 14
months before it will be out of spec at 300K cycles.

This machine only gets a few hours of use a day but is generally
powered up all the time. I don't know why Linux wakes this drive up
roughly every two minutes - assuming it's Linux and not the drive
itself or something on the motherboard - but it does.

I tried three of these drives in a RAID-1 in another machine and they
simply didn't work. They went offline over and over, with big wait
times when they were working. I made no other changes except switching
to a WD 500GB RAID Edition drive, and now I get almost no load cycle
counts at all.

- Mark

gandalf ~ # smartctl -i /dev/sda
smartctl 5.39.1 2010-01-28 r3054 [x86_64-pc-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Device Model:     WDC WD10EARS-00Y5B1
Serial Number:    WD-WCAV55464493
Firmware Version: 80.00A80
User Capacity:    1,000,204,886,016 bytes
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Wed Apr 21 10:35:32 2010 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

gandalf ~ #
gandalf ~ # smartctl -A /dev/sda
smartctl 5.39.1 2010-01-28 r3054 [x86_64-pc-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail Always       -       0
  3 Spin_Up_Time            0x0027   129   128   021    Pre-fail Always       -       6525
  4 Start_Stop_Count        0x0032   100   100   000    Old_age  Always       -       21
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age  Always       -       0
  9 Power_On_Hours          0x0032   098   098   000    Old_age  Always       -       1661
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age  Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age  Always       -       20
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age  Always       -       5
193 Load_Cycle_Count        0x0032   186   186   000    Old_age  Always       -       43508
194 Temperature_Celsius     0x0022   121   116   000    Old_age  Always       -       26
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age  Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age  Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age  Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age  Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age  Offline      -       0

gandalf ~ #

Re: Use of WD20EARS with MDADM

am 21.04.2010 20:40:19 von Tim Small

Mark Knecht wrote:
> This machine only gets a few hours of use a day but is generally
> powered up all the time. I don't know why Linux wakes this drive up
> roughly every two minutes, assuming it's Linux and not the drive
> itself or something on the motherboard, but it does.
>


This is documented here:

https://ata.wiki.kernel.org/index.php/Known_issues#Drives_which_perform_frequent_head_unloads_under_Linux

If you want to fix it, then wdidle3.exe worked for me. Search for:

wdidle3_1_00.zip


Tim.

Re: Use of WD20EARS with MDADM

am 21.04.2010 21:01:41 von Mark Knecht

On Wed, Apr 21, 2010 at 11:40 AM, Tim Small wrote:
> This is documented here:
>
> https://ata.wiki.kernel.org/index.php/Known_issues#Drives_which_perform_frequent_head_unloads_under_Linux
>
> If you want to fix it, then wdidle3.exe worked for me. Search for:
>
> wdidle3_1_00.zip
>
> Tim.

Yeah, that's been reported here before, but how does someone run this
Windows program on a remote machine that boots only Linux? Even if it
were a DOS executable, the machine has no floppy. I presume you are
dual-booting with Windows? If so, maybe I'll install Windows the next
time I visit.

I presume I cannot run this in a VM?

It's unfortunate that this drive doesn't respond to APM commands either.


- Mark

Re: Use of WD20EARS with MDADM

am 21.04.2010 21:24:57 von Richard Scobie

Bill Davidsen wrote:

> I'd love to find some real data, anecdotal stories about older drives
> are not overly helpful. Clearly there is a trade-off between energy
> saving, response, and durability, I just don't have any data from a
> large population of new (green) drives.

I can add some very limited (6 drives) observations on the 2TB RE
version, the WD2002FYPS.

These are green drives and have been running continuously in an md
RAID-5 for eight months without any problems, and no drive settings
have been altered. This array is idle except for a nightly rsync
session of an hour or so.

Looking at the SMART data, the load cycle count is 23862, which sounds
like a lot, but the current VALUE entry is 193 (it started at 200 eight
months ago) and the THRESHOLD value is 0.

So, if I am interpreting this trend correctly, the threshold will be
reached after about 682,000 load cycles (7 VALUE points consumed by
23862 cycles is roughly 3400 cycles per point, times 200 points), which
at the current rate is about 228 months away.

Regards,

Richard



Re: Use of WD20EARS with MDADM

am 21.04.2010 21:31:27 von Clinton Lee Taylor

Greetings ...

On 21 April 2010 21:01, Mark Knecht wrote:
> On Wed, Apr 21, 2010 at 11:40 AM, Tim Small wr=
ote:
>> Mark Knecht wrote:
>>> This machine only gets a few hours of use a day but is generally
>>> powered up all the time. I don't know why Linux wakes this drive up
>>> roughly every two minutes, assuming it's Linux and not the drive
>>> itself or something on the motherboard, but it does.
>>>
>>
>>
>> This is documented here:
>>
>> https://ata.wiki.kernel.org/index.php/Known_issues#Drives_which_perform_frequent_head_unloads_under_Linux
>>
>> If you want to fix it, then wdidle3.exe worked for me.  Search for:
>>
>> wdidle3_1_00.zip
>>
>>
>> Tim.
>>
>
> Yeah, that's been reported here before, but how does someone run this
> Windows program on a remote machine that boots only Linux? Even if it
> was a DOS executable the machine has no floppy. I presume you are
> dual-boot with Windows? If so maybe I'll install Windows the next time
> I visit.
Why don't you download a FreeDOS boot floppy image, add the DOS
program to the image, copy the image to your boot partition, and add an
entry in your Boot Manager to boot the image using something like the
syslinux memdisk image loader?

On CentOS, install the syslinux package using "yum install syslinux"
Copy memdisk to your boot partition

From the www.freedos.org web site, download the floppy image under
http://www.freedos.org/freedos/files/ ...

wget -vb http://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/distributions/1.0/fdboot.img

Mount the downloaded image file with "mount fdboot.img /mnt/floppy/ -o rw,loop"

Copy DOS program into the floppy image.

Add to your grub.conf file ...

title FreeDOS - Boot Image
kernel memdisk
initrd fdboot.img

Now reboot your system that does not have a floppy drive off the
FreeDOS image and run the DOS program to make changes. The only thing
I can think of that might be a problem is that you might have the
drive attached to a SATA port that is not supported by the DOS program,
or something like that.

This is what I use to do BIOS and other firmware updates that Linux
does not have a tool for.
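
Put together, the whole procedure is roughly the following (the memdisk
path is the one the CentOS syslinux package installs and may differ
elsewhere; wdidle3.exe stands in for whatever DOS program you need):

yum install syslinux
cp /usr/share/syslinux/memdisk /boot/
wget http://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/distributions/1.0/fdboot.img
mkdir -p /mnt/floppy
mount fdboot.img /mnt/floppy/ -o rw,loop   # loop-mount the floppy image
cp wdidle3.exe /mnt/floppy/                # drop the DOS program into it
umount /mnt/floppy/
cp fdboot.img /boot/                       # where the grub entry above expects it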

Mailed
LeeT

P.S. Sorry for the Off Topic post, but I hope it helps.

Re: Use of WD20EARS with MDADM

am 21.04.2010 21:33:01 von Phillip Susi

On 4/21/2010 3:01 PM, Mark Knecht wrote:
> It's unfortunate that this drive doesn't respond to APM commands either.

Yes it is.... would be nice if they made it respond to the standard APM
command meant to configure this kind of behavior instead of creating a
proprietary command and dos utility to invoke it. It's also a shame the
drive lies about its physical sector size and has no way to turn that
off. Might be a nice project to reverse engineer this utility on
windows to figure out the command it sends down and add it to hdparm.

Re: Use of WD20EARS with MDADM

am 21.04.2010 22:36:13 von Mark Knecht

On Wed, Apr 21, 2010 at 12:33 PM, Phillip Susi wrote:
> On 4/21/2010 3:01 PM, Mark Knecht wrote:
>> It's unfortunate that this drive doesn't respond to APM commands either.
>
> Yes it is.... would be nice if they made it respond to the standard APM
> command meant to configure this kind of behavior instead of creating a
> proprietary command and dos utility to invoke it.  It's also a shame the
> drive lies about its physical sector size and has no way to turn that
> off.  Might be a nice project to reverse engineer this utility on
> windows to figure out the command it sends down and add it to hdparm.

I thought the same thing, so I worked with the hdparm developer (Mark
Lord) and gave him data two months back. There was a command he had me
run (hdparm --Istdout /dev/sda) that dumps the drive's identify data,
and he spent time looking at the tables only to decide that
WD simply isn't identifying anywhere in the tables that the drive has
4K physical sectors. Given that, it doesn't seem that hdparm could ever
do anything automatic that would be safe.
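
For what it's worth, what the kernel ends up believing can be checked
from userspace; on these drives both values typically come back as 512,
because the identify data never advertises the 4K physical size
(assuming a reasonably recent kernel and that the drive is sda):

cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size
hdparm -I /dev/sda | grep -i 'sector size'   # newer hdparm prints both, if the drive reports them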

- Mark

Re: Use of WD20EARS with MDADM

am 21.04.2010 23:58:23 von Berkey B Walker

Mikael Abrahamsson wrote:
> On Wed, 21 Apr 2010, Bill Davidsen wrote:
>
>> I hear this said, but I don't have any data to back it up. Drive
>> vendors aren't stupid, so if the parking feature is likely to cause
>> premature failures under warranty, I would expect that the feature
>> would not be there, or that the drive would be made more robust.
>> Maybe I have too much faith in greed as a design goal, but I have to
>> wonder if load cycles are as destructive as seems to be the assumption.
>
> What I think people are worried about is that a drive might have X
> load/unload cycles in the data sheet (300k or 600k seem to be normal
> figures) and reaching this in 1-2 years of "normal" (according to the
> user who is running it 24/7) might be worrying (and understandably so).
>
> Otoh these drives seem to be designed for desktop 8 hour per day use,
> so running them as a 24/7 fileserver under linux is not what they were
> designed for. I have no idea what will happen when the load/unload
> cycles goes over the data sheet number, but my guess is that it was
> put there for a reason.
>
>> I'd love to find some real data, anecdotal stories about older drives
>> are not overly helpful. Clearly there is a trade-off between energy
>> saving, response, and durability, I just don't have any data from a
>> large population of new (green) drives.
>
> My personal experience from the WD20EADS drives is that around 40% of
> them failed within the first year of operation. This is not from a
> large population of drives though and wasn't due to load/unload
> cycles. I had no problem getting them replaced under warranty, but I'm
> running RAID6 nowadays :P
>
Sorry, you sound like a factory droid. *I* see no reason for early
failure besides cheap mat'ls in construction. Were these assertions
of short life to be true, I would campaign against the drive maker. (I
think that they are just normalizing failure rate against warranty
claims.) Buy good stuff. I *wish* I could define the term by mfg. It
seems Seagate, & WD don't hack it. The Japanese drives did, but since
the $ dropped -

One thing seemingly missed is the relationship between storage density
and drive temperature variations. Hard drive mfgs are going to be in deep
doodoo when the SSD folks get price/perf in the lead lane. This year, I
predict. And maybe another 2 for long term reliability to be in the lead.

I believe that many [most?] RAID users are looking for results (long
term archival) that are not intended in the design. We are about 2
generations away from that being a reality - I think. For other users,
I would suggest a mirror machine, with both machines being scrubbed
daily, and media being dissimilar in mfg and mfg date.

I can't wait until Neil gets to (has to) play/work with the coming tech.
Neat things are coming.

b-


Re: Use of WD20EARS with MDADM

am 22.04.2010 02:51:02 von Steven Haigh

+1 Insightful.

This is a gem of a tip - so much so that I've archived the text of this as
I KNOW it will come in handy soon.

It may have been semi-off topic, but it is worth its weight in gold!

Thankyou!

On Wed, 21 Apr 2010 21:31:27 +0200, Clinton Lee Taylor
wrote:
> Greetings ...
>
> [FreeDOS boot image instructions snipped]
>
> P.S. Sorry for the Off Topic post, but I hope it helps.

--
Steven Haigh

Email: netwiz@crc.id.au
Web: http://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299

wdidle3

am 22.04.2010 13:40:37 von Tim Small

Mark Knecht wrote:

>> [wdidle3.exe]...

> Yeah, that's been reported here before, but how does someone run this
> Windows program on a remote machine that boots only Linux? Even if it
> was a DOS executable the machine has no floppy.

It is a DOS executable. As previously mentioned, you can use:

USB floppy emulation (if the BIOS supports it)
USB HD emulation
Booting from an HD partition (e.g. temporarily break RAID for one of the
swap partitions, and install DOS in that partition).
syslinux + memdisk + image file
pxelinux + memdisk + image file

I use the latter for DOS BIOS upgrades etc. remotely, using IPMI+SoL.
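
For the pxelinux route the entry is essentially the same as the grub one
posted earlier, something along these lines (assuming memdisk and
fdboot.img sit in the TFTP root):

LABEL freedos
  KERNEL memdisk
  APPEND initrd=fdboot.img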

If anyone wants to try and reverse-engineer wdidle3, it appears to have
been compiled with Watcom (now open source) and a free DOS extender, and
then packed (again using an open tool). You can run it under the Watcom
disassembler / debugger (which is packaged with FreeDOS).

With all this open source software rattling around Western Digital, it'd
be nice if they just did a bloody Linux version (or at least documented
the vendor-specific commands which they use).

Tim.


Re: Use of WD20EARS with MDADM

am 22.04.2010 18:13:50 von Khelben Blackstaff

> I believe that these disks only come in the "green" variety. I recently
> picked up a 1.5 tb version for testing and cheap bulk storage, and I
> would not suggest using them in a raid array because the green drives
> firmware automatically parks the head after 8 seconds of inactivity and
> reduces the rpm of the disk. The constant parking can quickly wear out
> the head under high use and there is no way to disable this "feature".

As previously mentioned, the wdidle utility can disable the head unloading.

> The specifications say it's good for 300,000 cycles, so do the
> math... getting 5 unloads per minute would lead to probable failure
> after 41 days. Granted that is about worst case, but still something to
> watch out for. In order to make it the entire 3 year warranty period,
> you need to stay under 11.4 unloads per hour. If you have very little
> IO activity, or VERY MUCH, then this is entirely possible, but more
> moderate loads in the middle have been observed to cause hundreds of
> unloads per hour.

WD mentions in the customer help (Answer ID 5357) that these newer drives
were validated to 1M load/unload cycles and not 300K.

>
> If you want to fix it, then wdidle3.exe worked for me. Search for:
>
> wdidle3_1_00.zip
>
It worked fine for me too. I had a Load Cycle Count of 59, 15 mins
after connecting the drive for the first time. After running wdidle
the Count stopped at 104 and never increased again; I now have 11 hours
of power-on time on it. With wdidle 1.00, disabling the timer did
nothing; I had to set the timer to a large value. There is a newer
version though (1.03) that supports these new drives better.
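
For reference, the commonly documented wdidle3 switches are roughly
these (check your version's help screen; behaviour differs between
releases, as noted above):

wdidle3 /R      # report the current idle/park timer
wdidle3 /S300   # set the timer to 300 seconds (reportedly the maximum)
wdidle3 /D      # disable the timer (did not work with 1.00 here)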

Re: Use of WD20EARS with MDADM

am 22.04.2010 20:16:38 von Simon Matthews

On Thu, Apr 22, 2010 at 9:13 AM, Khelben Blackstaff
wrote:
>> I believe that these disks only come in the "green" variety. I recently
>> picked up a 1.5 tb version for testing and cheap bulk storage, and I
>> would not suggest using them in a raid array because the green drives
>> firmware automatically parks the head after 8 seconds of inactivity and
>> reduces the rpm of the disk. The constant parking can quickly wear out
>> the head under high use and there is no way to disable this "feature".
>
> As previously mentioned, the wdidle utility can disable the head unloading.
>
>> The specifications say it's good for 300,000 cycles, so do the
>> math... getting 5 unloads per minute would lead to probable failure
>> after 41 days. Granted that is about worst case, but still something to
>> watch out for. In order to make it the entire 3 year warranty period,
>> you need to stay under 11.4 unloads per hour. If you have very little
>> IO activity, or VERY MUCH, then this is entirely possible, but more
>> moderate loads in the middle have been observed to cause hundreds of
>> unloads per hour.
>
> WD mentions in the customer help (Answer ID 5357) that these newer drives
> were validated to 1M load/unload cycles and not 300K.
>
>>
>> If you want to fix it, then wdidle3.exe worked for me. Search for:
>>
>> wdidle3_1_00.zip
>>

I just looked at a WD green drive that was in a RAID1 set for several
months. The drive is an older model -- WD10EADS, but I think similar.
I did not use the wdidle utility and the S.M.A.R.T. data reports a
load cycle count of 42. The RAID set held user directories, so it was
either under constant access (daytime) or not at all (nighttime, apart
from during backups).

Simon

Re: Use of WD20EARS with MDADM

am 22.04.2010 21:44:22 von Phillip Susi

On 4/22/2010 12:13 PM, Khelben Blackstaff wrote:
> WD mentions in the customer help (Answer ID 5357) that these newer drives
> were validated to 1M load/unload cycles and not 300K.

Weird, I wonder if they just forgot to update the spec sheet for the
drive then since it says 300k.

Re: Use of WD20EARS with MDADM

am 23.04.2010 01:23:22 von Mark Knecht

On Thu, Apr 22, 2010 at 12:44 PM, Phillip Susi wrote:
> On 4/22/2010 12:13 PM, Khelben Blackstaff wrote:
>> WD mentions in the customer help (Answer ID 5357) that these newer drives
>> were validated to 1M load/unload cycles and not 300K.
>
> Weird, I wonder if they just forgot to update the spec sheet for the
> drive then since it says 300k.

Validated doesn't mean spec'ed, right? So they spec it at 300K but
they test and find that some units go to 1M, so they report that some
units go to 1M but they keep the spec at 300K. We feel more
comfortable with the operation but if the drive fails at 400K then
they are within their rights to say the drive lived a useful life and
replace or not replace it as they see fit.

I have 7 of these WD10EARS green drives, one in the machine I reported
on that's 350 miles away, but 6 sitting in boxes here. I think I'll
look into using the WD program to modify one or two of them and then
start some testing.

Keep in mind that just because we possibly solve the load cycle count
problem doesn't mean that the drive will work for RAID. WD has also
stated that these drives don't have any TLER features.
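
For what it's worth, whether a drive exposes error recovery control at
all can be queried with a recent smartctl; on drives without the feature
it simply reports it as unsupported (/dev/sdd is just a placeholder):

smartctl -l scterc /dev/sdd          # query the current read/write ERC timers
smartctl -l scterc,70,70 /dev/sdd    # try to set both timeouts to 7.0 seconds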

I'll report back how that goes, but I don't know exactly when.

Cheers,
Mark

Re: Use of WD20EARS with MDADM

am 23.04.2010 02:03:05 von Richard Scobie

Mark Knecht wrote:

> Keep in mind that just because we possibly solve the load cycle count
> problem doesn't mean that the drive will work for RAID. WD has also
> stated that these drives don't have any TLER features.

I completely agree and wonder if some have unrealistic expectations.

The WD20EARS is listed as a desktop drive and is cheap in $/GB terms.

The RE-GP (if you need the green features, otherwise the RE series)
version, the WD2002FYPS, costs more and has the following extra features
over the WD20EARS:

RAFF - vibration handling
TLER - RAID optimised error handling
An MTBF spec and the comment

"Each drive is put through extended burn-in testing with thermal cycling
to ensure reliable operation."

Which presumably helps weed out infant mortality cases. It is also
interesting to note that while all the RE series drives have a spindle
support bearing at both ends, only the 2TB desktop drive does.

In light of all the above, you get what you pay for and need to adjust
expectations accordingly.

Regards,

Richard


Re: Use of WD20EARS with MDADM

am 23.04.2010 03:29:29 von Mark Knecht

On Thu, Apr 22, 2010 at 5:03 PM, Richard Scobie wrote:
> Mark Knecht wrote:
>
>> Keep in mind that just because we possibly solve the load cycle count
>> problem doesn't mean that the drive will work for RAID. WD has also
>> stated that these drives don't have any TLER features.
>
> I completely agree and wonder if some have unrealistic expectations.
>
> [snip]
>
> In light of all the above, you get what you pay for and need to adjust
> expectations accordingly.
>
> Regards,
>
> Richard



If drives always worked there wouldn't be much reason for RAID1, would
there? ;-)

Cheers,
Mark

Re: Use of WD20EARS with MDADM

am 23.04.2010 05:44:27 von Phillip Susi

On Thu, 2010-04-22 at 16:23 -0700, Mark Knecht wrote:
> Validated doesn't mean spec'ed, right? So they spec it at 300K but
> they test and find that some units go to 1M, so they report that some
> units go to 1M but they keep the spec at 300K. We feel more
> comfortable with the operation but if the drive fails at 400K then
> they are within their rights to say the drive lived a useful life and
> replace or not replace it as they see fit.

Specifications are based on validation; they don't have anything to do
with the warranty. If one drive made it to a million, who cares? The
question is will the majority of them? Just because one drive made it
to a million hours before failing does not mean a million hours is the
MTBF, nor does a drive failing at half MTBF have anything to do with
whether it is still under warranty or not. The warranty is expressed in
real time duration, not hours of operation or head unload count.

The question is, if you have a dozen drives in a raid array, is there a
good chance that none will fail after they hit 300,000 unloads. If the
answer to that is no, and it looks like they will reach 300,000 unloads
in a few months, then you probably don't want to use those drives.

> Keep in mind that just because we possibly solve the load cycle count
> problem doesn't mean that the drive will work for RAID. WD has also
> stated that these drives don't have any TLER features.

TLER sounds like marketing department BS to me. ANY drive should be
able to retry a read plenty of times and give up in less than 7 seconds
without some special feature. I've never used a drive with this
"feature" and never had mdadm kick a drive offline just because it
developed a bad sector. Usually I notice from a smart check and force a
write to the sector and the drive remaps it.
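
Done by hand, that looks roughly like this (the LBA is a made-up
placeholder taken from the SMART self-test log, and the write destroys
that one sector's contents):

smartctl -l selftest /dev/sdb              # the failing LBA shows up in the self-test log
hdparm --read-sector 123456789 /dev/sdb    # confirm it really is unreadable
hdparm --write-sector 123456789 --yes-i-know-what-i-am-doing /dev/sdb
# or let md rewrite it as part of a scrub:
echo repair > /sys/block/md0/md/sync_action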



Re: Use of WD20EARS with MDADM

am 23.04.2010 05:49:12 von Phillip Susi

On Fri, 2010-04-23 at 12:03 +1200, Richard Scobie wrote:
> "Each drive is put through extended burn-in testing with thermal cycling
> to ensure reliable operation."
>
> Which presumably helps weed out infant mortality cases. It is also
> interesting to note that while all the RE series drives have a spindle
> support bearing at both ends, only the 2TB desktop drive does.

Indeed. Pretty standard manufacturing process to weed out the infant
mortality. Interesting to find out that they don't bother with these
drives. That might explain why my first one died after ~24 hours of
operation. What still pisses me off is that it died so hard my bios
wouldn't even post with the thing plugged in, let alone let me read any
SMART data from it.



Re: Use of WD20EARS with MDADM

am 03.05.2010 00:33:18 von Bill Davidsen

Berkey B Walker wrote:
>
>
> Mikael Abrahamsson wrote:
>> On Wed, 21 Apr 2010, Bill Davidsen wrote:
>>
>>> I hear this said, but I don't have any data to back it up. Drive
>>> vendors aren't stupid, so if the parking feature is likely to cause
>>> premature failures under warranty, I would expect that the feature
>>> would not be there, or that the drive would be made more robust.
>>> Maybe I have too much faith in greed as a design goal, but I have to
>>> wonder if load cycles are as destructive as seems to be the assumption.
>>
>> What I think people are worried about is that a drive might have X
>> load/unload cycles in the data sheet (300k or 600k seem to be normal
>> figures) and reaching this in 1-2 years of "normal" (according to the
>> user who is running it 24/7) might be worrying (and understandably so).
>>
>> Otoh these drives seem to be designed for desktop 8 hour per day use,
>> so running them as a 24/7 fileserver under linux is not what they
>> were designed for. I have no idea what will happen when the
>> load/unload cycles goes over the data sheet number, but my guess is
>> that it was put there for a reason.
>>
>>> I'd love to find some real data, anecdotal stories about older
>>> drives are not overly helpful. Clearly there is a trade-off between
>>> energy saving, response, and durability, I just don't have any data
>>> from a large population of new (green) drives.
>>
>> My personal experience from the WD20EADS drives is that around 40% of
>> them failed within the first year of operation. This is not from a
>> large population of drives though and wasn't due to load/unload
>> cycles. I had no problem getting them replaced under warranty, but
>> I'm running RAID6 nowadays :P
>>
> Sorry, you sound like a factory droid. *I* see no reason for early
> failure besides cheap mat'ls in construction. Were these assertions
> of short life to be true, I would campaign against the drive maker.
> (I think that they are just normalizing failure rate against warranty
> claims) Buy good stuff. I *wish* I could define the term by mfg. It
> seems Seagate, & WD don't hack it. The Japanese drives did, but since
> the $ dropped -
>
Let's see, first you put my name on something I was quoting (with
attribution), delete the correct name of the person you are quoting, and
then call me a "factory droid." So I have some idea of your attention to
detail. Second, the short term failure rates are influenced by
components delivered, assembly, and treatment in shipping. So assembly
is controlled by the vendor, parts are influenced by the suppliers selected,
and delivery treatment is usually selected by the retailer. A local
clone maker found that delicate parts delivered on Wednesday had a
higher infant mortality than other days. The regular driver had Wednesday
off; the sub thought "drop ship" was an unloading method, perhaps.

> One thing seemingly missed is the relationship between storage density
> and drive temperature variations. Hard drive mfgs are going to be in deep
> doodoo when the SSD folks get price/perf in the lead lane. This year,
> I predict. And maybe another 2 for long term reliability to be in the
> lead.
>
I think you're an optimist on cost equality; people are changing to
green drives, which are generally slower due to spin-down or lower rpm,
because the cost of power and cooling is important. It's not clear if
current SSD tech will be around in five years, because there are new
technologies coming which are inherently far more stable for multiple
writes. The spinning platter may be ending, but the replacement is not
in sight. In ten years I doubt current SSD tech will be in use, replaced
by phase change, optical isomers, electron spin, or something still in a
lab. And the deployment of user-visible large sectors (write chunks,
whatever) is not clear; if the next tech will work just as well with
smaller sectors, this may become a moot point.

> I believe that many [most?] RAID users are looking for results (long
> term archival) that are not intended in the design. We are about 2
> generations away from that being a reality - I think. For other
> users, I would suggest a mirror machine, with both machines being
> scrubbed daily, and media being dissimilar in mfg and mfg date.
>
It's not clear that date of manufacture is particularly critical, while
date of deployment (in-use hours) probably is. But looking at the
Google disk paper, a crate of drives from the same batch doesn't all
drop dead at once, or close to it, so age in service is a factor, but
not likely a critical factor.
> I can't wait until Neil gets to (has to) play/work with the coming
> tech. Neat things are coming.
>
I would rather see some of the many things on the "someday list" get
implemented. It's more fun to play with new stuff than polish off the
uglies in the old, but the uglies are still there.

--
Bill Davidsen
"We can't solve today's problems by using the same thinking we
used in creating them." - Einstein


Re: Use of WD20EARS with MDADM

am 03.05.2010 02:08:59 von Berkey B Walker

Bill Davidsen wrote:
> Berkey B Walker wrote:
>> Snip...
> Let's see, first you put my name on something I was quoting (with
> attribution), delete the correct name of the person you are quoting,
> and then call me a "factory droid." So I have some idea of your
> attention to detail.
...snip
> I would rather see some of the many things on the "someday list" get
> implemented. It's more fun to play with new stuff than polish off the
> uglies in the old, but the uglies are still there.
>
Items snipped to reduce net bandwidth.

Sir, I have had and have a high regard for your posts. I will gladly
give you a pass for having a seemingly unpleasant weekend. I have gone
through MY archives in this manner and have found absolutely NO
discrepancies between my copy of the posting to which I replied (which
was NOT addressed to you, but was a complete quote), my "sent" post, the
board's posting of that post, or your post to me (board copied).
I should probably just take heat in general for some of my postings
back in my drinking days.

Sorry to have been a part of whatever was going on.

Best to you,
b-


All of the quotes are properly done. The post (read carefully) was
addressed to Mikael Abrahamsson.

Re: Use of WD20EARS with MDADM

am 12.05.2010 15:06:47 von Tim Small

On 21/04/10 20:33, Phillip Susi wrote:
> On 4/21/2010 3:01 PM, Mark Knecht wrote:
>
>> It's unfortunate that this drive doesn't respond to APM commands either.
>>
> Yes it is.... would be nice if they made it respond to the standard APM
> command meant to configure this kind of behavior instead of creating a
> proprietary command and dos utility to invoke it. It's also a shame the
> drive lies about its physical sector size and has no way to turn that
> off. Might be a nice project to reverse engineer this utility on
> windows to figure out the command it sends down and add it to hdparm.
>

I did have a look at this a while ago (it seemed to be built with Watcom
(now open-source) and a public-domain DOS extender along with an
open-source binary compressor - decompilation under Watcom on FreeDOS
seemed possible), and it seems that Mark Lord has done a chunk more
hacking on it, since there is a reference to initial support in the
latest hdparm changelog - which you might want to take a look at....

Cheers,

Tim.