Device utilization with RAID-1
on 16.08.2011 02:39:47 by Harald Nikolisin
For a long time now I have been unhappy with the performance of my RAID-1
system. Investigation with atop and iostat reveals that disk utilization
always sits at a certain level even though nothing is happening on the
system. When files are read or written, utilization spikes to 100% for a
long time. Particularly ugly examples are starting Firefox or running
"zypper update".
This is a snapshot of the output of iostat:
Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda        0,00    0,00   0,00   7,33    0,00   43,33     5,91     0,33  43,18  33,32  24,43
sdb        0,00    0,00   0,00   7,33    0,00   43,33     5,91     0,35  45,59  39,73  29,13
md0        0,00    0,00   0,00   0,67    0,00    5,33     8,00     0,00   0,00   0,00   0,00
md1        0,00    0,00   0,00   0,33    0,00    5,33    16,00     0,00   0,00   0,00   0,00
md2        0,00    0,00   0,00   0,33    0,00    1,00     3,00     0,00   0,00   0,00   0,00
md3        0,00    0,00   0,00   0,00    0,00    0,00     0,00     0,00   0,00   0,00   0,00
md4        0,00    0,00   0,00   0,00    0,00    0,00     0,00     0,00   0,00   0,00   0,00
md5        0,00    0,00   0,00   0,33    0,00    0,67     2,00     0,00   0,00   0,00   0,00
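(The interesting figure here is %util, the last column. As a side note, a
quick way to pull it out of an iostat -x snapshot is a one-field awk print;
the file below is a made-up stand-in for real iostat output, with the sda/sdb
rows from above and decimal commas switched to points for awk's sake:)

```shell
# Extract device name and %util (last column) from an iostat -x snapshot.
# /tmp/iostat.sample is a hypothetical sample, not live output.
cat <<'EOF' > /tmp/iostat.sample
sda 0.00 0.00 0.00 7.33 0.00 43.33 5.91 0.33 43.18 33.32 24.43
sdb 0.00 0.00 0.00 7.33 0.00 43.33 5.91 0.35 45.59 39.73 29.13
EOF
awk '{ printf "%s %.2f%%\n", $1, $NF }' /tmp/iostat.sample
```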
I checked with mdadm whether a resync was in progress, but that is not the
case. The state says "active" on all RAID devices - by the way, what is the
difference to "clean"?
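(For reference, a running resync would show up directly in /proc/mdstat; the
snippet below greps a hypothetical sample of what that looks like mid-resync -
the same grep works against the real /proc/mdstat:)

```shell
# Made-up /proc/mdstat excerpt during a resync; on a live system you would
# run: grep -c resync /proc/mdstat
cat <<'EOF' > /tmp/mdstat.sample
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]
      [==>..................]  resync = 12.6% (13152/104320) finish=0.1min
EOF
grep -c 'resync' /tmp/mdstat.sample
```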
thanks for any hints,
harald
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Device utilization with RAID-1
on 16.08.2011 03:30:18 by Roberto Spadim
try raid10 with far layout
2011/8/15 Harald Nikolisin:
> [...]
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
Re: Device utilization with RAID-1
on 16.08.2011 19:25:13 by Harald Nikolisin
well, I have only 2 hard drives and no space for more..
On 16.08.2011 03:29, Roberto Spadim wrote:
> try raid10 far layout
> [...]
Fwd: Re: Device utilization with RAID-1
on 18.08.2011 02:26:17 by Harald Nikolisin
hi,
I didn't want to complain about software RAID-1 performance in general. I
simply think something is wrong with my setup, and I currently have no
idea how to improve it.
The basic questions (to which I found no answer, neither in FAQs nor in
forum discussions) are:
a) Is it normal that the hard drives show a permanent utilization
(around 20%) without any noticeable activity on the computer?
b) Should the mdadm state be "active" or "clean" (as long as no resync is
happening)?
cheers,
harald
well, I have only 2 hard drives and no space for more..
On 16.08.2011 03:29, Roberto Spadim wrote:
> try raid10 far layout
> [...]
Re: Device utilization with RAID-1
on 18.08.2011 03:42:05 by NeilBrown
On Thu, 18 Aug 2011 02:26:17 +0200 Harald Nikolisin wrote:
> hi,
>
> I didn't want to complain in general about SW RAID-1 performance. I
> simply think something is wrong with my setup and I have currently no
> idea how to improve.
>
> The basic questions (where I did not find an answer, neither in FAQ's
> nor in forum discussions) are.
> a) Is it normal that the hard drives show a permanent utilization
> (around 20%) without any noticeable actions on the computer?
No. If the array is resyncing or recovering then you would expect
utilization for as many hours as it takes - but that would show
in /proc/mdstat.
> b) Should (as long as no resync happens) the state of mdadm be active or clean?
If anything has been written to the device in the last 200 msec (including
e.g. access time updates) then expect it to be 'active'.
If nothing has been written for 200 msec or more, then expect it to be 'clean'.
If you crash while it is active, a resync is needed.
If you crash while it is clean, no resync is needed.
If you don't crash at all .... that is best :-)
NeilBrown
Re: Device utilization with RAID-1
on 18.08.2011 08:44:29 by st0ff
On 18.08.2011 03:42, NeilBrown wrote:
> On Thu, 18 Aug 2011 02:26:17 +0200 Harald Nikolisin wrote:
>
>> hi,
>>
>> I didn't want to complain in general about SW RAID-1 performance. I
>> simply think something is wrong with my setup and I have currently no
>> idea how to improve.
>>
>> The basic questions (where I did not find an answer, neither in FAQ's
>> nor in forum discussions) are.
>> a) Is it normal that the hard drives show a permanent utilization
>> (around 20%) without any noticeable actions on the computer?
>
> No. If the array is resyncing or recovering then you would expect
> utilization for as many hours as it takes - but that would show
> in /proc/mdstat.
>
>> b) Should (as long as no resync happens) the state of mdadm be active or clean?
>
> If anything has been written to the device in the last 200msec (including
> e.g. access time updates) then expect it to be 'active'.
> If nothing has been written for 200 msec or more, then expect it to be clean.
>
> If you crash while it is active, a resync is needed.
> If you crash while it is clean, no resync is needed.
> If you don't crash at all .... that is best :-)
>
> NeilBrown
>
I second that ;) Have you checked the SMART attributes of your disks -
are they still OK? Then again, if they weren't, you wouldn't see the disks
being a bit busier; you would only feel it as bad performance.
Indeed, I think you need to find out which processes create your I/O
load; it sounds like some badly configured service/daemon is slowing down
your whole computer that way... It's probably a good idea to start with
dstat and a wide screen :)
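(If dstat isn't at hand, per-process I/O counters can also be read straight
from /proc - a rough sketch, Linux-only; note the counters are cumulative
since process start, so a real hunt would sample twice and diff:)

```shell
# List PIDs with their cumulative bytes read from storage, using the
# read_bytes counter in /proc/<pid>/io. Unreadable processes are skipped.
for p in /proc/[0-9]*; do
    rb=$(awk '/^read_bytes:/ {print $2}' "$p/io" 2>/dev/null)
    [ -n "$rb" ] && echo "${p#/proc/} read_bytes=$rb"
done | sort -t= -k2 -rn | head -5
```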
Stefan
Re: Device utilization with RAID-1
on 18.08.2011 15:44:19 by CoolCold
On Thu, Aug 18, 2011 at 5:42 AM, NeilBrown wrote:
> On Thu, 18 Aug 2011 02:26:17 +0200 Harald Nikolisin wrote:
> [...]
> If anything has been written to the device in the last 200 msec (including
> e.g. access time updates) then expect it to be 'active'.
> If nothing has been written for 200 msec or more, then expect it to be 'clean'.
>
> If you crash while it is active, a resync is needed.
> If you crash while it is clean, no resync is needed.
> If you don't crash at all .... that is best :-)
I think this info should be wikified, if it isn't already.
btw, I've experimented a bit on my /boot array (it isn't being
updated - checked with iostat), and:
root@m2:~# for i in {1..5}; do mdname="md0"; echo "iteration $i"; \
    (mdadm --detail /dev/$mdname | grep 'State '; \
     cat /sys/block/$mdname/md/array_state; \
     grep "$mdname :" /proc/mdstat); sleep 1; done
iteration 1
State : clean
clean
md0 : active raid1 sda1[0] sdb1[1]
iteration 2
State : clean
clean
md0 : active raid1 sda1[0] sdb1[1]
iteration 3
State : clean
clean
md0 : active raid1 sda1[0] sdb1[1]
iteration 4
State : clean
clean
md0 : active raid1 sda1[0] sdb1[1]
iteration 5
State : clean
clean
md0 : active raid1 sda1[0] sdb1[1]
so, mdadm --detail and array_state show the array as "clean", while
/proc/mdstat shows the array as "active" (no reads/writes are happening).
Either some value is lying, or I'm misunderstanding it...
--
Best regards,
[COOLCOLD-RIPN]
Re: Device utilization with RAID-1
on 19.08.2011 02:31:35 by NeilBrown
On Thu, 18 Aug 2011 17:44:19 +0400 CoolCold wrote:
> [...]
> so, mdadm --detail and array_state show the array as "clean", while
> /proc/mdstat shows the array as "active" (no reads/writes are happening).
>
> Either some value is lying, or I'm misunderstanding it...
In mdstat you have 'active' or 'inactive'. You cannot access an array at all
until it is active. If you are assembling an array bit by bit with "mdadm
-I", it will be inactive until all the devices appear. Then it will be
active.
In mdadm "State :" you have 'active' or 'clean', as described above. It used
to be 'dirty' or 'clean' but people were confused by having 'dirty' arrays in
normal operation. So I changed it to 'active' and now it confuses a
different set of people. You just can't win can you :-)
NeilBrown
Re: Device utilization with RAID-1
on 19.08.2011 20:32:37 by Maurice
On 8/18/2011 6:31 PM, NeilBrown wrote:
> ..
> In mdstat you have 'active' or 'inactive'. You cannot access an array at all
> until it is active. If you are assembling an array bit by bit with "mdadm
> -I", it will be inactive until all the devices appear. Then it will be
> active.
>
> In mdadm "State :" you have 'active' or 'clean'. as described above. It used
> to be 'dirty' or 'clean' but people were confused by having 'dirty' arrays in
> normal operation. So I changed it to 'active' and now it confuses a
> different set of people. You just can't win can you :-)
>
> NeilBrown
mdstat:
"Enabled" or "Disabled" perhaps?
That matches what most commercial hardware RAID interfaces use.
--
Cheers,
Maurice Hilarius
eMail: mhilarius@gmail.com
Re: Device utilization with RAID-1
am 20.08.2011 05:13:03 von John Robinson
On 19/08/2011 19:32, maurice wrote:
> On 8/18/2011 6:31 PM, NeilBrown wrote:
>> ..
>> In mdstat you have 'active' or 'inactive'. You cannot access an array
>> at all
>> until it is active. If you are assembling an array bit by bit with "mdadm
>> -I", it will be inactive until all the devices appear. Then it will be
>> active.
>>
>> In mdadm "State :" you have 'active' or 'clean'. as described above.
>> It used
>> to be 'dirty' or 'clean' but people were confused by having 'dirty'
>> arrays in
>> normal operation. So I changed it to 'active' and now it confuses a
>> different set of people. You just can't win can you :-)
>>
>> NeilBrown
>
> mdstat:
> "Enabled" or "Disabled" perhaps?
>
> That matches what most commercial hardware RAID interfaces use.
Does it? It sounds more like an administrative action than a current
status. I would have thought "online" or "offline" - unless that means
something else somewhere else.
And for mdadm state: how about "busy" and "idle"? Hmm maybe just "busy"
instead of "active" or "dirty"; we don't want to start an array with
--assume-idle...
Cheers,
John.