Performance question, RAID5
Performance question, RAID5
On 29.01.2011 23:48:06 by mathias.buren
Hi,
I'm wondering if the performance I'm getting is OK or if there's
something I can do about it. Also, where the potential bottlenecks
are.
Setup: 6x2TB HDDs, their performance:
/dev/sdb:
Timing cached reads: 1322 MB in 2.00 seconds = 661.51 MB/sec
Timing buffered disk reads: 362 MB in 3.02 seconds = 120.06 MB/sec
/dev/sdc:
Timing cached reads: 1282 MB in 2.00 seconds = 641.20 MB/sec
Timing buffered disk reads: 342 MB in 3.01 seconds = 113.53 MB/sec
/dev/sdd:
Timing cached reads: 1282 MB in 2.00 seconds = 640.55 MB/sec
Timing buffered disk reads: 344 MB in 3.00 seconds = 114.58 MB/sec
/dev/sde:
Timing cached reads: 1328 MB in 2.00 seconds = 664.46 MB/sec
Timing buffered disk reads: 350 MB in 3.01 seconds = 116.37 MB/sec
/dev/sdf:
Timing cached reads: 1304 MB in 2.00 seconds = 651.55 MB/sec
Timing buffered disk reads: 378 MB in 3.01 seconds = 125.62 MB/sec
/dev/sdg:
Timing cached reads: 1324 MB in 2.00 seconds = 661.91 MB/sec
Timing buffered disk reads: 400 MB in 3.00 seconds = 133.15 MB/sec
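(The per-disk figures above are hdparm -Tt output; a sketch of a loop that collects them, assuming the same device names:)
for d in /dev/sd[b-g]; do sudo hdparm -Tt "$d"; done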
These are used in a RAID5 setup:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdf1[0] sdg1[6] sde1[5] sdc1[3] sdd1[4] sdb1[1]
9751756800 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
unused devices:
/dev/md0:
Version : 1.2
Creation Time : Tue Oct 19 08:58:41 2010
Raid Level : raid5
Array Size : 9751756800 (9300.00 GiB 9985.80 GB)
Used Dev Size : 1950351360 (1860.00 GiB 1997.16 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Fri Jan 28 14:55:48 2011
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : ion:0 (local to host ion)
UUID : e6595c64:b3ae90b3:f01133ac:3f402d20
Events : 3035769
Number Major Minor RaidDevice State
0 8 81 0 active sync /dev/sdf1
1 8 17 1 active sync /dev/sdb1
4 8 49 2 active sync /dev/sdd1
3 8 33 3 active sync /dev/sdc1
5 8 65 4 active sync /dev/sde1
6 8 97 5 active sync /dev/sdg1
As you can see they are partitioned. They are all identical like this:
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0e5b3a7a
Device Boot Start End Blocks Id System
/dev/sdb1 2048 3907029167 1953513560 fd Linux raid autodetect
On this I run LVM:
--- Physical volume ---
PV Name /dev/md0
VG Name lvstorage
PV Size 9.08 TiB / not usable 1.00 MiB
Allocatable yes (but full)
PE Size 1.00 MiB
Total PE 9523199
Free PE 0
Allocated PE 9523199
PV UUID YLEUKB-klxF-X3gF-6dG3-DL4R-xebv-6gKQc2
On top of the LVM I have:
--- Volume group ---
VG Name lvstorage
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 9.08 TiB
PE Size 1.00 MiB
Total PE 9523199
Alloc PE / Size 9523199 / 9.08 TiB
Free PE / Size 0 / 0
VG UUID Xd0HTM-azdN-v9kJ-C7vD-COcU-Cnn8-6AJ6hI
And in turn:
--- Logical volume ---
LV Name /dev/lvstorage/storage
VG Name lvstorage
LV UUID 9wsJ0u-0QMs-lL5h-E2UA-7QJa-l46j-oWkSr3
LV Write Access read/write
LV Status available
# open 1
LV Size 9.08 TiB
Current LE 9523199
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1280
Block device 254:1
And on that (sorry) there's the ext4 partition:
/dev/mapper/lvstorage-storage on /raid5volume type ext4
(rw,noatime,barrier=1,nouser_xattr)
Here are the numbers:
/raid5volume $ time dd if=/dev/zero of=./bigfile.tmp bs=1M count=8192
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 94.0967 s, 91.3 MB/s
real 1m34.102s
user 0m0.107s
sys 0m54.693s
/raid5volume $ time dd if=./bigfile.tmp of=/dev/null bs=1M
8192+0 records in
8192+0 records out
8589934592 bytes (8.6 GB) copied, 37.8557 s, 227 MB/s
real 0m37.861s
user 0m0.053s
sys 0m23.608s
I saw that the md0_raid5 process sometimes spikes in CPU usage. This is
an Atom @ 1.6GHz; is that what's limiting the results? Here's
bonnie++:
/raid5volume/temp $ time bonnie++ -d ./ -m ion
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
ion 7G 13726 98 148051 87 68020 41 14547 99 286647 61 404.1 2
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 20707 99 +++++ +++ 25870 99 21242 98 +++++ +++ 25630 100
ion,7G,13726,98,148051,87,68020,41,14547,99,286647,61,404.1,2,16,20707,99,+++++,+++,25870,99,21242,98,+++++,+++,25630,100
real 20m54.320s
user 16m10.447s
sys 2m45.543s
Thanks in advance,
// Mathias
Re: Performance question, RAID5
On 29.01.2011 23:53:52 by Roman Mamedov
On Sat, 29 Jan 2011 22:48:06 +0000
Mathias Burén wrote:
> Hi,
>
> I'm wondering if the performance I'm getting is OK or if there's
> something I can do about it. Also, where the potential bottlenecks
> are.
How are your disks plugged in? Which controller model(s), which bus.
But generally, on an Atom 1.6 GHz those seem like good results.
--
With respect,
Roman
Re: Performance question, RAID5
On 30.01.2011 00:26:44 by CoolCold
You may need to increase stripe cache size
http://peterkieser.com/2009/11/29/raid-mdraid-stripe_cache_size-vs-write-transfer/
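For reference, the tunable lives in sysfs; a minimal sketch, assuming the array is /dev/md0 as above (the kernel default is 256):
cat /sys/block/md0/md/stripe_cache_size          # current value (memory use is value * 4 KiB * number of disks)
echo 8192 > /sys/block/md0/md/stripe_cache_size  # try a larger cache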
On Sun, Jan 30, 2011 at 1:48 AM, Mathias Burén wrote:
> Hi,
>
> I'm wondering if the performance I'm getting is OK or if there's
> something I can do about it. Also, where the potential bottlenecks
> are.
--
Best regards,
[COOLCOLD-RIPN]
Re: Performance question, RAID5
On 30.01.2011 00:44:01 by mathias.buren
On 29 January 2011 22:53, Roman Mamedov wrote:
> On Sat, 29 Jan 2011 22:48:06 +0000
> Mathias Burén wrote:
>
>> Hi,
>>
>> I'm wondering if the performance I'm getting is OK or if there's
>> something I can do about it. Also, where the potential bottlenecks
>> are.
>
> How are your disks plugged in? Which controller model(s), which bus.
> But generally, on an Atom 1.6 GHz those seem like good results.
>
> --
> With respect,
> Roman
>
Hi,
Sorry, of course I should've included that. Here's the info:
~/bin $ sudo ./drivescan.sh
Controller device @ pci0000:00/0000:00:0b.0 [ahci]
SATA controller: nVidia Corporation MCP79 AHCI Controller (rev b1)
host0: /dev/sda ATA Corsair CSSD-F60 {SN: 10326505580009990027}
host1: /dev/sdb ATA WDC WD20EARS-00M {SN: WD-WCAZA1022443}
host2: /dev/sdc ATA WDC WD20EARS-00M {SN: WD-WMAZ20152590}
host3: /dev/sdd ATA WDC WD20EARS-00M {SN: WD-WMAZ20188479}
host4: [Empty]
host5: [Empty]
Controller device @ pci0000:00/0000:00:16.0/0000:05:00.0 [sata_mv]
SCSI storage controller: HighPoint Technologies, Inc. RocketRAID
230x 4 Port SATA-II Controller (rev 02)
host6: [Empty]
host7: /dev/sde ATA SAMSUNG HD204UI {SN: S2HGJ1RZ800964 }
host8: /dev/sdf ATA WDC WD20EARS-00M {SN: WD-WCAZA1000331}
host9: /dev/sdg ATA SAMSUNG HD204UI {SN: S2HGJ1RZ800850 }
It's all SATA 3Gb/s. OK, so from what you're saying I should see
significantly better results on a better CPU? The HDDs should be able
to push 80MB/s (read or write), and that should yield at least 5*80 =
400MB/s (-1 for parity) on easy (sequential?) reads.
// Mathias
Re: Performance question, RAID5
On 30.01.2011 00:57:06 by Roman Mamedov
On Sat, 29 Jan 2011 23:44:01 +0000
Mathias Burén wrote:
> Controller device @ pci0000:00/0000:00:16.0/0000:05:00.0 [sata_mv]
> SCSI storage controller: HighPoint Technologies, Inc. RocketRAID
> 230x 4 Port SATA-II Controller (rev 02)
> host6: [Empty]
> host7: /dev/sde ATA SAMSUNG HD204UI {SN: S2HGJ1RZ800964 }
> host8: /dev/sdf ATA WDC WD20EARS-00M {SN: WD-WCAZA1000331}
> host9: /dev/sdg ATA SAMSUNG HD204UI {SN: S2HGJ1RZ800850 }
Does this controller support PCI-E 2.0? I doubt it.
Does your Atom mainboard support PCI-E 2.0? I highly doubt it.
And if PCI-E 1.0/1.1 is used, these last 3 drives are limited to 250 MB/sec
in total, which in reality will be closer to 200 MB/sec.
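(One way to check what was actually negotiated, a sketch assuming the controller sits at PCI address 05:00.0 as in the lspci output elsewhere in this thread; the LnkSta line shows the live link speed and width:)
sudo lspci -vv -s 05:00.0 | grep -i lnk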
> It's all SATA 3Gb/s. OK, so from what you're saying I should see
> significantly better results on a better CPU? The HDDs should be able
> to push 80MB/s (read or write), and that should yield at least 5*80 =
> 400MB/s (-1 for parity) on easy (sequential?) reads.
According to the hdparm benchmark, your CPU can not read faster than 640
MB/sec from _RAM_, and that's just plain easy linear data from a buffer. So it
is perhaps not promising with regard to whether you will get 400MB/sec reading
from RAID5 (with all the corresponding overheads) or not.
--
With respect,
Roman
Re: Performance question, RAID5
On 30.01.2011 01:15:46 by Stan Hoeppner
Roman Mamedov put forth on 1/29/2011 5:57 PM:
> According to the hdparm benchmark, your CPU can not read faster than 640
> MB/sec from _RAM_, and that's just plain easy linear data from a buffer. So it
> is perhaps not promising with regard to whether you will get 400MB/sec reading
> from RAID5 (with all the corresponding overheads) or not.
It's also not promising given that 4 of his 6 drives are WDC-WD20EARS, which
suck harder than a Dirt Devil at 240 volts, and the fact his 6 drives don't
match. Sure, you say "Non matching drives are what software RAID is for right?"
Wrong, if you want best performance.

About the only things that might give you a decent boost at this point are some
EXT4 mount options in /etc/fstab: data=writeback,barrier=0

The first eliminates strict write ordering. The second disables write barriers,
so the drive's caches don't get flushed by Linux, and instead work as the
firmware intends. The first of these is safe. The second may cause some
additional data loss if writes are in flight when the power goes out or the
kernel crashes. I'd recommend adding both to fstab, reboot and run your tests.
Post the results here.

If you have a decent UPS and auto shutdown software to down the system when the
battery gets low during an outage, keep these settings if they yield
substantially better performance.
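For example, an fstab line along these lines (a sketch only; device and mount point are taken from earlier in the thread, and barrier=0 only makes sense if the UPS condition above is met):
/dev/mapper/lvstorage-storage  /raid5volume  ext4  noatime,nouser_xattr,data=writeback,barrier=0  0  2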
--
Stan
Re: Performance question, RAID5
On 30.01.2011 01:18:34 by mathias.buren
On 29 January 2011 23:26, CoolCold wrote:
> You may need to increase stripe cache size
> http://peterkieser.com/2009/11/29/raid-mdraid-stripe_cache_size-vs-write-transfer/
I ran the benchmark found on the page (except for writes); results:
stripe_cache_size: 256 (1/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 63.4351 s, 271 MB/s
stripe_cache_size: 256 (2/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 59.8224 s, 287 MB/s
stripe_cache_size: 256 (3/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 62.1066 s, 277 MB/s
stripe_cache_size: 512 (1/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 59.6833 s, 288 MB/s
stripe_cache_size: 512 (2/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 60.3497 s, 285 MB/s
stripe_cache_size: 512 (3/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 59.7853 s, 287 MB/s
stripe_cache_size: 768 (1/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 59.5398 s, 288 MB/s
stripe_cache_size: 768 (2/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 60.1769 s, 285 MB/s
stripe_cache_size: 768 (3/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 60.5354 s, 284 MB/s
stripe_cache_size: 1024 (1/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 60.1814 s, 285 MB/s
stripe_cache_size: 1024 (2/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 61.6288 s, 279 MB/s
stripe_cache_size: 1024 (3/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 61.9942 s, 277 MB/s
stripe_cache_size: 2048 (1/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 61.177 s, 281 MB/s
stripe_cache_size: 2048 (2/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 61.3905 s, 280 MB/s
stripe_cache_size: 2048 (3/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 61.0274 s, 281 MB/s
stripe_cache_size: 4096 (1/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 62.607 s, 274 MB/s
stripe_cache_size: 4096 (2/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 63.1505 s, 272 MB/s
stripe_cache_size: 4096 (3/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 61.4747 s, 279 MB/s
stripe_cache_size: 8192 (1/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 62.0839 s, 277 MB/s
stripe_cache_size: 8192 (2/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 62.7944 s, 274 MB/s
stripe_cache_size: 8192 (3/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 61.4443 s, 280 MB/s
stripe_cache_size: 16834 (1/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 61.9554 s, 277 MB/s
stripe_cache_size: 16834 (2/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 63.8002 s, 269 MB/s
stripe_cache_size: 16834 (3/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 62.2772 s, 276 MB/s
stripe_cache_size: 32768 (1/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 62.4692 s, 275 MB/s
stripe_cache_size: 32768 (2/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 61.6707 s, 279 MB/s
stripe_cache_size: 32768 (3/3)
5460+0 records in
5460+0 records out
17175674880 bytes (17 GB) copied, 63.4744 s, 271 MB/s
It looks like a small stripe cache is favoured here.
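(For reference, a sweep like the one above can be driven by a loop along these lines; a sketch only - the 17175674880 bytes in the output correspond to bs=3M count=5460, but reading the LV device directly rather than a file is an assumption:)
for size in 256 512 768 1024 2048 4096 8192 16834 32768; do   # 16834 as printed above
    echo "$size" > /sys/block/md0/md/stripe_cache_size
    for run in 1 2 3; do
        echo "stripe_cache_size: $size ($run/3)"
        dd if=/dev/lvstorage/storage of=/dev/null bs=3M count=5460
    done
done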
// Mathias
Re: Performance question, RAID5
On 30.01.2011 01:27:22 by mathias.buren
On 29 January 2011 23:57, Roman Mamedov wrote:
> Does this controller support PCI-E 2.0? I doubt it.
> Does your Atom mainboard support PCI-E 2.0? I highly doubt it.
> And if PCI-E 1.0/1.1 is used, these last 3 drives are limited to 250 MB/sec
> in total, which in reality will be closer to 200 MB/sec.
Ah, right. The Ion platform actually supports PCI-E 2.0, but the
controller I'm using doesn't, according to lspci, if I understand it
correctly. SATA controller:
05:00.0 SCSI storage controller: HighPoint Technologies, Inc.
RocketRAID 230x 4 Port SATA-II Controller (rev 02)
Subsystem: Marvell Technology Group Ltd. Device 11ab
[....]
Capabilities: [60] Express (v1) Legacy Endpoint, MSI 00
PCI-Express bridge:
00:18.0 PCI bridge: nVidia Corporation MCP79 PCI Express Bridge (rev b1) (prog-if 00 [Normal decode])
Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
[...]
Capabilities: [80] Express (v2) Root Port (Slot+), MSI 00
That might explain why the different stripe caches didn't have any
effect either. Thanks for pointing that out, apparently I didn't think
about it when I purchased the (super cheap) card.
// Mathias
Re: Performance question, RAID5
On 30.01.2011 01:33:48 by mathias.buren
On 30 January 2011 00:15, Stan Hoeppner wrote:
> About the only things that might give you a decent boost at this point are some
> EXT4 mount options in /etc/fstab: data=writeback,barrier=0
> [...]
> If you have a decent UPS and auto shutdown software to down the system when the
> battery gets low during an outage, keep these settings if they yield
> substantially better performance.
Right. I wasn't using the writeback option. I won't disable barriers
as I've no UPS. I've seen the stripe= ext4 mount option, from
http://www.mjmwired.net/kernel/Documentation/filesystems/ext4.txt :
287 stripe=n Number of filesystem blocks that mballoc will try
288 to use for allocation size and alignment. For RAID5/6
289 systems this should be the number of data
290 disks * RAID chunk size in file system blocks.
I suppose in my case, number of data disks is 5, RAID chunk size is
64KB, file system block size is 4KB. This is on top of LVM, I don't
know how that affects the situation. So, the mount option would be
stripe=80? (5*64/4)
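(A sketch of how that would be applied, assuming the arithmetic above; whether stripe= takes effect on a plain remount is an assumption, a clean umount/mount being the safe route:)
# 5 data disks * 64 KiB chunk / 4 KiB filesystem block = 80 blocks per stripe
mount -o remount,noatime,nouser_xattr,stripe=80 /raid5volume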
// Mathias
Re: Performance question, RAID5
On 30.01.2011 02:52:31 by Keld Simonsen
On Sat, Jan 29, 2011 at 11:44:01PM +0000, Mathias Burén wrote:
> ~/bin $ sudo ./drivescan.sh
> Controller device @ pci0000:00/0000:00:0b.0 [ahci]
> SATA controller: nVidia Corporation MCP79 AHCI Controller (rev b1)
>   host0: /dev/sda ATA Corsair CSSD-F60 {SN: 10326505580009990027}
>   host1: /dev/sdb ATA WDC WD20EARS-00M {SN: WD-WCAZA1022443}
>   host2: /dev/sdc ATA WDC WD20EARS-00M {SN: WD-WMAZ20152590}
>   host3: /dev/sdd ATA WDC WD20EARS-00M {SN: WD-WMAZ20188479}
>   host4: [Empty]
>   host5: [Empty]
Hmm, it seems like you have 2 empty slots on the on-board SATA
controller. Try to move 2 of the disks from the other controller to the
on-board controller.
And I would also avoid LVM. I think LVM affects striping.
best regards
Keld
Re: Performance question, RAID5
On 30.01.2011 02:54:42 by mathias.buren
On 30 January 2011 01:52, Keld Jørn Simonsen wrote:
> Hmm, it seems like you have 2 empty slots on the on-board SATA
> controller. Try to move 2 of the disks from the other controller to the
> on-board controller.
>
> And I would also avoid LVM. I think LVM affects striping.
Sadly the 2 empty slots are not to be found on the motherboard, I
guess they're in the chipset only.
// Mathias
Re: Performance question, RAID5
On 30.01.2011 05:44:44 by Roman Mamedov
On Sun, 30 Jan 2011 00:18:34 +0000
Mathias Burén wrote:
> I ran the benchmark found on the page (except for writes); results:
That's kinda unfortunate, as stripe_cache_size only (or mostly) affects writes.
--
With respect,
Roman
Re: Performance question, RAID5
On 30.01.2011 06:56:16 by Keld Simonsen
On Sun, Jan 30, 2011 at 01:54:42AM +0000, Mathias Burén wrote:
> Sadly the 2 empty slots are not to be found on the motherboard, I
> guess they're in the chipset only.
maybe then use the 5th drive on the sata on-board controller in the
raid5 - the sda drive. If the raid5 is where you want performance from.

what do you use sda for? Your OS? It is a lot of space to use just for
the OS. It could easily go into the raid5 too. And you could use a raid
for the system too, to secure you from bad things happening to your
system.

Or you could have a few 5 to 10 GB partitions in the beginning of
each drive, for experimenting with raid layout and performance.
This should be outside any LVM to exclude LVM having an impact on
the tests.

Maybe your PCI-e cannot do more than 2.5 Gbit - then 2 of your disks
would be enough to fill that connection. You could try out
a raid0 on the 3 drives. If you cannot get more than about 300 MB/s, then
the PCI-E is a bottleneck.
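(A non-destructive way to get roughly the same answer is to read the three PCI-E-attached drives in parallel and add up the rates; a sketch, with device names taken from the listing earlier in the thread:)
# if the combined rate tops out around 200-250 MB/s, the PCI-E x1 link,
# not the drives, is the limit
for d in /dev/sde /dev/sdf /dev/sdg; do
    dd if="$d" of=/dev/null bs=1M count=2048 iflag=direct &
done
wait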
If that is so, then having 3 drives from the PCI-E could slow down the
whole raid5, and using only 2 drives could speed up the full raid5.

The on-board sata controller is normally much faster, having a direct
connection to the southbridge - and typically a speed in the neighbourhood of
20 Gbit - or 2500 MB/s - which would be enough for many systems to not
be the bottleneck. It can often pay off to have a motherboard with two
on-board sata controllers with in total 8 SATA ports or more,
instead of buying an extra PCI-E controller.

Looking forward to hearing what you find out.
best regards
keld
Re: Performance question, RAID5
On 30.01.2011 13:09:02 by mathias.buren
On 30 January 2011 04:44, Roman Mamedov wrote:
> On Sun, 30 Jan 2011 00:18:34 +0000
> Mathias Burén wrote:
>
>> I ran the benchmark found on the page (except for writes); results:
>
> That's kinda unfortunate, as stripe_cache_size only (or mostly) affects writes.
>
> --
> With respect,
> Roman
>
Right, it's just that I don't want to destroy my data. I've run a few
bonnie++ benchmarks with different mount options though. You can find
them here: http://stuff.dyndns.org/logs/bonnie_results.html
It actually looks like stripe=384 helped performance a bit. Currently
retrying the same mount options but with 32MB stripe cache instead of
8MB.
Then you have all the readahead settings as well, like:
blockdev --setra 8192 /dev/sd[abcdefgh]
blockdev --setra 65536 /dev/md0
And disabling NCQ:
for i in sdb sdc sdd sde sdf sdg; do echo 1 > /sys/block/"$i"/device/queue_depth; done
I'll try those settings later.
// Mathias
Re: Performance question, RAID5
On 30.01.2011 13:12:36 by mathias.buren
2011/1/30 Keld Jørn Simonsen :
> maybe then use the 5th drive on the sata on-board controller in the
> raid5 - the sda drive. If the raid5 is where you want performance from.
>
> what do you use sda for? Your OS? It is a lot of space to use just for
> the OS. It could easily go into the raid5 too.
Ah, good point. The sda is a 60GB SSD, I should definitely move that
to the PCI-E card, as it doesn't do heavy IO (just small random r/w).
Then I can have 4 RAID HDDs on the onboard ctrl, and 2 on the PCI-E
shared with the SSD. If the SSD is idle then I should get ideal
throughputs.
Thanks to mdadm and /dev/disk/by-label/ I should be fine with just
swapping the SATA cables around actually, without having to change any
configs. I'll try that later on and let you know if it affects
performance anything. Good catch!
I'm not prepared to mess around with partitions on the RAID drives, as
I have data on them I wish to keep.
// Mathias
Re: Performance question, RAID5
On 30.01.2011 13:15:33 by Roman Mamedov
On Sun, 30 Jan 2011 12:09:02 +0000
Mathias Burén wrote:
> Right, it's just that I don't want to destroy my data. I've run a few
> bonnie++ benchmarks with different mount options though. You can find
> them here: http://stuff.dyndns.org/logs/bonnie_results.html
> It actually looks like stripe=384 helped performance a bit. Currently
> retrying the same mount options but with 32MB stripe cache instead of
> 8MB.
Be aware that it's not just 32MB of RAM, it's

"stripe_cache_size * 4096 (page size) * number of disks".
In other words on 6 disks this stripe cache will consume 768 MB of RAM.
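(As a quick check of that formula, assuming the "32MB" setting corresponds to stripe_cache_size=32768:)
echo $(( 32768 * 4096 * 6 / 1024 / 1024 ))   # prints 768 (MiB)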
--
With respect,
Roman
Re: Performance question, RAID5
On 30.01.2011 20:41:47 by mathias.buren
On 30 January 2011 12:15, Roman Mamedov wrote:
> Be aware that it's not just 32MB of RAM, it's
>
> "stripe_cache_size * 4096 (page size) * number of disks".
>
> In other words on 6 disks this stripe cache will consume 768 MB of RAM.
Thanks. New results up for those interested:
http://stuff.dyndns.org/logs/bonnie_results.html
It looks like the best performance (for me) is gained using
rw,noatime,nouser_xattr,data=writeback,stripe=384, 8192
stripe_cache_size, NCQ turned on (31), md0 readahead of 65536. I did
switch place between the SSD and a HDD so there's only 2 HDDs on the
PCI-E controller now, that are part of the RAID. The other 4 are on
the internal SATA controller.
csv format:
"Host","Chunk size","Sequential Output",,,,,,"Sequential
Input",,,,"Random Seeks",,"Files","Sequential Create",,,,,,"Random
Create",,,,,,"Mount options","stripe_cache_size","NCQ","md0
readahead","Comment"
,,"Per Chr",,"Block",,"Rewrite",,"Per
Chr",,"Block",,"Seeks",,,"Create",,"Read",,"Delete",,"Create ",,"Read",,=
"Delete",,,,,,
,,"K/sec","CPU","K/sec","CPU","K/sec","CPU","K/sec","CPU","K /sec","CPU"=
,"/sec","CPU",,"/sec","CPU","/sec","CPU","/sec","CPU","/sec" ,"CPU","/se=
c","CPU","/sec","CPU",,,,,
"ion","7G",13916,98,158905,91,71718,39,14539,99,295079,57,48 2.8,3,16,20=
747,99,"+++++","+++",24077,93,21249,99,"+++++","+++",25854,9 9,"rw,noati=
me,nouser_xattr,data=3Dwriteback,stripe=3D384",8192,31,65536 ,"4
on host 2 on pci-e"
A bit messy.
I did find another PCI-E SATA controller that is generation 2.0, and
looks like it may do the trick. This one:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816115053
"HighPoint RocketRAID 2640x1 PCI-Express x1 Four-Port SATA and SAS
RAID Controller Card".
It's ~€110 on eBay, a bit hefty. I might just save up and build a NAS
box from scratch, with some mainboard which has 8 SATA from the start
etc.
// Mathias
Re: Performance question, RAID5
On 30.01.2011 20:44:04 by Stan Hoeppner
Mathias Burén put forth on 1/30/2011 6:12 AM:
> Thanks to mdadm and /dev/disk/by-label/ I should be fine with just
> swapping the SATA cables around actually, without having to change any
> configs. I'll try that later on and let you know if it affects
> performance anything. Good catch!
Be sure to change the mobo and PCIe card BIOS boot order after moving the SSD
cable to the PCIe card or the machine won't boot. Also, before swapping cables,
make sure you have the PCIe card chipset driver built into your kernel or
properly built into your initrd image. If not you still won't boot.
--
Stan
Re: Performance question, RAID5
am 30.01.2011 20:46:46 von mathias.buren
On 30 January 2011 19:44, Stan Hoeppner wrote:
> Mathias Burén put forth on 1/30/2011 6:12 AM:
>
>> Thanks to mdadm, and /dev/disks/by-label/ I should be fine with just
>> swapping the SATA cables around actually, without having to change any
>> configs. I'll try that later on and let you know if it affects
>> performance anything. Good catch!
>
> Be sure to change the mobo and PCIe card BIOS boot order after moving the SSD
> cable to the PCIe card or the machine won't boot.  Also, before swapping cables,
> make sure you have the PCIe card chipset driver built into your kernel or
> properly built into your initrd image.  If not you still won't boot.
>
> --
> Stan
>
I forgot to mention it, but I already swapped the cables. I had to
change a few things (reinstall GRUB, change menu.lst) but otherwise I
was OK. Strange thing is that my mainboard doesn't seem to want to
boot off a drive that's connected to the Highpoint controller. Just
sits there. However, using GRUB from a USB stick enables me to just do
root (hd2,0), then chainload + and load the GRUB of the SSD. Not
elegant, but it works (for now). Will investigate more later...
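(For anyone trying the same trick: the GRUB-legacy commands from the USB stick's console would be roughly the following; the drive numbering is a guess and depends on how the BIOS enumerates the disks.)
grub> root (hd2,0)
grub> chainloader +1
grub> boot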
// Mathias
Re: Performance question, RAID5
am 30.01.2011 20:54:44 von Roman Mamedov
On Sun, 30 Jan 2011 19:41:47 +0000
Mathias Burén wrote:
> I did find another PCI-E SATA controller that is generation 2.0, and
> looks like it may do the trick. This one:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816115053
> "HighPoint RocketRAID 2640x1 PCI-Express x1 Four-Port SATA and SAS
> RAID Controller Card".
>
> It's ~€110 on eBay, a bit hefty. I might just save up and build a NAS
> box from scratch, with some mainboard which has 8 SATA from the start
> etc.
This one is cheaper:
http://www.dealextreme.com/p/lsi-sas3041e-r-4-port-sas-sata-host-bus-adapter-51317
Doesn't matter if it's v1.x or v2.0, since it's x4, it will have enough
bandwidth either way.
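(Rough figures for context, from the PCIe specs rather than anything measured on this box:)
  PCIe 1.x: ~250 MB/s per lane  ->  x4 slot ~1000 MB/s
  PCIe 2.0: ~500 MB/s per lane  ->  x1 slot  ~500 MB/s
  6 disks * ~120 MB/s           ->  ~720 MB/s aggregate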
--
With respect,
Roman
Re: Performance question, RAID5
am 30.01.2011 20:58:24 von mathias.buren
On 30 January 2011 19:54, Roman Mamedov wrote:
> On Sun, 30 Jan 2011 19:41:47 +0000
> Mathias Burén wrote:
>
>> I did find another PCI-E SATA controller that is generation 2.0, and
>> looks like it may do the trick. This one:
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816115053
>> "HighPoint RocketRAID 2640x1 PCI-Express x1 Four-Port SATA and SAS
>> RAID Controller Card".
>>
>> It's ~€110 on eBay, a bit hefty. I might just save up and build a NAS
>> box from scratch, with some mainboard which has 8 SATA from the start
>> etc.
>
> This one is cheaper:
> http://www.dealextreme.com/p/lsi-sas3041e-r-4-port-sas-sata-host-bus-adapter-51317
> Doesn't matter if it's v1.x or v2.0, since it's x4, it will have enough
> bandwidth either way.
>
> --
> With respect,
> Roman
>
Sorry, my current mainboard (it's a Zotac Ion/Atom ITX) only has one
x1 connector (2.0), hence my limited choice of cards.
// Mathias
Re: Performance question, RAID5
am 30.01.2011 21:03:06 von Stan Hoeppner
Mathias Burén put forth on 1/30/2011 1:41 PM:
> Thanks. New results up for those interested:
> http://stuff.dyndns.org/logs/bonnie_results.html
Three things I notice:
1.  You're CPU bound across the board, for all the tests that matter anyway
2.  Because of this, your performance spread is less than 5% across the board
    meaning any optimizations are useless
3.  Is that a 7 Gigabyte chunk size?  That's totally unrealistic.  It should be
    less than 1 MB for almost all workloads.
According to those numbers, you can swap SATA controllers and PCIe bus slot
assignments all day long, but you'll gain nothing without a faster CPU.
Why are you using a 7 Gigabyte chunk size?  And if the other OP was correct
about the 768MB stripe cache, that's totally unrealistic as well.  And in real
world use, you don't want a high readahead setting.  It just wastes buffer cache
memory for no gain (except maybe in some synthetic benchmarks).
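(A quick way to inspect and dial back the readahead being discussed, assuming the same device name as above:)
$ blockdev --getra /dev/md0         (prints current readahead in 512-byte sectors)
# blockdev --setra 4096 /dev/md0    (e.g. 2 MiB, then re-test with a real workload)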
--
Stan
Re: Performance question, RAID5
am 30.01.2011 22:43:02 von mathias.buren
On 30 January 2011 20:03, Stan Hoeppner wrote:
> Mathias Burén put forth on 1/30/2011 1:41 PM:
>
>> Thanks. New results up for those interested:
>> http://stuff.dyndns.org/logs/bonnie_results.html
>
> Three things I notice:
>
> 1.  You're CPU bound across the board, for all the tests that matter anyway
> 2.  Because of this, your performance spread is less than 5% across the board
>     meaning any optimizations are useless
> 3.  Is that a 7 Gigabyte chunk size?  That's totally unrealistic.  It should be
>     less than 1 MB for almost all workloads.
>
> According to those numbers, you can swap SATA controllers and PCIe bus slot
> assignments all day long, but you'll gain nothing without a faster CPU.
>
> Why are you using a 7 Gigabyte chunk size?  And if the other OP was correct
> about the 768MB stripe cache, that's totally unrealistic as well.  And in real
> world use, you don't want a high readahead setting.  It just wastes buffer cache
> memory for no gain (except maybe in some synthetic benchmarks).
>
> --
> Stan
>
CPU bound, got it.
3) 7GB output from bonnie++. Like so:
$ time bonnie++ -m ion -d ./
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ion              7G 13455  96 150827  92 67719  43 14439  99 271832  59 469.6   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 20318  99 +++++ +++ 25428  99 21095  99 +++++ +++ 25278  99
ion,7G,13455,96,150827,92,67719,43,14439,99,271832,59,469.6,3,16,20318,99,+++++,+++,25428,99,21095,99,+++++,+++,25278,99
real 21m7.872s
user 16m17.836s
sys 2m51.405s
(after ion)
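(The 7G there is just the test file-set size bonnie++ picked - twice the machine's RAM by default - not a RAID chunk size. To pin it down explicitly, something along these lines should work; the directory is only an example:)
$ bonnie++ -m ion -d /raid5volume -s 7168 -n 16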
// Mathias
Re: Performance question, RAID5
am 31.01.2011 04:39:39 von Stan Hoeppner
Mathias Burén put forth on 1/30/2011 3:43 PM:
> 3) 7GB output from bonnie++. Like so:
The column in your graph http://stuff.dyndns.org/logs/bonnie_results.html says
"Chunk size".  I'm not a bonnie++ user so my apologies for not being familiar
with it.  I use iozone instead (on rare occasion) which I like much better.
--
Stan
Re: Performance question, RAID5
am 31.01.2011 04:54:11 von Roberto Spadim
try the stress program, and sysstat
it makes a nice stress test over the filesystem (maybe useful for you,
especially if you're running a database on the partition or something similar)
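(A minimal run along those lines, assuming the stress and sysstat packages are installed:)
$ stress --hdd 4 --hdd-bytes 1G --timeout 120s   (four workers doing write/unlink on the filesystem)
$ iostat -x 5                                    (from sysstat: per-disk utilisation and await)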
2011/1/31 Stan Hoeppner :
> Mathias Burén put forth on 1/30/2011 3:43 PM:
>
>> 3) 7GB output from bonnie++. Like so:
>
> The column in your graph http://stuff.dyndns.org/logs/bonnie_results.html says
> "Chunk size".  I'm not a bonnie++ user so my apologies for not being familiar
> with it.  I use iozone instead (on rare occasion) which I like much better.
>
> --
> Stan
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
Re: Performance question, RAID5
am 31.01.2011 09:52:02 von Keld Simonsen
If your installation is CPU bound, and you are
using an Atom N270 processor or the like, well some ideas:
The Atom CPU may have threading, so you could run 2 RAIDs
which then probably would run in each thread.
It would cost you 1 more disk if you run 2 RAID5's
so you get 8 TB payload out of your 12 TB total (6 drives of 2 TB each).
Another way to get better performance could be to use less
CPU-intensive RAID types. RAID5 is intensive as it needs to
calculate XOR information all the time. Maybe a mirrored
raid type like RAID10,f2 would give you less CPU usage,
and then run 2 RAIDs to have it running in both hyperthreads.
Here you would then only get 6 TB payload of your 12 TB disks,
but then also probably a faster system.
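(A sketch of what that split could look like - hypothetical device grouping, and it would mean rebuilding the array from scratch and restoring from backup:)
# mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=3 /dev/sd[bcd]1
# mdadm --create /dev/md2 --level=10 --layout=f2 --raid-devices=3 /dev/sd[efg]1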
Best regards
keld
Re: Performance question, RAID5
am 31.01.2011 10:37:46 von mathias.buren
On 31 January 2011 08:52, Keld Jørn Simonsen wrote:
> If your installation is CPU bound, and you are
> using an Atom N270 processor or the like, well some ideas:
>
> The Atom CPU may have threading, so you could run 2 RAIDs
> which then probably would run in each thread.
> It would cost you 1 more disk if you run 2 RAID5's
> so you get 8 TB payload out of your 12 TB total (6 drives of 2 TB each).
>
> Another way to get better performance could be to use less
> CPU-intensive RAID types. RAID5 is intensive as it needs to
> calculate XOR information all the time. Maybe a mirrored
> raid type like RAID10,f2 would give you less CPU usage,
> and then run 2 RAIDs to have it running in both hyperthreads.
> Here you would then only get 6 TB payload of your 12 TB disks,
> but then also probably a faster system.
>
> Best regards
> keld
>
Hi,
It's interesting what you say about the XOR calculations. I thought
that it was only calculated on writes? The Atom (330) has HT, so Linux
sees 4 logical CPUs.
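(The kernel benchmarks its XOR routines at boot and logs the winner, so it is easy to see what raid5 will use, e.g.:)
$ dmesg | grep -i xor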
// Mathias
Re: Performance question, RAID5
am 31.01.2011 14:11:31 von Keld Simonsen
On Mon, Jan 31, 2011 at 09:37:46AM +0000, Mathias Burén wrote:
> On 31 January 2011 08:52, Keld Jørn Simonsen wrote:
> > [...]
>
> Hi,
>
> It's interesting what you say about the XOR calculations. I thought
> that it was only calculated on writes? The Atom (330) has HT, so Linux
> sees 4 logical CPUs.
Yes you are right, it only calculates XOR on writes with RAID5.
But then I am puzzled what all these CPU cycles are used for.
Also many cycles are used on mirrored raid types. Why?
Maybe some is because of LVM? I have been puzzled for a long time why
ordinary RAID without LVM needs to use so much CPU. Maybe a lot of data
shuffling between buffers? Neil?
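(One way to see where the cycles actually go is to watch the md and writeback kernel threads directly; the thread names vary a little between kernels:)
$ top -H            (per-thread view; look for md0_raid5 and the flush-9:0 writeback thread)
$ mpstat -P ALL 5   (from sysstat: %usr / %sys / %iowait per logical CPU)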
Best regards
Keld
Re: Performance question, RAID5
am 31.01.2011 15:43:56 von Roberto Spadim
i think it's cpu wait i/o
2011/1/31 Keld Jørn Simonsen :
> On Mon, Jan 31, 2011 at 09:37:46AM +0000, Mathias Burén wrote:
> [...]
>
> Yes you are right, it only calculates XOR on writes with RAID5.
> But then I am puzzled what all these CPU cycles are used for.
> Also many cycles are used on mirrored raid types. Why?
> Maybe some is because of LVM? I have been puzzled for a long time why
> ordinary RAID without LVM needs to use so much CPU. Maybe a lot of data
> shuffling between buffers? Neil?
>
> Best regards
> Keld
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
Re: Performance question, RAID5
am 31.01.2011 19:44:17 von Keld Simonsen
On Mon, Jan 31, 2011 at 12:43:56PM -0200, Roberto Spadim wrote:
> i think it's cpu wait i/o
Well, you better be sure. Please have a closer look.
Normally saying that a process is CPU bound means that
it is not IO bound.
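(vmstat makes the distinction visible while a benchmark runs:)
$ vmstat 5    (high us/sy with low wa = genuinely CPU bound; high wa = waiting on the disks)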
Best regards
keld
> 2011/1/31 Keld Jørn Simonsen :
> [...]
>
> --
> Roberto Spadim
> Spadim Technology / SPAEmpresarial
Re: Performance question, RAID5
am 31.01.2011 21:42:57 von NeilBrown
On Mon, 31 Jan 2011 14:11:31 +0100 Keld Jørn Simonsen wrote:
> On Mon, Jan 31, 2011 at 09:37:46AM +0000, Mathias Burén wrote:
> [...]
>
> Yes you are right, it only calculates XOR on writes with RAID5.
> But then I am puzzled what all these CPU cycles are used for.
> Also many cycles are used on mirrored raid types. Why?
> Maybe some is because of LVM? I have been puzzled for a long time why
> ordinary RAID without LVM needs to use so much CPU. Maybe a lot of data
> shuffling between buffers? Neil?
What is your evidence that RAID1 uses lots of CPU?
I would expect it to use very little, but I've been wrong before.
NeilBrown
Re: Performance question, RAID5
am 31.01.2011 22:41:13 von Keld Simonsen
On Tue, Feb 01, 2011 at 07:42:57AM +1100, NeilBrown wrote:
> On Mon, 31 Jan 2011 14:11:31 +0100 Keld Jørn Simonsen wrote:
> [...]
>
> What is your evidence that RAID1 uses lots of CPU?
Much of this is raid10, but it should be the same:
http://home.comcast.net/~jpiszcz/20080329-raid/
http://home.comcast.net/~jpiszcz/raid/20080528/raid-levels.html
It seems like cpu usage is rather proportionate to the IO done.
And the CPU usage does get up to about 40 % for reading, and
45 % for writing - this is most likely a significant delay factor.
For slower CPUs like the Atom CPU this may be an even more significant
delay factor.
For RAID5 the situation is even worse, as expected.
best regards
keld
Re: Performance question, RAID5
am 31.01.2011 22:43:02 von Roberto Spadim
nice, but raid1 is not very cpu consuming (a filesystem can use more
cpu than a raid implementation... a browser like firefox too)
i think the raid1 source code is well optimized for cpu and memory; maybe
you need a faster cpu rather than less cpu-consuming software, or maybe a
hardware raid could help you...
2011/1/31 Keld Jørn Simonsen :
> On Tue, Feb 01, 2011 at 07:42:57AM +1100, NeilBrown wrote:
> [...]
>
>> What is your evidence that RAID1 uses lots of CPU?
>
> Much of this is raid10, but it should be the same:
> http://home.comcast.net/~jpiszcz/20080329-raid/
> http://home.comcast.net/~jpiszcz/raid/20080528/raid-levels.html
>
> It seems like cpu usage is rather proportionate to the IO done.
> And the CPU usage does get up to about 40 % for reading, and
> 45 % for writing - this is most likely a significant delay factor.
> For slower CPUs like the Atom CPU this may be an even more significant
> delay factor.
>
> For RAID5 the situation is even worse, as expected.
>
> best regards
> keld
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
Re: Performance question, RAID5
am 01.02.2011 12:37:42 von John Robinson
On 30/01/2011 12:12, Mathias Burén wrote:
[...]
> Ah, good point. The sda is a 60GB SSD, I should definitely move that
> to the PCI-E card, as it doesn't do heavy IO (just small random r/w).
> Then i can have 4 RAID HDDs on the onboard ctrl, and 2 on the PCI-E
> shared with the SSD. If the SSD is idle then I should get ideal
> throughputs.
Hmm, if you have an SSD, you might look at using bcache to speed up
write access to your array. On the other hand, with only one SSD you
potentially lose redundancy - do SSDs crash and burn like hard drives do?
If the SSD is only being used as a boot/OS drive - so near idle in
normal use - I'd swap it for a cheap small laptop hard drive and find
somewhere else to put the SSD to better use.
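(For the curious, a bcache setup would look roughly like this - hypothetical partition name, and note that make-bcache has to format both devices, so it is not something to bolt onto an array that already holds data; registration and attachment then happen through sysfs:)
# make-bcache -C /dev/sda4    (format a spare SSD partition as the cache device)
# make-bcache -B /dev/md0     (format the array as the backing device - destroys existing data)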
Cheers,
John.
Re: Performance question, RAID5
am 01.02.2011 14:53:25 von Roberto Spadim
if money isn't a problem, buy only SSDs =) they have ECC, faster
read/write, less latency, and a bigger MTBF (they are better!) but
they break too; replace them every 5 years so you don't have problems
with loss of information
2011/2/1 John Robinson :
> On 30/01/2011 12:12, Mathias Burén wrote:
> [...]
>>
>> Ah, good point. The sda is a 60GB SSD, I should definitely move that
>> to the PCI-E card, as it doesn't do heavy IO (just small random r/w).
>> Then i can have 4 RAID HDDs on the onboard ctrl, and 2 on the PCI-E
>> shared with the SSD. If the SSD is idle then I should get ideal
>> throughputs.
>
> Hmm, if you have an SSD, you might look at using bcache to speed up write
> access to your array. On the other hand, with only one SSD you potentially
> lose redundancy - do SSDs crash and burn like hard drives do?
>
> If the SSD is only being used as a boot/OS drive - so near idle in normal
> use - I'd swap it for a cheap small laptop hard drive and find somewhere
> else to put the SSD to better use.
>
> Cheers,
>
> John.
>
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
Re: Performance question, RAID5
am 01.02.2011 15:02:45 von mathias.buren
On 1 February 2011 11:37, John Robinson wrote:
> On 30/01/2011 12:12, Mathias Burén wrote:
> [...]
>>
>> Ah, good point. The sda is a 60GB SSD, I should definitely move that
>> to the PCI-E card, as it doesn't do heavy IO (just small random r/w).
>> Then i can have 4 RAID HDDs on the onboard ctrl, and 2 on the PCI-E
>> shared with the SSD. If the SSD is idle then I should get ideal
>> throughputs.
>
> Hmm, if you have an SSD, you might look at using bcache to speed up write
> access to your array. On the other hand, with only one SSD you potentially
> lose redundancy - do SSDs crash and burn like hard drives do?
>
> If the SSD is only being used as a boot/OS drive - so near idle in normal
> use - I'd swap it for a cheap small laptop hard drive and find somewhere
> else to put the SSD to better use.
>
> Cheers,
>
> John.
>
>
Thanks for the input. My initial question was basically, "where's my
bottleneck". The SSD is hosting the OS + an XBMC database :-) so I
can't use it for anything else.
// Mathias
Re: Performance question, RAID5
am 01.02.2011 15:32:26 von Roberto Spadim
nice, with my ocz vertex2 tests i get this:
sata1 can get about 130MB/s (sata1 = 1.5gb/s)
sata2 can get about 270MB/s (sata2 = 3gb/s)
i don't remember exactly, but i think each 'x' on pci-express = 2.5gb/s, so 4x = 10gb/s
onboard sata controllers are pci-express or pci based (it's hard-linked
in the north bridge)
2011/2/1 Mathias Burén :
> On 1 February 2011 11:37, John Robinson wrote:
> [...]
>
> Thanks for the input. My initial question was basically, "where's my
> bottleneck". The SSD is hosting the OS + an XBMC database :-) so I
> can't use it for anything else.
>
> // Mathias
--
Roberto Spadim
Spadim Technology / SPAEmpresarial