RAID 1 using SSD and 2 HDD
on 19.07.2011 20:15:22 by Mike Power
Is it possible to implement a RAID 1 array using two equal-size HDDs and
one smaller and faster SSD? The idea being that the resulting RAID
would have the same size as the HDDs while picking up the speed benefits
of the SSD. You can do something similar today by just buying two
hybrid drives and putting them in an array. By purchasing a dedicated
SSD for that purpose you gain the ability to control the size of the
SSD portion. I was hoping the RAID array could use the SSD more as a
cache than as redundant storage.
Mike Power
Re: RAID 1 using SSD and 2 HDD
on 19.07.2011 20:27:26 by Roberto Spadim
No, this does not work very fast.
Use bcache or another similar SSD+hard disk cache solution (Facebook
has one; I don't remember the name).
Mixing SSD and HDD is a big problem, since write speed is that of the
slowest device (maybe the SSD, maybe the HDD), and read speed isn't
very good either: reads on an SSD can be non-sequential, while on an
HDD sequential reads are preferred. The read balance algorithm is
tuned to use the device with the nearest head position, so sometimes
the SSD isn't used, or is used less or more than expected.
Better speed can be had with a cache solution (SSD+HDD or
SSD+RAID1(HDD)).
Those were my test results; maybe someone has other results.
2011/7/19 Mike Power:
> Is it possible to implement a RAID 1 array using two equal-size HDDs
> and one smaller and faster SSD? [...] I was hoping the RAID array
> could use the SSD more as a cache than as redundant storage.
>
> Mike Power
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
Re: RAID 1 using SSD and 2 HDD
on 19.07.2011 20:32:35 by Roman Mamedov
On Tue, 19 Jul 2011 11:15:22 -0700
Mike Power wrote:
> Is it possible to implement a RAID 1 array using two equal-size HDDs and
> one smaller and faster SSD? The idea being that the resulting RAID
> would have the same size as the HDDs while picking up the speed benefits
> of the SSD.
See http://bcache.evilpiepirate.org/
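Rough shape of a bcache setup, going by the bcache-tools interface as it
later stabilized (a sketch only; /dev/sdb as the backing HDD and /dev/sdc
as the caching SSD are hypothetical names):

# Format the backing device and the cache device (hypothetical names):
make-bcache -B /dev/sdb
make-bcache -C /dev/sdc
# Register both with the kernel:
echo /dev/sdb > /sys/fs/bcache/register
echo /dev/sdc > /sys/fs/bcache/register
# Attach the cache set to the backing device using the cache set UUID
# (printed by bcache-super-show /dev/sdc):
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
# The cached device then appears as /dev/bcache0.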
--
With respect,
Roman
Re: RAID 1 using SSD and 2 HDD
on 19.07.2011 21:13:57 by Mike Power
Thanks for the link. That is the kind of thing I am looking for.
On 07/19/2011 11:32 AM, Roman Mamedov wrote:
> On Tue, 19 Jul 2011 11:15:22 -0700
> Mike Power wrote:
>
>> Is it possible to implement a RAID 1 array using two equal-size HDDs and
>> one smaller and faster SSD? The idea being that the resulting RAID
>> would have the same size as the HDDs while picking up the speed benefits
>> of the SSD.
> See http://bcache.evilpiepirate.org/
>
RE: RAID 1 using SSD and 2 HDD
on 20.07.2011 14:59:04 by brian.foster
> Thanks for the link. That is the kind of thing I am looking for.
>
> On 07/19/2011 11:32 AM, Roman Mamedov wrote:
> > [...]
> > See http://bcache.evilpiepirate.org/
Also, Roberto referred to the Facebook flashcache implementation. It is based on device-mapper and, as of the last time I tried bcache, probably a bit more production-worthy at the moment (though bcache looks intriguing long term, so I'd suggest trying both and drawing your own conclusions):
https://github.com/facebook/flashcache
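Setting one up is a one-liner once the module is loaded (a sketch; the
cachedev name and the /dev/sdc SSD and /dev/md0 backing-array paths are
hypothetical):

# Create a write-back flashcache volume on top of the backing device:
flashcache_create -p back cachedev /dev/sdc /dev/md0
# The cached volume appears as a device-mapper target; use it in place
# of the raw backing device:
mkfs -t ext4 /dev/mapper/cachedev
mount /dev/mapper/cachedev /mnt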
Brian
Re: RAID 1 using SSD and 2 HDD
on 28.07.2011 20:31:10 by Doug Ledford
On 07/20/2011 08:59 AM, brian.foster@emc.com wrote:
>> [...]
>>> See http://bcache.evilpiepirate.org/
>
> Also, Roberto referred to the Facebook flashcache implementation. It is based on device-mapper and, as of the last time I tried bcache, probably a bit more production-worthy at the moment (though bcache looks intriguing long term, so I'd suggest trying both and drawing your own conclusions):
>
> https://github.com/facebook/flashcache
Having not looked at those two, I can say that an md raid1 with two hard
drives and one SSD works *very* well. It's blazing fast. Here's how I
set mine up:
SSD: three partitions, one for boot, one for /, and one for ~/repos
(which is where all my git/cvs/etc. checkouts reside)
hard disks: four partitions, one for boot, one for /, one for /home, one
for ~/repos
Then I created four raid1 arrays like so:
mdadm -C /dev/md/boot -l1 -n3 -e1.0 --bitmap=internal --name=boot \
    /dev/sda1 --write-mostly --write-behind=128 /dev/sdb1 /dev/sdc1
mdadm -C /dev/md/root -l1 -n3 -e1.2 --bitmap=internal --name=root \
    /dev/sda2 --write-mostly --write-behind=1024 /dev/sdb2 /dev/sdc2
mdadm -C /dev/md/home -l1 -n2 -e1.2 --bitmap=internal --name=home \
    /dev/sdb3 /dev/sdc3
mdadm -C /dev/md/repos -l1 -n3 -e1.2 --bitmap=internal --name=repos \
    /dev/sda4 --write-mostly --write-behind=1024 /dev/sdb4 /dev/sdc4
It works for me with stellar performance. It treats the SSD as the only
device that matters on the three arrays it participates in, with the hard
drives there merely as a backing store for safety in case the SSD blows
chunks some day. Obviously, if you need some other aspect of your home
directory to have the SSD benefit, then modify to your tastes, but all my
scratch builds happen under ~/repos and the thing flies when compiling
stuff compared to how it used to be.
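To sanity-check a layout like this, the write-mostly flags are visible at
runtime (a quick sketch; names follow the arrays above):

# Members flagged write-mostly are marked with (W) in /proc/mdstat:
cat /proc/mdstat
# Per-member detail, including the writemostly flag:
mdadm --detail /dev/md/root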
Re: RAID 1 using SSD and 2 HDD
on 28.07.2011 20:32:56 by Doug Ledford
On 07/19/2011 02:27 PM, Roberto Spadim wrote:
> No, this does not work very fast.
> Use bcache or another similar SSD+hard disk cache solution. [...]
> The read balance algorithm is tuned to use the device with the nearest
> head position, so sometimes the SSD isn't used, or is used less or more
> than expected. [...]
> Those were my test results; maybe someone has other results.
You needed to use write-mostly; that would have drastically altered your
results. With write-mostly and write-behind enabled on the hard drives,
all reads go to the SSD instead of the hard drives, and writes complete
as soon as the SSD says it is done, with the writes to the hard drives
happening in the background.
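The flag can even be toggled on a running array through sysfs, so no
re-create is needed (a sketch; md0 and sdb2 are placeholders for the
array and the spinning-disk member):

# Mark a member write-mostly on a live array:
echo writemostly > /sys/block/md0/md/dev-sdb2/state
# And clear it again:
echo -writemostly > /sys/block/md0/md/dev-sdb2/state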
>
> 2011/7/19 Mike Power:
>> Is it possible to implement a RAID 1 array using two equal-size HDDs
>> and one smaller and faster SSD? [...]
Re: RAID 1 using SSD and 2 HDD
on 29.07.2011 01:53:44 by Xavier Brochard
On Thursday 28 July 2011 20:31:10, Doug Ledford wrote:
> On 07/20/2011 08:59 AM, brian.foster@emc.com wrote:
> [...]
>
> Having not looked at those two, I can say that an md raid1 with two hard
> drives and one SSD works *very* well. It's blazing fast. Here's how I
> set mine up:
>
> SSD: three partitions, one for boot, one for /, and one for ~/repos
> (which is where all my git/cvs/etc. checkouts reside)
> hard disks: four partitions, one for boot, one for /, one for /home, one
> for ~/repos
> [...]
One thing you didn't say is the respective sizes of the SSD and HD
partitions. How did you determine them?
Xavier
xavier@alternatif.org
Re: RAID 1 using SSD and 2 HDD
on 29.07.2011 11:32:02 by John Robinson
On 29/07/2011 00:53, Xavier Brochard wrote:
> On Thursday 28 July 2011 20:31:10, Doug Ledford wrote:
>> On 07/20/2011 08:59 AM, brian.foster@emc.com wrote:
[...]
>>>> On 07/19/2011 11:32 AM, Roman Mamedov wrote:
[...]
>>>>> See http://bcache.evilpiepirate.org/
[...]
>>> https://github.com/facebook/flashcache
>>
>> Having not looked at those two, I can say that an md raid1 with two hard
>> drives and one SSD works *very* well. It's blazing fast. [...]
>>
>> SSD: three partitions, one for boot, one for /, and one for ~/repos
>> (which is where all my git/cvs/etc. checkouts reside)
>> hard disks: four partitions, one for boot, one for /, one for /home, one
>> for ~/repos
>> [...]
>
> One thing you didn't say is the respective sizes of the SSD and HD
> partitions. How did you determine them?
Since he's running RAID-1, the partitions on the SSD and HDDs must be
the same size. Note that the rest of the space on the HDDs was given to
/home and was not mirrored on the SSD.
Cheers,
John.
Re: RAID 1 using SSD and 2 HDD
on 29.07.2011 15:30:38 by Doug Ledford
On 7/28/2011 3:46 PM, Roberto Spadim wrote:
> Hmm, make some benchmarks... About 4 months ago I tested this with
> Korn Andreas, and we did not get astronomical speed with write-behind
> or without it. Do you have some benchmarks to check?
No, not yet, but I'll make some up. The problem here is that the
perceived speed increase is only really visible in certain situations.
For example, if I'm building code, and repeatedly compiling, tweaking,
compiling, tweaking, etc., then the entire set of code ends up in page
cache and everything is done from memory, in which case the SSD makes no
performance difference whatsoever. Any benchmark that operates
entirely in cache will be a useless measure here.
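One way around the page cache problem is to benchmark with direct I/O (a
sketch, assuming fio is available; the test file path is a placeholder):

# Random cold-cache reads with O_DIRECT, bypassing the page cache:
fio --name=coldread --filename=/home/dledford/repos/fio.test --size=1g \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --runtime=30 --time_based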
The biggest area where the SSD makes a difference is in reading pages
into page cache. Aka, cold cache reads. You see this immediately on
bootup for example. My machine boots fast enough that it makes it to a
login prompt before the ethernet device is done negotiating link speed
and presents a link up to the networking layer (the fact that my
workstation is Fedora 15 and uses upstart helps with this too, but on
another machine without an SSD and with Fedora 15 I don't get to login
prompt before ethernet link is up). It also effects things like the
startup speed of both Firefox and Thunderbird. This is where the cold
cache performance of an SSD helps.
However, your question did get me started thinking and raised a few
questions of my own (hence why I Cc:ed Neil).
In order to do writemostly, you *must* use a write bitmap. If you use
an internal bitmap, it exists on all devices. Normally, bitmap sets are
done synchronously. My question is: when one device is an SSD, do we
only wait on the SSD bitmap update before starting the SSD writes, or do
we wait on all the bitmap updates to complete before starting any of the
writes? If the later, could that be changed? And as a general question
about bitmap files instead of internal bitmaps, is it even possible to
use an external bitmap file on the root filesystem given that no other
filesystem is mounted in order to read a bitmap file prior to the root
filesystem going live? It would be nice to use an external bitmap file
on the SSD itself and skip the internal bitmap I think.
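For an array that isn't the root filesystem, moving the bitmap out to a
file already works today (a sketch; the file path is a placeholder and
must live on a filesystem that is not on the array itself):

# Drop the internal bitmap, then attach an external bitmap file:
mdadm --grow /dev/md/home --bitmap=none
mdadm --grow /dev/md/home --bitmap=/ssd/bitmaps/md-home.bitmap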
As a separate issue, I think a person could probably tweak a few things
for an SSD in the general block layer too. I haven't played with these
things yet, but I plan to. Things like changing the elevator to the
noop elevator.
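Those knobs are all in sysfs (a quick sketch; sdc stands in for the SSD):

# Check the current I/O scheduler for the (hypothetical) SSD:
cat /sys/block/sdc/queue/scheduler
# Switch to noop; seek-order sorting buys nothing on flash:
echo noop > /sys/block/sdc/queue/scheduler
# Tell the block layer the device is non-rotational:
echo 0 > /sys/block/sdc/queue/rotational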
Now, as for the differences between using an SSD like I am versus the
two cache type uses that were brought up.
The cache type usage has the benefit that it covers all the data on the
drives even if the SSD is smaller than the drives. Like any cache
though, it can only hold the most commonly/frequently used items. If
your list of commonly used stuff is too large for the SSD, it will start
to suffer cache misses and lose its benefit. On writes, though, it can
be very fast because it can simply buffer the write and then return to
the OS as though the write is complete. However, unless the caching
implementation waits for at least one drive to acknowledge and complete
the write (and especially if it accepts the write but only queues it up
for write to the drives and then waits some period of time before
flushing the write), then this represents a single point of failure that
could cause (possibly huge amounts of) data loss.
The setup I have essentially splits the data on my filesystem according
to what I want cached. I want applications so I get my performance
boost on startup. I want my source code repos so I can compile faster
and do things like git checkouts faster. But I don't need any mp3s or
video files or rarely accessed documents on the SSD, so having a
directory in my home directory to put all my stuff I want accessed fast
and the rest of my home directory just on hard drive works perfectly
well. In my usage, if the SSD fails, then I don't have to worry about
any data loss and the machine keeps chugging along.
Anyway, about the benchmarks, I'll see what I can do over the weekend.
Today, real work beckons ;-)
--
Doug Ledford
GPG KeyID: CFBFF194
http://people.redhat.com/dledford
Infiniband specific RPMs available at
http://people.redhat.com/dledford/Infiniband
Re: RAID 1 using SSD and 2 HDD
on 29.07.2011 15:38:37 by Doug Ledford
On 7/29/2011 5:32 AM, John Robinson wrote:
>> One thing you didn't say is the respective sizes of the SSD and HD
>> partitions.
>> How did you determine them?
>
> Since he's running RAID-1, the partitions on the SSD and HDDs must be
> the same size. Note that the rest of the space on the HDDs was given to
> /home and was not mirrored on the SSD.
Exactly. I had a 128GB SSD, so I split it up more or less like so:
3GB to /boot (I run lots of test kernels)
20GB to / (more than enough in almost all cases)
90+GB to /home/dledford/repos
The hard drives were 500GB, so once I took out the 128 or so GB that was
mirrored to the SSD, that left another 375GB or so for the /home partition.
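For reference, carving the SSD up that way looks roughly like this (a
sketch with parted; /dev/sda as the SSD is a placeholder, and the sizes
and partition numbering are approximate):

# Hypothetical partitioning of the 128GB SSD to match the split above:
parted --script /dev/sda \
    mklabel msdos \
    mkpart primary 1MiB 3GiB \
    mkpart primary 3GiB 23GiB \
    mkpart primary 23GiB 100%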
--
Doug Ledford
GPG KeyID: CFBFF194
http://people.redhat.com/dledford
Infiniband specific RPMs available at
http://people.redhat.com/dledford/Infiniband
Re: RAID 1 using SSD and 2 HDD
on 29.07.2011 16:55:04 by David Brown
On 29/07/11 15:30, Doug Ledford wrote:
> [...]
>
One thing has occurred to me while reading this thread - it seems to be
an assumption here that SSDs are faster than HDs. In one area -
access times - SSDs are very much faster. But when transferring bulk
data, they are not necessarily faster. Certainly a pair of good hard
disks in RAID10,far will stream reads and writes with a similar
throughput to many SSDs. An ideal situation is therefore that small
reads will come from the SSD, but that bulk reads could come from any
disk that is currently idle. Enabling "write-behind" on the hard disks
would still be a big gain on the write latency.
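That difference is easy to measure with direct-I/O streaming reads (a
sketch; the device names are placeholders):

# Compare raw streaming read throughput, bypassing the page cache
# (hypothetical devices: sda = SSD, sdb = HDD):
dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct
dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct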
Re: RAID 1 using SSD and 2 HDD
on 29.07.2011 17:30:33 by Doug Ledford
On 07/29/2011 10:55 AM, David Brown wrote:
> One thing has occurred to me while reading this thread - it seems to be
> an assumption here that SSDs are faster than HDs. In one area - access
> times - SSDs are very much faster. But when transferring bulk data,
> they are not necessarily faster.
Not necessarily, no. But, if you do like I did and pick your SSD
carefully, they are. I specifically chose the one I did because it was
SATA-III with a 6Gb/s link speed and it was rated for 400+MByte/s reads
and 210MByte/s writes. Here's the link:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820233154
There are even faster ones out there now.
But, I was worried about the drive wearing out under the frequent
checkouts, builds, etc., hence the reason I have two hard drives
backing it up. When it does wear out, I'll put a new one in, add it to
the array, wait for resync, all done.
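The swap itself is just the usual md member replacement (a sketch,
reusing the partition names from my earlier mail):

# Retire the worn SSD member from one array (repeat per array):
mdadm /dev/md/repos --fail /dev/sda4 --remove /dev/sda4
# ...physically swap the SSD and recreate its partitions, then:
mdadm /dev/md/repos --add /dev/sda4
# Watch the rebuild:
cat /proc/mdstat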
> Certainly a pair of good hard disks in
> RAID10,far will stream reads and writes with a similar throughput to
> many SSDs. An ideal situation is therefore that small reads will come
> from the SSD, but that bulk reads could come from any disk that is
> currently idle. Enabling "write-behind" on the hard disks would still be
> a big gain on the write latency.
Re: RAID 1 using SSD and 2 HDD
on 29.07.2011 17:51:29 by Roberto Spadim
About SSD/HDD:
SSD is better for unaligned reads/writes (0.1ms access time).
The problem is that we don't have a 'math model' to simulate and select
the best disk to send a read to. It's a bit difficult, and maybe we
won't have one, since we have many kinds of devices (nbd, SSD, HD,
SSD+HD, SD cards, etc.).
I think the only good improvement is to allow better unaligned reads to
the SSD, and aligned reads to the HDD.
In benchmarks I ran a while ago, a better tuned read balance algorithm
only cut the benchmark time by about 1%.
Maybe flashcache and bcache can give more 'speed'; I haven't tested them.
2011/7/29 Doug Ledford:
> On 07/29/2011 10:55 AM, David Brown wrote:
>> [...] But when transferring bulk data, they are not necessarily faster.
>
> Not necessarily, no. But, if you do like I did and pick your SSD
> carefully, they are. [...]
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
Failure Rates Using SSD
on 29.07.2011 18:20:49 by Maurice
Tom's Hardware has an article discussing SSD longevity.
The bottom line is that failure rates may be higher than most presume.
The conclusions on the last 2 pages are fascinating to read.
http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html
--
Cheers,
Maurice Hilarius
eMail: mhilarius@gmail.com
Re: RAID 1 using SSD and 2 HDD
on 29.07.2011 19:50:22 by Xavier Brochard
On Friday 29 July 2011 11:32:02, John Robinson wrote:
> On 29/07/2011 00:53, Xavier Brochard wrote:
> > One thing you didn't say is the respective sizes of the SSD and HD
> > partitions. How did you determine them?
>
> Since he's running RAID-1, the partitions on the SSD and HDDs must be
> the same size. Note that the rest of the space on the HDDs was given to
> /home and was not mirrored on the SSD.
Oh... I thought it was more like Mike's request:
"a RAID 1 array using two equal-size HDDs and one smaller and faster SSD"
Xavier
xavier@alternatif.org
Re: RAID 1 using SSD and 2 HDD
on 29.07.2011 20:31:08 by Doug Ledford
On 07/29/2011 01:50 PM, Xavier Brochard wrote:
> [...]
> Oh... I thought it was more like Mike's request:
> "a RAID 1 array using two equal-size HDDs and one smaller and faster SSD"
>
> Xavier
> xavier@alternatif.org
It is. The SSD is 128GB and the two HDs are 500GB. The /home partition
does not exist on the SSD and it exactly equals the amount of space left
over on the two HDs after they have had matching partitions created to
duplicate the partitions on the SSD.