


#1: SSDs vs. md/sync_speed_(min|max)

Posted on 2011-07-18 20:42:04 by Lutz Vieweg


Just wanted to report some unexpected effects of the md/sync_speed_(min|max)
parameters when using MD RAID1 on SSDs:

While syncing from one SSD to another in an MD RAID1 (which was in use at the
time, but still had plenty of spare I/O bandwidth), I wanted to increase the
speed of that process. But md/sync_speed_max was already much higher
(200 MB/s) than the actual sync speed that MD reported (~30 MB/s).
At the same time, the utilization of the source SSD was only at ~5%.

So I wondered what was keeping MD from just reading faster.

I can only assume that MD is tuned for magnetic disks, in that it assumes any
additional "reads" to some location on the source device will be pretty expensive,
so it is cautious about causing them.
But for SSDs there is no extra cost for "seeking", so it would be perfectly fine
if the utilization of the source device were e.g. 95% while synchronizing.

And with SSDs, reading tends to incur less "utilization" (as reported
by e.g. "iostat -dx 3") than writing - even if sequential reads/writes
appear to reach similar speeds.

I could help myself by setting md/sync_speed_min to 200 MB/s. That accelerated
the sync a lot, while the source device still remained usable by others (~30%
utilization).
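For reference, the workaround above can be applied through sysfs. This is a
sketch assuming the array is /dev/md0 (substitute your own md device); note
that the sync_speed_min and sync_speed_max files take values in KB/s:

```shell
# Raise the guaranteed minimum resync speed to 200 MB/s (value in KB/s).
# Requires root; assumes the array in question is /dev/md0.
echo 200000 > /sys/block/md0/md/sync_speed_min

# Verify the setting and watch the actual resync rate:
cat /sys/block/md0/md/sync_speed_min
cat /proc/mdstat
```

Setting sync_speed_min this high effectively tells MD not to throttle the
resync on behalf of other I/O, which is exactly what you want when the source
device has idle bandwidth to spare.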

If my assumption about the tuning for the seek penalty of magnetic disks is
right, I would like to propose implementing some kind of "SSD" flag that tunes
the algorithm accordingly.


Lutz Vieweg
