Rotating RAID 1

Rotating RAID 1

on 15.08.2011 21:56:40 by jeromepoulin

Good evening,

I'm currently working on a project in which I use md-raid RAID1 with a
bitmap to "clone" my data from one disk to another, and I would like to
know if this could cause corruption:

The system has 2 SATA ports which are hotplug capable.
I created 2 partitions, 1 system (2GB), 1 data (1TB+).
I created two RAID1s using:
mdadm --create /dev/md0 --raid-devices=2 --bitmap=internal
--bitmap-chunk=4096 --metadata=1.0 /dev/sd[ab]1
mdadm --create /dev/md1 --raid-devices=2 --bitmap=internal
--bitmap-chunk=65536 --metadata=1.0 /dev/sd[ab]2
Forced sync_force_parallel on the system array to be sure it rebuilds first.
Formatted system ext3 and data ext4.
Both mounted using data=writeback.
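
For reference, the rest of the setup was roughly this (untested as
written here; mount points are placeholders):

# let the small system array resync in parallel so it finishes first
echo 1 > /sys/block/md0/md/sync_force_parallel
mkfs.ext3 /dev/md0
mkfs.ext4 /dev/md1
mount -o data=writeback /dev/md0 /mnt/system
mount -o data=writeback /dev/md1 /mnt/data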

This system doesn't contain critical data, but it contains backups on
the data partition. Once the data is in sync, I removed a disk and let
udev fail it and remove it from the array. This is Arch Linux, and udev
is set to assemble the array using mdadm's incremental mode; I added
--run to make sure the array starts even when a disk is missing. As of
now, everything works as expected.
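
The udev side boils down to a rule like this (a simplified sketch; the
actual Arch rule file differs):

# assemble raid members incrementally, starting the array even if degraded
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental --run $env{DEVNAME}"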

Now, here is what differs from a standard RAID1: I removed sdb and
replaced it with a brand-new disk, copied the partition template from
the other disk, and added the new disk using mdadm -a on both arrays;
it synced and works. Swapping the other disk back only rebuilds
according to the bitmap, though sometimes it appears to do a full
rebuild, which is fine. Once, however, after a day of modifications
(at least 100 GB) and weeks after setting up this RAID, it took
seconds to rebuild, and days later it appeared to have encountered
corruption: the kernel complained about bad extents, and fsck found
errors in one of the files I know had been modified that day.

So the question is: am I right to use md-raid for this kind of thing?
rsync is too CPU-heavy for what I need, and I need to stay compatible
with Windows, hence metadata 1.0.

Re: Rotating RAID 1

on 15.08.2011 22:19:41 by Phil Turmel

Hi Jérôme,

On 08/15/2011 03:56 PM, Jérôme Poulin wrote:
> Then what is different about a standard RAID1, I removed sdb and
> replaced it with a brand new disk, copied the partition template from
> the other one and added the new disk using mdadm -a on both arrays, i=
t
> synced and works, then swapping the other disk back only rebuilds
> according to the bitmap, however sometimes it appears to make a full
> rebuild which is alright. However once, after a day of modifications
> and weeks after setting-up this RAID, at least 100 GB, it took second=
s
> to rebuild and days later it appeared to have encountered corruption,
> the kernel complained about bad extents and fsck found errors in one
> of the file I know it had modified that day.

This is a problem. MD only knows about two disks. You have three. When
two disks are in place and sync'ed, the bitmaps will essentially stay
cleared.

When you swap to the other disk, its bitmap is also clear, for the same
reason. I'm sure mdadm notices the different event counts, but the
clear bitmap would leave mdadm little or nothing to do to resync, as
far as it knows. But lots of writes have happened in the meantime, and
they won't get copied to the freshly inserted drive. MD will read from
both disks in parallel when there are parallel workloads, so one
workload would get current data and the other would get stale data.

If you perform a "check" pass after swapping and resyncing, I bet it
finds many mismatches. It definitely can't work as described.
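
Something like this would show it (using md0 as an example):

echo check > /sys/block/md0/md/sync_action
# wait for the check to finish (watch /proc/mdstat), then:
cat /sys/block/md0/md/mismatch_cnt   # non-zero means the copies disagree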

I'm not sure, but this might work if you could temporarily set it up as
a triple mirror, so each disk has a unique slot/role.

It would also work if you didn't use a bitmap, as a re-inserted drive
would simply be overwritten completely.
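
If you want to try either route, roughly (untested, md0 as an example):

# grow to a triple mirror so each disk keeps a unique slot/role
mdadm --grow /dev/md0 --raid-devices=3
# or drop the bitmap so a re-inserted disk gets a full rebuild
mdadm --grow /dev/md0 --bitmap=none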

> So the question is: am I right to use md-raid for this kind of thing?
> rsync is too CPU-heavy for what I need, and I need to stay compatible
> with Windows, hence metadata 1.0.

How do you stay compatible with Windows? If you let Windows write to
any of these disks, you've corrupted that disk with respect to its
peers. Danger, Will Robinson!

HTH,

Phil

Re: Rotating RAID 1

on 15.08.2011 22:21:09 by Pavel Hofman

On 15.8.2011 21:56, Jérôme Poulin wrote:
> Good evening,
>
> I'm currently working on a project in which I use md-raid RAID1 with a
> bitmap to "clone" my data from one disk to another, and I would like
> to know if this could cause corruption:
>
> The system has 2 SATA ports which are hotplug capable.
> I created 2 partitions, 1 system (2GB), 1 data (1TB+).
> I created two RAID1s using:
> mdadm --create /dev/md0 --raid-devices=2 --bitmap=internal
> --bitmap-chunk=4096 --metadata=1.0 /dev/sd[ab]1
> mdadm --create /dev/md1 --raid-devices=2 --bitmap=internal
> --bitmap-chunk=65536 --metadata=1.0 /dev/sd[ab]2
> Forced sync_force_parallel on the system array to be sure it rebuilds
> first.
> Formatted system ext3 and data ext4.
> Both mounted using data=writeback.
>
> This system doesn't contain critical data, but it contains backups on
> the data partition. Once the data is in sync, I removed a disk and let
> udev fail it and remove it from the array. This is Arch Linux, and
> udev is set to assemble the array using mdadm's incremental mode; I
> added --run to make sure the array starts even when a disk is missing.
> As of now, everything works as expected.
>
> Now, here is what differs from a standard RAID1: I removed sdb and
> replaced it with a brand-new disk, copied the partition template from
> the other disk, and added the new disk using mdadm -a on both arrays;
> it synced and works. Swapping the other disk back only rebuilds
> according to the bitmap, though sometimes it appears to do a full
> rebuild, which is fine. Once, however, after a day of modifications
> (at least 100 GB) and weeks after setting up this RAID, it took
> seconds to rebuild, and days later it appeared to have encountered
> corruption: the kernel complained about bad extents, and fsck found
> errors in one of the files I know had been modified that day.

Does your scenario involve using two "external" drives, being swapped
each time? I am using such a setup, but in order to gain the bitmap
performance benefits, I have to run two mirrored RAID1s, i.e. two
bitmaps, each for its corresponding external disk. This setup has been
working OK for a few years now.

Best regards,

Pavel.

Re: Rotating RAID 1

on 15.08.2011 22:23:33 by jeromepoulin

On Mon, Aug 15, 2011 at 4:19 PM, Phil Turmel wrote:
> It would also work if you didn't use a bitmap, as a re-inserted drive
> would simply be overwritten completely.

After reading this, I'd rather wipe the bitmap than live a horror story
trying to restore that backup. I'll give it a try.

>
> How do you stay compatible with Windows? If you let Windows write to
> any of these disks, you've corrupted that disk with respect to its
> peers. Danger, Will Robinson!

I am using an ext4 driver (on Windows) in read-only mode.

Re: Rotating RAID 1

on 15.08.2011 22:25:56 by jeromepoulin

On Mon, Aug 15, 2011 at 4:21 PM, Pavel Hofman wrote:
> Does your scenario involve using two "external" drives, being swapped
> each time?

Yes, exactly: 3 or more drives; one stays in place, and the others get
rotated off-site.

> I am using such a setup, but in order to gain the bitmap performance
> benefits, I have to run two mirrored RAID1s, i.e. two bitmaps, each
> for its corresponding external disk. This setup has been working OK
> for a few years now.

Did you script something that stops the RAID and re-assembles it? The
RAID must stay mounted in my case, as there is live data (incremental
backups, so even if the last file is incomplete it is not a problem).

Re: Rotating RAID 1

on 15.08.2011 22:42:06 by Pavel Hofman

On 15.8.2011 22:25, Jérôme Poulin wrote:
> On Mon, Aug 15, 2011 at 4:21 PM, Pavel Hofman wrote:
>> Does your scenario involve using two "external" drives, being swapped
>> each time?
>
> Yes, exactly: 3 or more drives; one stays in place, and the others get
> rotated off-site.
>
>> I am using such a setup, but in order to gain the bitmap performance
>> benefits, I have to run two mirrored RAID1s, i.e. two bitmaps, each
>> for its corresponding external disk. This setup has been working OK
>> for a few years now.
>
> Did you script something that stops the RAID and re-assembles it? The
> RAID must stay mounted in my case, as there is live data (incremental
> backups, so even if the last file is incomplete it is not a problem).

I am working on a wiki description of our backup solution. The
priorities got re-organized recently; looks like I should finish it
soon :-)

Yes, I have a script automatically re-assembling the array
corresponding to the added drive and starting synchronization. There is
another script checking synchronization status, run periodically from
cron. When the arrays are synced, it waits until the currently running
backup job finishes, shuts down the backup software (backuppc),
unmounts the filesystem to flush, removes the external drives from the
array (we run several external drives in raid0), does a few basic
checks on the external copy (mounting read-only, reading a directory)
and puts the external drives to sleep (hdparm -Y) for storing them
outside of company premises.
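
The core of it is not much more than this sketch (device names, mount
points and the init script are specific to our box, and this is
simplified to a single external drive):

#!/bin/sh
# wait until no array is still resyncing/recovering
while grep -Eq 'resync|recovery' /proc/mdstat; do sleep 60; done
/etc/init.d/backuppc stop                            # quiesce the backup software
umount /mnt/backup                                   # flush the filesystem
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1   # detach the external drive
mount -o ro /dev/sdc1 /mnt/check                     # basic sanity check of the copy
ls /mnt/check && umount /mnt/check
hdparm -Y /dev/sdc                                   # spin it down for transport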

Give me a few days, I will finish the wiki page and send you a link.

Pavel.

Re: Rotating RAID 1

on 16.08.2011 00:42:51 by NeilBrown

On Mon, 15 Aug 2011 22:42:06 +0200 Pavel Hofman wrote:

> On 15.8.2011 22:25, Jérôme Poulin wrote:
> > On Mon, Aug 15, 2011 at 4:21 PM, Pavel Hofman wrote:
> >> Does your scenario involve using two "external" drives, being
> >> swapped each time?
> >
> > Yes, exactly: 3 or more drives; one stays in place, and the others
> > get rotated off-site.
> >
> >> I am using such a setup, but in order to gain the bitmap
> >> performance benefits, I have to run two mirrored RAID1s, i.e. two
> >> bitmaps, each for its corresponding external disk. This setup has
> >> been working OK for a few years now.
> >
> > Did you script something that stops the RAID and re-assembles it?
> > The RAID must stay mounted in my case, as there is live data
> > (incremental backups, so even if the last file is incomplete it is
> > not a problem).
>
> I am working on a wiki description of our backup solution. The
> priorities got re-organized recently; looks like I should finish it
> soon :-)
>
> Yes, I have a script automatically re-assembling the array
> corresponding to the added drive and starting synchronization. There
> is another script checking synchronization status, run periodically
> from cron. When the arrays are synced, it waits until the currently
> running backup job finishes, shuts down the backup software
> (backuppc), unmounts the filesystem to flush, removes the external
> drives from the array (we run several external drives in raid0), does
> a few basic checks on the external copy (mounting read-only, reading
> a directory) and puts the external drives to sleep (hdparm -Y) for
> storing them outside of company premises.
>
> Give me a few days, I will finish the wiki page and send you a link.

I'm not sure from your description whether the following describes
exactly what you are doing or not, but this is how I would do it.
As you say, you need two bitmaps.

So if there are 3 drives A, X, Y, where A is permanent and X and Y are
rotated off-site, then I create two RAID1s like this:


mdadm -C /dev/md0 -l1 -n2 --bitmap=internal /dev/A /dev/X
mdadm -C /dev/md1 -l1 -n2 --bitmap=internal /dev/md0 /dev/Y

mkfs /dev/md1; mount /dev/md1 ...


Then you can remove either or both of X and Y, and when each is
re-added it will recover just the blocks that it needs: X from the
bitmap of md0, Y from the bitmap of md1.
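
A rotation would then look roughly like this (untested sketch):

mdadm /dev/md1 --fail /dev/Y --remove /dev/Y   # take Y off-site
# ... writes continue; md1's bitmap records what Y misses ...
mdadm /dev/md1 --re-add /dev/Y                 # later: catch up from the bitmap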

NeilBrown

Re: Rotating RAID 1

on 16.08.2011 01:32:04 by jeromepoulin

On Mon, Aug 15, 2011 at 6:42 PM, NeilBrown wrote:
> So if there are 3 drives A, X, Y where A is permanent and X and Y are rotated
> off-site, then I create two RAID1s like this:
>
>
> mdadm -C /dev/md0 -l1 -n2 --bitmap=internal /dev/A /dev/X
> mdadm -C /dev/md1 -l1 -n2 --bitmap=internal /dev/md0 /dev/Y

That seems nice for 2 disks, but adding another one later would be a
mess. Is there any way to play with slot numbers manually to make it
appear as an always-degraded RAID? I can't plug all the disks at once
because of the maximum of 2 ports.

Re: Rotating RAID 1

on 16.08.2011 01:55:17 by NeilBrown

On Mon, 15 Aug 2011 19:32:04 -0400 Jérôme Poulin wrote:

> On Mon, Aug 15, 2011 at 6:42 PM, NeilBrown wrote:
> > So if there are 3 drives A, X, Y, where A is permanent and X and Y
> > are rotated off-site, then I create two RAID1s like this:
> >
> >
> > mdadm -C /dev/md0 -l1 -n2 --bitmap=internal /dev/A /dev/X
> > mdadm -C /dev/md1 -l1 -n2 --bitmap=internal /dev/md0 /dev/Y
>
> That seems nice for 2 disks, but adding another one later would be a
> mess. Is there any way to play with slot numbers manually to make it
> appear as an always-degraded RAID? I can't plug all the disks at once
> because of the maximum of 2 ports.

Yes, adding another one later would be difficult. But if you know
up-front that you will want three off-site devices, it is easy.

You could

mdadm -C /dev/md0 -l1 -n2 -b internal /dev/A missing
mdadm -C /dev/md1 -l1 -n2 -b internal /dev/md0 missing
mdadm -C /dev/md2 -l1 -n2 -b internal /dev/md1 missing
mdadm -C /dev/md3 -l1 -n2 -b internal /dev/md2 missing

mkfs /dev/md3 ; mount ..

So you now have 4 "missing" devices. Each time you plug in a device
that hasn't been in an array before, explicitly add it to the array
that you want it to be a part of and let it recover.
When you plug in a device that was previously plugged in, just "mdadm
-I /dev/XX" and it will automatically be added and recover based on
the bitmap.
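
Concretely (device name just for illustration):

mdadm /dev/md2 --add /dev/sdb1   # first time this disk joins md2
mdadm -I /dev/sdb1               # any later re-insertion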

You can have as many or as few of the transient drives plugged in at
any time as you like.

There is a cost here of course. Every write potentially needs to update
every bitmap, so the more bitmaps, the more overhead in updating them.
So don't create more than you need.

Also, it doesn't have to be a linear stack. It could be a binary tree,
though that might take a little more care to construct. Then when an
adjacent pair of leaves are both off-site, their bitmap would not need
updating.
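
For example, with a permanent drive A and four transient drives W, X,
Y, Z, one possible shape would be (an untested sketch; the re-add
sequencing would need care):

mdadm -C /dev/md0 -l1 -n2 -b internal /dev/W /dev/X     # leaf pair 1
mdadm -C /dev/md1 -l1 -n2 -b internal /dev/Y /dev/Z     # leaf pair 2
mdadm -C /dev/md2 -l1 -n2 -b internal /dev/md0 /dev/md1
mdadm -C /dev/md3 -l1 -n2 -b internal /dev/A /dev/md2

mkfs /dev/md3 ; mount ..

# while W and X are both off-site, md0 is absent and its bitmap takes
# no updates; md2's bitmap tracks what the whole pair missed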

NeilBrown


Re: Rotating RAID 1

on 16.08.2011 06:36:18 by Maurice

On 8/15/2011 4:42 PM, NeilBrown wrote:
> ... I'm not sure from your description whether the following
> describes exactly what you are doing or not, but this is how I would
> do it.
> As you say, you need two bitmaps.
> So if there are 3 drives A, X, Y, where A is permanent and X and Y
> are rotated off-site, then I create two RAID1s like this:
> mdadm -C /dev/md0 -l1 -n2 --bitmap=internal /dev/A /dev/X
> mdadm -C /dev/md1 -l1 -n2 --bitmap=internal /dev/md0 /dev/Y
>
> mkfs /dev/md1; mount /dev/md1 ...
>
> Then you can remove either or both of X and Y, and when each is
> re-added it will recover just the blocks that it needs:
> X from the bitmap of md0, Y from the bitmap of md1.
>
> NeilBrown


How elegantly described.

After so many instances of being told "You should not use RAID as a
backup device like that!", it is pleasant to hear you detail the
"right way" to do this.

Thank you very much for that, Neil.


--
Cheers,
Maurice Hilarius
eMail: mhilarius@gmail.com

Re: Rotating RAID 1

on 16.08.2011 08:34:46 by Pavel Hofman

On 16.8.2011 01:55, NeilBrown wrote:
>
> Also, it doesn't have to be a linear stack. It could be a binary tree
> though that might take a little more care to construct.

Since our backup server, being a critical resource, needs redundancy
itself, we run two degraded RAID1s in parallel, using two internal
drives. Each of the two alternating external drives plugs into its
corresponding bitmap-enabled RAID1.

Pavel.

Re: Rotating RAID 1

on 23.08.2011 05:45:53 by jeromepoulin

On Mon, Aug 15, 2011 at 7:55 PM, NeilBrown wrote:
> Yes, adding another one later would be difficult. But if you know
> up-front that you will want three off-site devices, it is easy.
>
> You could
>
> mdadm -C /dev/md0 -l1 -n2 -b internal /dev/A missing
> mdadm -C /dev/md1 -l1 -n2 -b internal /dev/md0 missing
> mdadm -C /dev/md2 -l1 -n2 -b internal /dev/md1 missing
> mdadm -C /dev/md3 -l1 -n2 -b internal /dev/md2 missing
>
> mkfs /dev/md3 ; mount ..
>
> So you now have 4 "missing" devices.

Alright, so I tried that on my project. It being a low-end device,
this resulted in about a 30-40% performance loss with 8 MDs (planning
in advance). I tried disabling all the bitmaps to see if that helps,
and got only a minimal performance gain. Is there anything I should
tune in this case?

Re: Rotating RAID 1

on 23.08.2011 05:58:12 by NeilBrown

On Mon, 22 Aug 2011 23:45:53 -0400 Jérôme Poulin wrote:

> On Mon, Aug 15, 2011 at 7:55 PM, NeilBrown wrote:
> > Yes, adding another one later would be difficult. But if you know
> > up-front that you will want three off-site devices, it is easy.
> >
> > You could
> >
> > mdadm -C /dev/md0 -l1 -n2 -b internal /dev/A missing
> > mdadm -C /dev/md1 -l1 -n2 -b internal /dev/md0 missing
> > mdadm -C /dev/md2 -l1 -n2 -b internal /dev/md1 missing
> > mdadm -C /dev/md3 -l1 -n2 -b internal /dev/md2 missing
> >
> > mkfs /dev/md3 ; mount ..
> >
> > So you now have 4 "missing" devices.
>
> Alright, so I tried that on my project. It being a low-end device,
> this resulted in about a 30-40% performance loss with 8 MDs (planning
> in advance). I tried disabling all the bitmaps to see if that helps,
> and got only a minimal performance gain. Is there anything I should
> tune in this case?

More concrete details would help...

So you have 8 MD RAID1s, each with one missing device, and the other
device is the next RAID1 down in the stack, except that last RAID1,
where the one device is a real device.

And in some unspecified test the RAID1 at the top of the stack gives
2/3 the performance of the plain device? Is this the same when all
bitmaps are removed?

Certainly seems strange.

Can you give details of the test, numbers, etc.?

NeilBrown

Re: Rotating RAID 1

on 23.08.2011 06:05:47 by jeromepoulin

On Mon, Aug 22, 2011 at 11:58 PM, NeilBrown wrote:
> More concrete details would help...

Sorry, you're right; I thought it could have been something quick.
I have details for the first test I made, with 15 RAIDs.

>
> So you have 8 MD RAID1s, each with one missing device, and the other
> device is the next RAID1 down in the stack, except that last RAID1,
> where the one device is a real device.

Exactly, only 1 real device at the moment.

>
> And in some unspecified test the RAID1 at the top of the stack gives
> 2/3 the performance of the plain device? Is this the same when all
> bitmaps are removed?
>
> Certainly seems strange.
>
> Can you give details of the test, numbers, etc.?

So the test is a backup (Veeam, specifically) using Samba 3.6.0 with
the brand-new SMB2 protocol; bitmaps are removed.
The backup took 45 minutes instead of 14 to 22 minutes.

Here is a sample of iostat showing the average request size (avgrq-sz)
increasing with each RAID device in the stack:
Device:  rrqm/s  wrqm/s   r/s    w/s   rkB/s    wkB/s  avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sdb        0.00   35.67  0.00  27.00    0.00  5579.00    413.26     2.01  74.69    0.00   74.69  34.32  92.67
md64       0.00    0.00  0.00  61.33    0.00  5577.00    181.86     0.00   0.00    0.00    0.00   0.00   0.00
md65       0.00    0.00  0.00  60.00    0.00  5574.67    185.82     0.00   0.00    0.00    0.00   0.00   0.00
md66       0.00    0.00  0.00  58.67    0.00  5572.33    189.97     0.00   0.00    0.00    0.00   0.00   0.00
md67       0.00    0.00  0.00  58.67    0.00  5572.33    189.97     0.00   0.00    0.00    0.00   0.00   0.00
md68       0.00    0.00  0.00  58.67    0.00  5572.33    189.97     0.00   0.00    0.00    0.00   0.00   0.00
md69       0.00    0.00  0.00  58.67    0.00  5572.33    189.97     0.00   0.00    0.00    0.00   0.00   0.00
md70       0.00    0.00  0.00  58.33    0.00  5572.00    191.04     0.00   0.00    0.00    0.00   0.00   0.00
md71       0.00    0.00  0.00  57.00    0.00  5569.67    195.43     0.00   0.00    0.00    0.00   0.00   0.00
md72       0.00    0.00  0.00  55.67    0.00  5567.33    200.02     0.00   0.00    0.00    0.00   0.00   0.00
md73       0.00    0.00  0.00  54.33    0.00  5565.00    204.85     0.00   0.00    0.00    0.00   0.00   0.00
md74       0.00    0.00  0.00  53.00    0.00  5562.67    209.91     0.00   0.00    0.00    0.00   0.00   0.00
md75       0.00    0.00  0.00  51.67    0.00  5560.33    215.24     0.00   0.00    0.00    0.00   0.00   0.00
md76       0.00    0.00  0.00  50.33    0.00  5558.00    220.85     0.00   0.00    0.00    0.00   0.00   0.00
md77       0.00    0.00  0.00  49.00    0.00  5555.67    226.76     0.00   0.00    0.00    0.00   0.00   0.00
md78       0.00    0.00  0.00  47.67    0.00  5553.33    233.01     0.00   0.00    0.00    0.00   0.00   0.00

Re: Rotating RAID 1

on 24.08.2011 04:28:31 by jeromepoulin

On Tue, Aug 23, 2011 at 12:05 AM, Jérôme Poulin wrote:
> On Mon, Aug 22, 2011 at 11:58 PM, NeilBrown wrote:
>> So you have 8 MD RAID1s, each with one missing device, and the other
>> device is the next RAID1 down in the stack, except that last RAID1,
>> where the one device is a real device.
>

More tests revealed nothing very consistent... however, there is
consistent performance degradation of our backups when using multiple
RAID devices; the backup runs every 2 hours and it is noticeably slower.

Here are the results of bonnie++, which only show degradation of the
per-char numbers, even though I know those are not really significant.
Rewrite was going down more and more until it went back up for no
reason; really weird, unexplainable results.
The first line is from the raw device, sdb2; then from the md device;
then incrementally more md devices in series.

               -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
GANAS0202 300M  5547 93  76052 54  26862 34   5948 99  80050 49  175.6  2
GANAS0202 300M  5455 92  72428 52  26787 35   5847 97  75833 49  166.3  2
GANAS0202 300M  5401 91  71860 52  27100 35   5820 97  79219 53  156.2  2
GANAS0202 300M  5315 90  71488 51  22472 30   5673 94  73707 51  162.5  2
GANAS0202 300M  5159 87  67984 50  22860 31   5642 94  78829 54  138.6  2
GANAS0202 300M  5033 85  67091 48  22189 30   5458 91  76586 55  149.3  2
GANAS0202 300M  4904 83  65626 47  24602 34   5425 91  72349 52  112.9  2
GANAS0202 300M  4854 82  66664 48  24937 35   5120 85  75008 56  149.1  2
GANAS0202 300M  4732 80  66429 48  25646 37   5296 88  75137 57  145.7  2
GANAS0202 300M  4246 71  69589 51  25112 36   5031 84  78260 61  136.2  2
GANAS0202 300M  4253 72  70190 52  27121 40   5194 87  77648 61  107.5  2
GANAS0202 300M  4112 69  76360 55  23852 35   4827 81  74005 59  118.9  2
GANAS0202 300M  3987 67  62689 47  22475 33   4971 83  74315 61   97.6  2
GANAS0202 300M  3912 66  69769 51  22221 33   4979 83  74631 62  114.9  2
GANAS0202 300M  3602 61  52773 38  25944 40   4953 83  77794 65  125.4  2
GANAS0202 300M  3580 60  58728 43  22855 35   4680 79  74244 64  155.2  3

Re: Rotating RAID 1

on 10.09.2011 00:28:56 by Bill Davidsen

Pavel Hofman wrote:
> On 16.8.2011 01:55, NeilBrown wrote:
>
>> Also, it doesn't have to be a linear stack. It could be a binary
>> tree, though that might take a little more care to construct.
>>
> Since our backup server, being a critical resource, needs redundancy
> itself, we run two degraded RAID1s in parallel, using two internal
> drives. Each of the two alternating external drives plugs into its
> corresponding bitmap-enabled RAID1.
>

I wonder if you could use a four-device raid1 here, two drives
permanently installed and two being added one at a time to the array.
That gives you internal redundancy and recent backups as well.
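
Something like this is what I mean (untested; device names made up):

mdadm -C /dev/md0 -l1 -n4 -b internal /dev/sda1 /dev/sdb1 missing missing
# sda1/sdb1 stay in the box; each backup drive is added into a free slot
mdadm /dev/md0 --add /dev/sdc1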

I'm still a bit puzzled by the idea of rsync being too much CPU
overhead, but I'll pass on that. The issue I have had with raid1 for
backup is that the data isn't always in a logically useful state when
you do a physical backup. You do things with scripts and hope you
always run the right one.

--
Bill Davidsen
We are not out of the woods yet, but we know the direction and have
taken the first step. The steps are many, but finite in number, and if
we persevere we will reach our destination. -me, 2010




Re: Rotating RAID 1

on 11.09.2011 21:21:27 by Pavel Hofman

On 10.9.2011 00:28, Bill Davidsen wrote:
> Pavel Hofman wrote:
>> On 16.8.2011 01:55, NeilBrown wrote:
>>
>>> Also, it doesn't have to be a linear stack. It could be a binary
>>> tree, though that might take a little more care to construct.
>>>
>> Since our backup server, being a critical resource, needs redundancy
>> itself, we run two degraded RAID1s in parallel, using two internal
>> drives. Each of the two alternating external drives plugs into its
>> corresponding bitmap-enabled RAID1.
>>
>
> I wonder if you could use a four-device raid1 here, two drives
> permanently installed and two being added one at a time to the array.
> That gives you internal redundancy and recent backups as well.

I am not sure you could employ the write-intent bitmap then. And the
bitmap makes the backup considerably faster.

>
> I'm still a bit puzzled by the idea of rsync being too much CPU
> overhead, but I'll pass on that. The issue I have had with raid1 for
> backup is that the data isn't always in a logically useful state when
> you do a physical backup. You do things with scripts and hope you
> always run the right one.


I am afraid I do not understand exactly what you mean :-) We have a few
scripts, but only one is started manually; the rest are run
automatically.

Pavel.

Re: Rotating RAID 1

on 12.09.2011 16:20:34 by Bill Davidsen

Pavel Hofman wrote:
> On 10.9.2011 00:28, Bill Davidsen wrote:
>
>> Pavel Hofman wrote:
>>
>>> On 16.8.2011 01:55, NeilBrown wrote:
>>>
>>>> Also, it doesn't have to be a linear stack. It could be a binary
>>>> tree, though that might take a little more care to construct.
>>>>
>>> Since our backup server, being a critical resource, needs redundancy
>>> itself, we run two degraded RAID1s in parallel, using two internal
>>> drives. Each of the two alternating external drives plugs into its
>>> corresponding bitmap-enabled RAID1.
>>>
>> I wonder if you could use a four-device raid1 here, two drives
>> permanently installed and two being added one at a time to the array.
>> That gives you internal redundancy and recent backups as well.
>>
> I am not sure you could employ the write-intent bitmap then. And the
> bitmap makes the backup considerably faster.
>

With --bitmap=internal you should have all of the information you need
to do a fast recovery, but I may misunderstand the internal bitmap and
possibly incremental build. What I proposed was creating the array as
dev1 dev2 dev3 missing; then dev3 or dev4 could be added and brought up
to date independently, because they would be separate devices.

--
Bill Davidsen
We are not out of the woods yet, but we know the direction and have
taken the first step. The steps are many, but finite in number, and if
we persevere we will reach our destination. -me, 2010


